
Synopsys NVMe VIP Architecture: The Host Protocol Layers

Our previous post on NVMe was an overview of the NVMe protocol. We will now take a closer look at the VIP proper, starting with the NVMe Host Protocol layers. This provides an introductory overview of how commands are sent to the NVMe Controller.

Here’s where you can learn more about Synopsys’ VC Verification IP for NVMe and for PCIe.

A High-Level View of the VIP

There are several major blocks in the VIP, as shown here:

[Figure: NVMe VIP block diagram]

The NVMe VIP Host Methodology Layers

The UVM Methodology Interface – this allows users and their test-cases to control, monitor and request commands of the NVMe Host VIP via the transaction-oriented UVM methodology. Sequencer and Register models provide access to the VIP.

The NVMe VIP Host Protocol Layers

This implements the NVMe Host-side protocol – everything from creating data structures (e.g. queues, PRPs and SGLs) in Host Memory, to pushing requests into queues, to ringing doorbells, fielding interrupts and popping completion queues.

The NVMe Controller VIP

This is the NVMe Controller Model – it responds to the register accesses sent to it (including reads/writes of the various configuration and control registers), handles doorbells, reads and writes Host Memory (to access queues and data), and essentially implements the Controller side of the NVMe specification.

In this post, we will be concentrating on the NVMe VIP Host Protocol Layers (in the above figure, this is the area surrounded by the dashed line).

Layers of the NVMe VIP Host

Although the Host Protocol Layer is shown in the diagram above as part of a larger VIP system, it can also be considered a standalone VIP in its own right. We will, in fact, describe the various layers of the VIP in this way, hiding the UVM methodology, the PCIe protocol and the NVMe Controller as much as possible.

A quick review of the NVMe protocol will help us explain the use of the various layers; we’ll go over some examples of NVMe configuration, control and commands, emphasizing those layers that are involved. Here are the layers that we’ll be discussing:

[Figure: NVMe VIP Host layers]

There are only three layers we will be dealing with directly, but to start out, we will use a trivial NVMe test-case to help us explain the function of the various layers. The VIP Host Layer has a simple-to-use Verilog-based command interface – the various NVMe configuration operations and commands are mapped to Verilog tasks/functions to implement the NVMe Host. Note that although the command interface is easy to use, under the covers this is a full NVMe application and driver layer that handles much of the protocol leg-work where a “BFM” would be out of its league.

Here’s our trivial test-case (as a simplification, we are not showing some of the task arguments or checking error status – our plan here is to describe the VIP functionality in terms of the test-case commands). On with it…

// We will assume that the PCIe stack is set up and running.
// Handles, status and queue depth (exact types come from the VIP package;
// the declarations below are illustrative).
int unsigned ctlr_id;                      // controller handle returned by the VIP
int unsigned status;                       // status returned by each task call
int unsigned num_q_entries = 16;           // example Admin queue depth
bit [63:0] base_addr = 64'h0001_0000;      // Ctlr NVMe BAR base addr
// Tell the host where the controller has its base address
AllocateControllerID(base_addr, ctlr_id, status);
// Create the Admin Completion and Submission Queues
ScriptCreateAdminCplQ(ctlr_id, num_q_entries, status);
ScriptCreateAdminSubQ(ctlr_id, num_q_entries, status);
// Send an Identify Controller Command
data_buf_t #(dword_t) identify_buffer;     // identify data (1024 dwords = 4KB)
identify_buffer = new(1024);
ScriptIdentify(ctlr_id, 0, identify_buffer, status);

OK, enough for now. A few comments on the above test-case: these are the actual tasks to call to accomplish the various configuration steps and commands (minus a few arguments, as mentioned above). The tasks that start with the word Script are actual NVMe commands; if they don’t start with Script, they are VIP configuration utilities (e.g. AllocateControllerID()).

All these commands are implemented at the NVMe Command Layer noted above (denoted in red in the figures) – this is the Verilog Interface Command Layer.

We start with the AllocateControllerID(base_addr, ctlr_id, status) task call. This generates a request to the NVMe Queueing Layer to build us a data structure that keeps track of our attached controller(s). The returned ctlr_id is used as a “handle” for any communication with that controller. You will note that later NVMe commands (prefixed by Script…) use the ctlr_id to determine the destination of the command. One can call AllocateControllerID() for as many controllers as one wants to access; a unique handle will be returned for each.
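
As a minimal sketch of that multi-controller usage (the second BAR base address and the variable names below are illustrative assumptions, not values from the post), allocating two handles could look like this:

// Two controllers, each located by its own (assumed) NVMe BAR base address.
bit [63:0] base_addr0 = 64'h0001_0000;     // first controller's BAR base (from above)
bit [63:0] base_addr1 = 64'h0002_0000;     // assumed base for a second controller
int unsigned ctlr_id0, ctlr_id1;           // one unique handle per controller

AllocateControllerID(base_addr0, ctlr_id0, status);   // handle for controller 0
AllocateControllerID(base_addr1, ctlr_id1, status);   // distinct handle for controller 1

// Subsequent Script* commands are steered by whichever handle they are given, e.g.:
// ScriptCreateAdminCplQ(ctlr_id1, num_q_entries, status);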

Once we have the handle to communicate with the Controller, we can then use it – we call ScriptCreateAdminCplQ(ctlr_id, num_q_entries, status) to do several things for us (see diagram below):

  • In the NVMe Queuing Layer, we allocate some memory from the pool of NVMe memory and create a data structure in Host Memory: a Completion Queue of the requested depth.
  • The register transactions for the appropriate (ACQS and ACQB) registers are built. (Note that Admin Queues are built by writing to the Controller’s NVMe register space).
  • The registers are written. This is done by creating appropriate PCIe MemWr TLP transactions in the NVMe Protocol Interface Layer which are then sent to the PCIe Transaction Layer to cause a write to the appropriate register(s) on the controller.

[Figure: ScriptCreateAdminCplQ flow through the VIP layers]

The Admin Submission Queue is created analogously with ScriptCreateAdminSubQ(). Note that the host-side memory management is done for you, as is the management of the associated queue head and tail pointers. In addition, the VIP checks the memory accesses to those queues to make sure they follow the spec (e.g. a submission queue should not be written to by the controller).
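
Putting the two queue-creation calls together with simple pass/fail checking might look like the sketch below (an assumption for illustration: the actual status encoding is defined by the VIP, so the zero-means-success check here is only a placeholder):

// Create the Admin queues and check each returned status (assumed: 0 == success).
ScriptCreateAdminCplQ(ctlr_id, num_q_entries, status);    // completion queue first
if (status != 0) $error("Admin Completion Queue creation failed (status=%0d)", status);

ScriptCreateAdminSubQ(ctlr_id, num_q_entries, status);    // then the submission queue
if (status != 0) $error("Admin Submission Queue creation failed (status=%0d)", status);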

Once the Admin Queues have been built, we can use them to communicate admin NVMe commands to the controller. In this case, we will call the NVMe Identify Controller command, used to gather detailed information about the controller. The ScriptIdentify() task (see figure below) is used both for Identify Controller and (if the first argument is non-zero) for Identify Namespace. Since the Identify commands return a 4KB (1024 dword) buffer of identify information, we allocate that buffer prior to calling the task.

Since the Identify command requires a memory buffer to exist in host memory (to hold the contents of the Identify data), that buffer is allocated in host memory and referenced by the submitted command. Once the controller receives the command (by reading the Admin Submission Queue), it executes the Identify command and uses the underlying (PCIe) transport to move the data from the controller to the host.
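
For instance, reusing the same task for an Identify Namespace command might look like the sketch below (assumptions for illustration: the argument shown as 0 in the test-case above carries the namespace ID, and namespace 1 exists on the controller):

// Identify Namespace sketch: a non-zero namespace argument selects Identify Namespace.
data_buf_t #(dword_t) ns_identify_buffer;   // holds the 4KB Identify Namespace data
ns_identify_buffer = new(1024);             // 1024 dwords = 4KB
ScriptIdentify(ctlr_id, 1, ns_identify_buffer, status);   // namespace 1 (illustrative)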

[Figure: ScriptIdentify flow through the VIP layers]

Once the command has completed and the host has retrieved the completion queue entry (and verified the status), the host can copy the buffer data from host memory to the identify_buffer provided by the user. Note that the VIP takes care of building all the appropriate data structures and also generates (and responds to) the associated transactions to execute the command, all the while monitoring the protocol.
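
As a small consumer-side example, a test might then pull a field out of the returned Identify Controller data; the NVMe specification places the PCI Vendor ID (VID) in the first two bytes of that structure. The dword accessor below is hypothetical, since the exact data_buf_t API is not shown in this post:

// Assumption: identify_buffer allows dword-indexed reads (accessor name is hypothetical).
dword_t id_dword0;
id_dword0 = identify_buffer.get(0);          // first dword of the Identify data
$display("Identify Controller: PCI Vendor ID = 0x%04h", id_dword0[15:0]);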

Summary

We’ve gone over the basic layers and architecture of the Synopsys NVMe VIP, and you should now have an idea of how NVMe commands are sent to the controller via those layers. More detail follows in upcoming episodes, including more VIP details and features, more advanced topics in the NVMe protocol and the use of other Verification Methodologies (such as UVM) to configure, control, monitor and submit commands with the VIP.

Thanks again for browsing, see you next time!

Authored by Eric Peterson

Here’s where you can learn more about Synopsys’ VC Verification IP for NVMe and for PCIe.