NVMe VIP: Verification Features

I ended my last blog post with a more-or-less complete NVMe VIP test-case example, trying to show everything from basic setup to doing an NVM Write followed by a Read. We are going to change gears a bit here, moving from the NVMe commands to some of the VIP features that are available to assist in your testing.

Here’s where you can learn more about Synopsys VC Verification IP for NVMe and for PCIe.

The VIP View again!

Just to keep this fresh in your mind, we will continue to refer to this diagram:

[Figure: NVMe VIP block diagram]

As we mentioned earlier, the NVMe VIP provides a rich set of features to assist in testing.

Background Traffic

You’ll note in the diagram above a few applications living above the PCIe Port Model (Requester, Target/Cmpltr and Driver). These are PCIe applications that you can use to source (and sink) PCIe traffic that is not specifically to/from NVMe. In particular:

  • Driver Application – If you want to generate various types of TLPs (e.g. CfgWr, IORd, MemWr), this application is your tool (see the sketch after this list). The various fields of the TLPs are configurable, and received completions (e.g. from a MemRd request) are checked for validity and correct data. You can also use this facility to configure or monitor your DUT as needed.
  • Target/Completer Application – If a remote endpoint (e.g. your controller DUT) sends (non-NVMe) traffic to this host VIP, the Target application will field that request, turn it around, and generate one or more completions (as appropriate and/or configured) back to the endpoint. Timing and packet-size control are available, as are several callbacks for detailed TLP modifications.
  • Requester Application – This application generates a steady load of TLPs toward a destination. It can be used to create background traffic or to place a load on the target. The traffic rate, sizes and types are all configurable.
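
As a taste of the Driver Application, here is a minimal sketch of a memory write/read-back sequence. Note that the instance and task names (pcie_driver_app.send_mem_write and friends) are illustrative assumptions, not the actual VIP API – consult the VIP documentation for the real calls.

  // Hypothetical sketch: pcie_driver_app and its tasks are stand-ins
  // for the real VIP Driver Application API.
  bit [31:0] wr_data [4] = '{32'hA5A5_0000, 32'hA5A5_0001,
                             32'hA5A5_0002, 32'hA5A5_0003};
  bit [31:0] rd_data [4];

  initial begin
    // Generate a MemWr TLP to a DUT memory address.
    pcie_driver_app.send_mem_write(.addr(64'h1000), .data(wr_data));

    // Read it back; the Driver application checks the returned
    // completion for validity and correct data (as described above).
    pcie_driver_app.send_mem_read(.addr(64'h1000), .data(rd_data));
  end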

Error Injections

One important and useful feature of the VIP is built-in error injection. Rather than having to use callbacks and directed testing to cause errors, the NVMe VIP provides a simple – yet very powerful – mechanism for injecting them. For each “Script…” task available to the user (see the previous posts for details), there is an “Error Injection” argument. This argument can be filled in with various parameters to cause particular errors to be injected for that NVMe command. The error injections that are valid for a command are governed by the potential error conditions defined for it in the NVMe specification.

For example, examining the spec for the “Create I/O Submission Queue” command shows us several errors that can result from that command, such as “Completion Queue Invalid”, “Invalid Queue Identifier” and “Maximum Queue Size Exceeded”. Rather than create directed tests to cause these, you only need to provide the corresponding Error Injection code, and several things occur:

  • The VIP will look up the appropriate values to generate in order to cause the error.
  • Those values will be placed in the appropriate data structure (e.g. submission queue entry).
  • When the error is received, we automatically suppress any warning that would otherwise have been raised (this is an expected error, after all).
  • If the expected error does not arrive, it will be flagged.
  • The system is then ready to (if desired) re-run the command without the Error Injection.

No further work is needed by the user to test the error – no callbacks need to be set up, no errors need be suppressed. All is handled cleanly and transparently.
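
To make that concrete, here is a sketch of what this might look like in a test. The task name follows the “Script…” convention mentioned above, but its exact spelling, its arguments, and the err_inject values are assumptions for illustration – check the VIP documentation for the actual names.

  initial begin
    // Hypothetical names throughout; only the “Script…” prefix is real.
    // Ask the VIP to provoke an "Invalid Queue Identifier" error from
    // the controller while creating an I/O submission queue.
    ScriptCreateIOSubmissionQueue(.sq_id(1), .cq_id(1), .queue_size(64),
                                  .err_inject(ERR_INJ_INVALID_QUEUE_ID));

    // The VIP generates the bad queue ID, expects the error status,
    // suppresses the usual warning, and flags the error if it never
    // arrives.

    // Re-run the same command cleanly, without the injection.
    ScriptCreateIOSubmissionQueue(.sq_id(1), .cq_id(1), .queue_size(64),
                                  .err_inject(ERR_INJ_NONE));
  end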

In addition to injecting errors at the NVMe layer, you can also provide a protocol error injection. For example, to cause an LCRC error at the PCIe Data Link layer, the same procedure is used: simply add the error injection parameter for the LCRC error, and the VIP will inject it, check that it is detected, retry, and re-check the transaction. All of this occurs without any user assistance.
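
The usage pattern might look identical at the PCIe layer – again, the task and parameter names here are assumptions:

  initial begin
    // Hypothetical names: inject an LCRC error on a TLP carrying this
    // NVM Write; the VIP checks detection, the retry, and the re-check.
    ScriptWrite(.nsid(1), .slba(0), .nlb(8),
                .err_inject(ERR_INJ_PCIE_LCRC));
  end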

Queue Fencing

When queues are created in host memory, there is the possibility that the controller will generate an errant memory request and illegally access them. Such accesses are caught and flagged by the host’s queue-fencing mechanism. The host knows which operations (i.e. read or write) and which addresses are valid for the controller to access, and vigilantly watches the controller’s accesses to make sure it doesn’t attempt to (for example) read from a completion queue or write to a submission queue. Queue and queue-entry boundaries are similarly checked for validity.
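
Conceptually, the fencing check reduces to something like the sketch below. This is our own illustration of the idea, not VIP code:

  // Illustrative model of queue fencing, not actual VIP internals.
  typedef enum { ACC_READ, ACC_WRITE } access_e;

  typedef struct {
    bit [63:0]   base;   // queue base address in host memory
    int unsigned size;   // queue size in bytes
    bit          is_sq;  // 1: submission queue, 0: completion queue
  } queue_region_t;

  queue_region_t regions[$];  // filled in as queues are created

  // Return 1 if a controller access to a known queue region is legal.
  function bit fence_check(access_e acc, bit [63:0] addr, int unsigned len);
    foreach (regions[i]) begin
      if (addr >= regions[i].base &&
          addr + len <= regions[i].base + regions[i].size) begin
        // Controllers may only read SQs and only write CQs.
        return regions[i].is_sq ? (acc == ACC_READ) : (acc == ACC_WRITE);
      end
    end
    return 1;  // not a queue access; other checks apply
  endfunction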

Shadow Disk

Built into the host VIP is a shadow disk, which tracks and records block-data writes to the various controllers’ namespaces. Once a valid write occurs, it is committed to the shadow; later read accesses are compared against the shadow data. Although the VIP user certainly has the actual read/write data available to them, there’s no need for them to do data comparison/checking – the NVMe host VIP takes care of this silently and automatically.
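
A reasonable mental model of the shadow disk (again, our own sketch rather than the VIP’s internals) is an associative array keyed by namespace and LBA – committed on each successful write, consulted on each read completion:

  // Illustrative shadow-disk model, not actual VIP internals.
  class shadow_disk;
    // blocks[nsid][lba] holds the last committed data for that block.
    local bit [7:0] blocks [int][longint][];

    function void commit_write(int nsid, longint lba, bit [7:0] data[]);
      blocks[nsid][lba] = data;
    endfunction

    // Return 1 if read data matches the shadow (or nothing is shadowed yet).
    function bit check_read(int nsid, longint lba, bit [7:0] data[]);
      if (!blocks.exists(nsid) || !blocks[nsid].exists(lba))
        return 1;
      return (blocks[nsid][lba] == data);
    endfunction
  endclass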

Controller Configuration Tracking

Similar to the Shadow Disk, the host also keeps track of the configuration of the controller(s) that are attached to the system. There are several pieces to this:

  • Register Tracking – When a controller NVMe register is written to, the host “snoops” the write and stores it in a local “register shadow” (a sketch follows this list). Further actions by the VIP can consult this to make sure operations are valid and/or reasonable for the current state of the controller.
  • Identify Tracking – As we saw in our examples (in the last couple episodes), the NVMe protocol has us do both “Identify Controller” and “Identify Namespace” commands to gather controller information. Relevant pieces of this information are also saved for use by the VIP.
  • Feature Tracking – The “Set Features” command is used to configure various elements of the controller – we watch and collect both “Set Features” and “Get Features” command information (as necessary) to complete the host VIP’s understanding of the controllers’ current configuration and status.
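
For the register shadow specifically, the snooping amounts to something like this sketch (our own illustration, with made-up names):

  // Illustrative register shadow, not actual VIP internals.
  bit [63:0] reg_shadow [bit [31:0]];  // register offset -> last value

  // Called when the host snoops a write to a controller NVMe register.
  function void snoop_reg_write(bit [31:0] offset, bit [63:0] value);
    reg_shadow[offset] = value;
  endfunction

  // Later checks consult the shadow, e.g. “is CC.EN currently set?”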

See You Again Soon

Hopefully that provided a useful overview of the capabilities that allow the VIP to help you in your testing. More is in store for the new year ahead – if you have any suggestions or feedback, we’d love to hear it.

Thanks again for reading and responding – Happy New Year to you all!

Authored by Eric Peterson
