I ended my last blog post with a more-or-less complete NVMe VIP test-case example, trying to show everything from basic setup to doing an NVM Write followed by a Read. We are going to change gears a bit here, moving from the NVMe commands to some of the VIP features that are available to assist in your testing.
Just to keep this fresh in your mind, we will continue to refer to this diagram:
As we mentioned earlier, the NVMe VIP provides a rich set of features to assist in testing.
You’ll note in the diagram above a couple of applications living above the PCIe Port Model (Requester, Target/Cmpltr and Driver). These are PCIe Applications that you can use to source (and sink) PCIe traffic that is not specifically to/from NVMe. In particular:
One important and useful feature of the VIP is built-in error injection. Rather than having to use callbacks and directed testing to cause errors, the NVMe VIP provides a simple – yet very powerful – mechanism for injecting them. Each “Script…” task available to the user (see the previous posts for details) has an “Error Injection” argument. This argument can be filled in with various parameters to cause particular errors to be injected for that NVMe command. The error injections that are valid for a command are governed by the potential error conditions for that command (per the NVMe specification).
For example, examining the spec for the “Create I/O Submission Queue” command shows us several errors that can result from that command, such as “Completion Queue Invalid”, “Invalid Queue Identifier” and “Maximum Queue Size Exceeded”. Rather than creating directed tests to cause these, you need only provide the corresponding Error Injection code, and several things occur:
No further work is needed by the user to test the error – no callbacks need to be setup, no errors need be suppressed. All is handled cleanly and transparently.
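To make the idea concrete, here is a small Python sketch of how such a mechanism can work. The names (`script_create_io_sq`, the `Status` values, the mock controller) are our own illustration of the concept, not the VIP’s actual SystemVerilog API or the spec’s numeric status codes: the error-injection argument corrupts exactly one command field, and the checker then expects the injected status instead of success.

```python
# Illustrative model of command-level error injection (hypothetical
# names; not the actual NVMe VIP API).
from enum import Enum, auto

class Status(Enum):
    SUCCESS = auto()
    COMPLETION_QUEUE_INVALID = auto()
    INVALID_QUEUE_IDENTIFIER = auto()
    MAX_QUEUE_SIZE_EXCEEDED = auto()

def controller_create_io_sq(qid, size, cqid, existing_cqs, max_qid, max_entries):
    """Mock controller: validates the command roughly as the spec describes."""
    if cqid not in existing_cqs:
        return Status.COMPLETION_QUEUE_INVALID
    if qid == 0 or qid > max_qid:
        return Status.INVALID_QUEUE_IDENTIFIER
    if size > max_entries:
        return Status.MAX_QUEUE_SIZE_EXCEEDED
    return Status.SUCCESS

def script_create_io_sq(qid, size, cqid, err_inject=None):
    """If an error injection is requested, corrupt the one command field
    that triggers it, then check the completion against the *expected*
    (injected) status rather than SUCCESS."""
    if err_inject is Status.COMPLETION_QUEUE_INVALID:
        cqid = 0xFFFF          # reference a CQ that was never created
    elif err_inject is Status.INVALID_QUEUE_IDENTIFIER:
        qid = 0xFFFF           # out-of-range queue identifier
    elif err_inject is Status.MAX_QUEUE_SIZE_EXCEEDED:
        size = 1 << 20         # far beyond the supported queue size
    expected = err_inject or Status.SUCCESS
    actual = controller_create_io_sq(qid, size, cqid,
                                     existing_cqs={1}, max_qid=15,
                                     max_entries=4096)
    assert actual is expected, f"expected {expected}, got {actual}"
    return actual

# No injection: the command succeeds. With an injection: the error is
# produced, detected, and accepted as the expected outcome.
print(script_create_io_sq(qid=1, size=64, cqid=1).name)                  # SUCCESS
print(script_create_io_sq(qid=1, size=64, cqid=1,
      err_inject=Status.INVALID_QUEUE_IDENTIFIER).name)  # INVALID_QUEUE_IDENTIFIER
```

The point of the pattern is the last assertion: because the checker knows which error was injected, a “failing” completion is the passing result, with no test-bench cleanup required.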
In addition to injecting errors at the NVMe layer, you can also apply a protocol-level error injection. For example, to cause an LCRC error at the PCIe Data Link Layer, the same procedure is used: simply add the error injection parameter for the LCRC, and the VIP will inject the error, check that it is detected, retry, and re-check the transaction. All of this occurs without any user assistance.
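The inject/detect/retry sequence can be sketched with a toy model. This is a deliberate simplification (real PCIe uses a 32-bit LCRC over the TLP plus sequence number, with ACK/NAK DLLPs and a replay buffer); the sketch just shows why exactly one retry is expected when a single LCRC error is injected:

```python
# Toy model of the DL-layer retry flow that a deliberate LCRC error
# exercises (greatly simplified; illustrative only).
import zlib

def send_tlp(payload, inject_lcrc_error=False):
    crc = zlib.crc32(payload)
    if inject_lcrc_error:
        crc ^= 0x1  # flip a bit so the receiver's CRC check fails once
    return payload, crc

def receive_tlp(payload, crc):
    # True models an ACK DLLP, False models a NAK (triggering replay).
    return zlib.crc32(payload) == crc

attempts, delivered, inject = 0, False, True
while not delivered:
    attempts += 1
    payload, crc = send_tlp(b"tlp-data", inject_lcrc_error=inject)
    delivered = receive_tlp(payload, crc)
    inject = False  # the replayed TLP goes out clean

print(attempts)  # 2: the first try is NAKed, the replay succeeds
```

A checker watching this exchange would flag anything other than exactly one NAK followed by a clean replay, which is the “check, retry and re-check” behavior described above.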
When queues are created in host memory, there is the possibility that the controller will generate an errant memory request and illegally access the queues. These accesses are caught and flagged by the host’s queue fencing mechanism. The host knows which operations (i.e. read or write) and which addresses are valid for the controller to access, and vigilantly watches the controller’s accesses to make sure it doesn’t attempt to (for example) read from a completion queue or write to a submission queue. Queue and queue entry boundaries are similarly checked for validity.
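The heart of queue fencing is a small permission table. Here is a minimal sketch of the idea (class and method names are ours, purely for illustration): each fenced region records the one operation the controller is allowed to perform there, so a write to a submission queue or a read from a completion queue is immediately flagged.

```python
# Hypothetical sketch of queue fencing (illustrative names, not the VIP API).
class QueueFence:
    def __init__(self):
        self.regions = []  # list of (base, size, allowed_op) triples

    def allow(self, base, size, op):
        """Register a region the controller may access with one operation:
        a submission queue is 'read' (the controller fetches entries);
        a completion queue is 'write' (the controller posts entries)."""
        self.regions.append((base, size, op))

    def check(self, addr, op):
        """Return True if the access is legal, False if it violates a fence."""
        for base, size, allowed in self.regions:
            if base <= addr < base + size:
                return op == allowed  # wrong op on a fenced region -> flag
        return True  # outside all fenced regions: not a queue violation

fence = QueueFence()
fence.allow(0x1000, 0x400, "read")   # submission queue: controller reads only
fence.allow(0x2000, 0x100, "write")  # completion queue: controller writes only

print(fence.check(0x1010, "read"))   # True  - legal SQ entry fetch
print(fence.check(0x1010, "write"))  # False - controller writing an SQ
print(fence.check(0x2010, "read"))   # False - controller reading a CQ
```

The real mechanism also validates queue and entry boundaries, as noted above; this sketch only captures the operation/address part of the check.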
Built into the host VIP is a shadow disk, which tracks and records block-data writes to the various controllers’ namespaces. Once a valid write occurs, it is committed to the shadow; later read accesses are compared against the shadow data. Although the VIP user certainly has the actual read/write data available to them, there’s no need for them to do data comparison/checking – the NVMe host VIP takes care of this silently and automatically.
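Conceptually, a shadow disk is just a per-namespace map from LBA to the last data committed there. The sketch below (our own illustrative model, not the VIP’s implementation) shows the two halves: commit on a valid write, compare on a later read, with never-written blocks left unchecked since their contents are undefined.

```python
# Illustrative shadow-disk model (hypothetical names, not the VIP API).
class ShadowDisk:
    def __init__(self):
        self.blocks = {}  # (namespace_id, lba) -> committed block data

    def commit_write(self, nsid, lba, data_blocks):
        """Record each written block once the write completes successfully."""
        for i, blk in enumerate(data_blocks):
            self.blocks[(nsid, lba + i)] = blk

    def check_read(self, nsid, lba, data_blocks):
        """Compare read data against the shadow; True means it matches.
        Blocks that were never written are skipped (contents undefined)."""
        for i, blk in enumerate(data_blocks):
            expected = self.blocks.get((nsid, lba + i))
            if expected is not None and expected != blk:
                return False
        return True

shadow = ShadowDisk()
shadow.commit_write(nsid=1, lba=100, data_blocks=[b"AAAA", b"BBBB"])

print(shadow.check_read(1, 100, [b"AAAA", b"BBBB"]))  # True  - data matches
print(shadow.check_read(1, 101, [b"XXXX"]))           # False - miscompare flagged
```

In the VIP this bookkeeping happens behind every Script… read/write task, which is why the user never has to wire up their own scoreboard.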
Similar to the Shadow Disk, the host also keeps track of the configuration of the controller(s) that are attached to the system. There are several pieces to this:
Hopefully that provided a useful overview of the capabilities that allow the VIP to help you in your testing. More is in store for the new year ahead – if you have any suggestions or feedback, we’d love to hear it.
Thanks again for reading and responding – Happy New Year to you all!
Authored by Eric Peterson