
NVMe VIP: Verification Features

Posted by VIP Experts on January 12th, 2016

I ended my last blog post with a more-or-less complete NVMe VIP test-case example, trying to show everything from basic setup to doing an NVM Write followed by a Read. We are going to change gears a bit here, moving from the NVMe commands to some of the VIP features that are available to assist in your testing.

Here’s where you can learn more about Synopsys VC Verification IP for NVMe and for PCIe.

The VIP View again!

Just to keep this fresh in your mind, we will continue to refer to this diagram:

[Figure: NVMe VIP block diagram]

As we mentioned earlier, the NVMe VIP provides a rich set of features to assist in testing.

Background Traffic

You’ll note in the diagram above a few applications living above the PCIe Port Model (Requester, Target/Cmpltr and Driver). These are PCIe applications that you can use to source (and sink) PCIe traffic that is not specifically to/from NVMe. In particular:

  • Driver Application – If you want to generate various types of TLPs (e.g. CfgWr, IORd, MemWr) this application is your tool. The various fields of the TLPs are configurable, and received completions (e.g. from a MemRd request) are checked for validity and correct data. You can also use this facility to configure or monitor your DUT as needed.
  • Target/Completer Application – If a remote endpoint (e.g. your controller DUT) sends (non-NVMe) traffic to this host VIP, the Target application will field that request, turn it around and generate one or more (as appropriate and/or configured) completions back to the endpoint. Timing and packet size control are available as are several callbacks for detailed TLP modifications.
  • Requester Application – This application generates a constant load of TLPs to the destination. It can be used to create background traffic, or cause a load on the target. The traffic rate, size and types are all configurable.

Error Injections

One important and useful feature of the VIP is built-in error injections. Rather than have to use callbacks and directed testing to cause errors, the NVMe VIP provides a simple – yet very powerful – mechanism to cause errors to be injected. For each “Script…” task available to the user (see the previous posts for details), there is an “Error Injection” argument. This argument can be filled in with various parameters to cause particular error injections to occur for that NVMe command. The particular error injections that are valid for a command are governed by the potential error conditions (per the NVMe specification).

For example, examining the spec for the “Create I/O Submission Queue” command shows us several errors that can result from that command such as “Completion Queue Invalid”, “Invalid Queue Identifier” and “Maximum Queue Size Exceeded”. Rather than create directed tests to cause these, you only need to provide the analogous Error Injection code and several things occur:

  • The VIP will look-up the appropriate values to generate to cause the error.
  • Those values will be placed in the appropriate data structure (e.g. submission queue entry).
  • When the error is received, we automatically suppress any warning that may have otherwise been caused (this is an error, after all).
  • If the expected error does not arrive, it will be flagged.
  • The system is then ready to (if desired) re-run the command without the Error Injection.

No further work is needed by the user to test the error – no callbacks need to be set up, no errors need be suppressed. All is handled cleanly and transparently.

In addition to injecting errors at the NVMe layer, you can also request protocol-level error injection. For example, to cause an LCRC error at the PCIe data link layer, the same procedure is used: simply add the error injection parameter for the LCRC, and the VIP will inject the error, check that it is detected, retry the transaction and re-check it. All of this occurs without any user assistance.
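As an illustration, a test that provokes the “Invalid Queue Identifier” error on a Create I/O Submission Queue command might look roughly like the sketch below. Note that the error-injection argument and enumeration names used here (err_inject, ERR_INJ_INVALID_QUEUE_ID, ERR_INJ_NONE) are placeholders rather than the VIP’s actual identifiers, and the remaining task arguments are elided just as in the earlier examples.

// Sketch only: the error-injection names below are placeholders.
// Ask the VIP to create an I/O submission queue, injecting the
// "Invalid Queue Identifier" error condition into the command.
err_inject = ERR_INJ_INVALID_QUEUE_ID;
ScriptCreateIOSubQ(ctlr_id, num_q_entries, contig, cplq_id, …,
                   subq_id, …, err_inject, …, status);
// The VIP picks the illegal queue ID itself, expects the matching
// error status in the completion, suppresses the usual warning, and
// flags an error if the expected status never arrives.

// If desired, re-run the same command cleanly (no injection).
err_inject = ERR_INJ_NONE;
ScriptCreateIOSubQ(ctlr_id, num_q_entries, contig, cplq_id, …,
                   subq_id, …, err_inject, …, status);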

Queue Fencing

When queues are created in host memory, there is the possibility that the controller will generate an errant memory request and may illegally access the queues. These accesses are caught and flagged by the host’s queue fencing mechanism. The host has an understanding of what operation(s) (i.e. read or write) and what addresses are valid for the controller to access, and will vigilantly watch the controller’s accesses to make sure it doesn’t attempt to (for example) read from a completion queue or write to a submission queue. Queue and queue entry boundaries are similarly checked for validity.

Shadow Disk

Built into the host VIP is a shadow disk that tracks and records block-data writes to the various controllers’ namespaces. Once a valid write occurs, it is committed to the shadow; later read accesses are compared against the shadow data. Although the VIP user certainly has the actual read/write data available to them, there’s no need for them to do data comparison/checking: the NVMe host VIP takes care of this silently and automatically.

Controller Configuration Tracking

Similar to the Shadow Disk, the host also keeps track of the configuration of the controller(s) that are attached to the system. There are several pieces to this:

  • Register Tracking – When a controller NVMe register is written to, the host “snoops” this write and stores it in a local “register shadow”. Further actions by the VIP can consult this to make sure operations are valid and/or reasonable for the current state of the controller.
  • Identify Tracking – As we saw in our examples (in the last couple episodes), the NVMe protocol has us do both “Identify Controller” and “Identify Namespace” commands to gather controller information. Relevant pieces of this information are also saved for use by the VIP.
  • Feature Tracking – The “Set Features” command is used to configure various elements of the controller – we watch and collect both “Set” and “Get Features” command information (as necessary) to complete the host VIP’s understanding of the controllers’ current configuration and status.

See You Again Soon

Hopefully that provided a useful overview of the capabilities that allow the VIP to help you in your testing. More is in store for the new year ahead – if you have any suggestions or feedback, we’d love to hear it.

Thanks again for reading and responding – Happy New Year to you all!

Authored by Eric Peterson

Here’s where you can learn more about Synopsys VC Verification IP for NVMe and for PCIe.

Posted in Debug, NVMe, PCIe | No Comments »

USB Power Delivery Days: Meeting Verification Challenges

Posted by VIP Experts on January 5th, 2016

The Universal Serial Bus (USB) had its humble beginnings in the mid-1990s to standardize the connection of computer peripherals to PCs, both to communicate and to supply electric power. Today, it has become commonplace on a variety of devices and appliances, including smartphones, smart TVs, automobiles and video game consoles. USB has effectively replaced a variety of earlier interfaces, such as serial and parallel ports, as well as separate power chargers for portable devices, and now offers speeds up to 10 Gb/s (with USB 3.1).

The new USB Power Delivery (PD) specification takes it to the next level by delivering power ranging from 60W (3A @ 20V) to 100W (5A @ 20V) over varied cable profiles: from Type C unmarked cables to Type C electronically marked cables. This increase in supply capability adequately supports high-power consumption equipment, significantly reduces battery charging times and frees the system from AC adapters to achieve a more cable-free life.

If you are designing or verifying SoCs which incorporate USB Power Delivery, you really need to have a Verification IP (VIP) for PD that supports the richness of the specification, makes it easy to verify and debug, and supports you to resolve the verification challenges presented by PD support.

You can learn more about Synopsys VC VIP for USB Power Delivery here.


USB Power Delivery System Architecture

Verifying USB Power Delivery based Designs

Here we discuss some of the capabilities in USB PD, and the verification and debug challenges they present:

1) USB PD 2.0 supports Cable Plug Prime and Cable Plug Double Prime configurations in addition to the UFP and DFP.

Verification Challenges:

A DFP DUT should be able to communicate with the UFP, Cable Plug Prime and Cable Plug Double Prime using the required type of SOP packet framing, and maintain a separate protocol stack for each device or cable plug.

The DUT, if a UFP or a cable plug, should be able to accept and respond to the messages directed to it.

The Verification IP should be capable of acting as a DFP, UFP or a Cable Plug and communicating with the DUT.

2) The SOP pattern used in the packet framing is differentiated based on the receiver of the packet. DFP-to-UFP communication uses the SOP pattern, DFP to Cable Plug Prime uses the SOP’ pattern and DFP to Cable Plug Double Prime uses SOP’’.

Verification Challenge:

The DFP would need configurability for the types of SOP supported.

The VIP should be able to configure the type of SOP supported and respond accordingly. It should ignore the messages not directed to it.
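To make the mapping concrete, a testbench-side helper might encode it as in the sketch below. This is purely illustrative: the type and function names (pd_sop_t, pd_target_t, sop_for_target) are hypothetical and not part of the Synopsys VIP API; only the SOP*-to-target mapping itself comes from the specification.

// Hypothetical helper showing which SOP* framing a DFP uses for each target
typedef enum { SOP, SOP_PRIME, SOP_DOUBLE_PRIME } pd_sop_t;
typedef enum { TGT_UFP, TGT_CABLE_PLUG_PRIME, TGT_CABLE_PLUG_DBL_PRIME } pd_target_t;

function automatic pd_sop_t sop_for_target(pd_target_t tgt);
  case (tgt)
    TGT_UFP:              return SOP;               // DFP <-> UFP
    TGT_CABLE_PLUG_PRIME: return SOP_PRIME;         // DFP <-> Cable Plug Prime
    default:              return SOP_DOUBLE_PRIME;  // DFP <-> Cable Plug Double Prime
  endcase
endfunction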

3) Bus Idle and Collision Avoidance: For BMC signaling, if nTransitionCount transitions are not detected within tTransitionWindow, the bus is considered idle. To avoid packet collision on the bus, tInterFrameGap is defined. Inter-frame gap time specifies the minimum time the transmitter has to wait after the transmission of the last bit of a packet before starting the transmission again.

Verification Challenge:

The PHY Tx has to check for bus idle conditions before starting transmission. The transmitter also has to honor the inter-frame gap timing between two consecutive packets.
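One simple way to picture the inter-frame gap check: record the end time of each transmitted packet and compare the gap before the next transmission against tInterFrameGap. The sketch below is illustrative only; the class and method names are assumptions, and the actual timing value comes from the PD specification tables.

// Illustrative checker: flags a transmitter that starts a new packet
// before the minimum inter-frame gap has elapsed.
class pd_ifg_checker;
  realtime min_ifg;           // tInterFrameGap, taken from the PD spec
  realtime last_pkt_end = 0;

  function new(realtime min_ifg);
    this.min_ifg = min_ifg;
  endfunction

  // Call when the last bit of a packet has been transmitted.
  function void packet_ended();
    last_pkt_end = $realtime;
  endfunction

  // Call just before the first bit of the next packet is driven.
  function void packet_starting();
    if (last_pkt_end != 0 && ($realtime - last_pkt_end) < min_ifg)
      $error("Inter-frame gap violation: only %0t since previous packet",
             $realtime - last_pkt_end);
  endfunction
endclass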

4) Vendor Defined Messages (VDMs) allow vendors to exchange information that is not defined by the specification, and can be used for enabling alternate modes and for discovering cable capabilities.

Verification Challenge:

The DUT should be able to send or respond to the VDMs sent by the port partner and, if not supported, should be able to ignore them.

The VIP should be able to send the VDMs to the DFP/UFP or the Cable plug, and should be able to accept the VDMs when operating in the role of a port or a cable plug.

What Synopsys VC Verification IP for USB Power Delivery can do for you

Synopsys VC Verification IP for USB Power Delivery is designed to thoroughly verify the USB PD 1.1 and 2.0 specifications along with Type-C functionality. The USB PD VIP maps the PD System Architecture (as shown in the figure above) to one agent (a single protocol stack) with 3 layers. It also implements both Cable Plug capabilities (SOP’ and SOP’’) for verifying the PD stack. The VIP provides rich testbench and verification features, including protocol service, physical service, policy manager service, scale down mode for tiers, callbacks, exceptions and error injection capabilities, making it easy to code any test scenario, both valid and invalid.

Authored by Kavya Udatala, Santosh Moharana and Deepak Nagaria

You can learn more about Synopsys VC VIP for USB Power Delivery here.

Posted in Automotive, Mobile SoC, USB | No Comments »

Celebrating the Holiday Season with VIPs

Posted by VIP Experts on December 23rd, 2015


The Holiday Season is upon us. As you stand in lines and wait for packages to arrive, keep in mind that Synopsys continues to provide you the highest level of service: the support, available protocols and new titles that you, our current and future VIP customers, deserve. It has been a wonderful year, with many good memories to cherish, especially the availability and success of the Memory VIP (DRAM and Flash)! We wish you a happy holiday season with family and friends. Thank you for being a VIP this holiday season. Some of the highlights from this year are:

1) At ARM Techcon 2015, we announced the availability of our VC Verification IP for the new ARM AMBA 5 AHB5 interconnect. AHB5 supports the ARMv8-M architecture which drives security into the hardware layer to ensure developers have a fast and efficient way of protecting any embedded or Internet of Things (IoT) device.

2) First Ethernet 400G VIP to enable next generation of networking and communications SoCs, expanding our leadership in 25G/50G Ethernet.

3) Broad library of VIP titles available and successfully in use including USB 3.1, Type-C and Power Delivery, MIPI I3C, and the addition of Ethernet AVB and CAN/CAN-FD to the growing number of Automotive protocols.

4) Test Suite Source Code - that is correct: unencrypted source code is now available for various protocol titles.

5) Webinars and workshops at no charge all year long. We held hands-on VIP workshops on AMBA, Memory and PCIe Gen4 around the world. Stay tuned for more in 2016. If you missed the webinars, you can view several of them now.

We are focused on delivering best in class VIP solutions to you by aligning and collaborating with you on your current and future protocol requirements.

We are glad to have you as a customer and look forward to working with you in 2016. Wishing you a restful and joyful time.

Cheers and Happy Holidays,
The VIP Experts :)

Note: The Synopsys VIP library is a broad library of interface, bus and memory protocols. Based on our next-generation architecture and implemented in native SystemVerilog, Synopsys VIP offers native performance, native debug with Verdi, enhanced VIP ease of use, configurability, coverage and source code test suites. Synopsys test suites are complete, self-contained and design-proven testbenches. Written natively in SystemVerilog UVM, they are provided as source code to reduce or eliminate the challenge of developing a verification environment and tests for protocol-compliance verification. This enables you to easily customize or extend the environments to include unique application-specific tests or corner-case scenarios. These capabilities substantially increase your productivity for one of the most difficult and time-consuming aspects of SoC design and verification.

Posted in AMBA, Automotive, C-PHY, CAN, CSI, D-PHY, Data Center, DDR, DesignWare, DFI, Display, DSI, eMMC, Ethernet, Ethernet AVB, Flash, HBM, HDCP, HDMI, HMC, I3C, LPDDR, Memory, Methodology, MIPI, MPHY, NVMe, ONFi, PCIe, SATA, Soundwire, Storage, SystemVerilog, Test Suites, UFS, Unipro, USB | No Comments »

Debugging Memory Protocols with the Verdi Protocol Analyzer

Posted by VIP Experts on December 17th, 2015

Debug continues to be one of the biggest hurdles faced by design and verification engineers. While designing a system that requires close interaction with memories, engineers often rely on print statements or waveform viewers to decipher a signal’s behavior over time and its relationship to other signals. While this kind of ad-hoc debugging helps in understanding the behavior of a single signal, it does not work well when debugging protocols.

You can learn more about Synopsys Memory VIP here.

This blog post introduces the Synopsys debug platform, Verdi, for debugging memory protocols. Several key questions need to be answered during debug, such as:

  • Am I writing to the correct location?
  • Is my physical to logical address translation correct?
  • What type of operation is taking place at a specific memory location?
  • Is my protocol communication between the controller and the memory correct?

Translating waveforms in a complex protocol to commands is a tedious task. For instance, take a look at the DDR4 truth table for the command WRS4 below:

[Table: DDR4 command truth table for WRS4]

Here, we must interpret each of the address and control signals to realize that the command is indeed WRS4!

Using the Verdi Protocol Analyzer, we can see the WRS4 transaction directly and quickly view it relative to time. As illustrated in the figure below, the Verdi Protocol Analyzer encompasses all the necessary analysis tools for robust, complete, and efficient protocol-level debugging:

[Figure: Verdi Protocol Analyzer memory debug analysis]

In subsequent blog posts, I will further discuss how Verdi Protocol Analyzer can be used to debug memory protocols easily and quickly.

Authored by Nasib Naser

You can learn more about Synopsys Memory VIP here.

Posted in DDR, Debug, Memory | Comments Off

PCIe Spread Spectrum Clocking (SSC) for Verification Engineers

Posted by VIP Experts on December 15th, 2015

Many of us who work primarily in digital verification and design are shielded from physical layer details. Only a handful of specialists closely follow these details. So for the rest of us, verifying and debugging Spread Spectrum Clocking (SSC) can be a daunting task.

This blog post is a quick Q&A to give you a jump start in understanding some of the complexities of PCI Express (PCIe) Spread spectrum clocking (SSC) techniques.

Here’s where you can learn more about Synopsys VC Verification IP for Gen 4 ready PCIe and PCIe Test Suites.

What is Spread Spectrum Clocking? Why is it used? 

Spread spectrum clocking is the process by which the system clock is dithered in a controlled manner to reduce its peak energy content. SSC techniques are used to minimize Electromagnetic Interference (EMI) and/or to pass Federal Communications Commission (FCC) requirements.

If you transform a clock signal to the frequency domain, you will spot a high-energy spike at the clock frequency (the non-spread blue spike at 3GHz in Figure 1 below). Spread spectrum is a way to distribute this spike over a band of frequencies to reduce the peak power at the signal frequency (the red-colored spread in Figure 1).


Figure 1: Spectral Amplitude Reduction of 3GHz Clock with Spread Spectrum Clocking

You can learn more about SSC from Synopsys Fellow John Stonick in this short YouTube video.

How is Spread Spectrum Clocking achieved?

Spread spectrum clocking uses modulation to achieve the spread of the spectral power. The carrier signal, which is typically a high-frequency clock, is frequency-modulated with a low-frequency modulating signal. While the overall energy is unchanged, the peak power is reduced. The amount of peak energy dispersion depends on the modulation bandwidth, spreading depth and spreading profile.

The resulting SSC-modulated carrier signal ends up with much higher jitter than the unmodulated carrier signal.

The most common modulation techniques are down-spread and center-spread:

  1. Down-spread: The carrier is modulated below the nominal frequency by a specified percentage, and never above it
  2. Center-spread: The carrier is modulated both above and below the nominal frequency by a specified percentage

Figure 2 below shows a 3 GHz carrier clock signal, down-spread by 0.5% using a 30 kHz triangular wave. On the Y-axis, you can see the carrier frequency rise and fall. All spread carrier frequency values remain below 3 GHz.


Figure 2: 3 GHz carrier signal frequency variation with 0.5% down-spread SSC clocking
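To put numbers on the profile in Figure 2 (a quick worked example using the values above):

  nominal carrier frequency:    3 GHz
  down-spread depth (0.5%):     3 GHz x 0.005 = 15 MHz
  minimum carrier frequency:    3 GHz - 15 MHz = 2.985 GHz
  modulation period (30 kHz):   1 / 30 kHz ≈ 33.3 us

So over each ~33.3 us modulation cycle the carrier ramps from 3 GHz down to 2.985 GHz and back, never exceeding the nominal 3 GHz.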

What are different clocking architectures supported by PCIe? Do all of them support SSC?

There are three different types of clocking architectures supported by PCIe:

  1. Common Reference Clock (Common Refclk)
  2. Data Clocked
  3. Separate Reference Clock (Separate Refclk)

Common Refclk is the most widely supported architecture among commercially available devices. However, the same clock source must be distributed to every PCIe device while keeping the clock-to-clock skew to less than 12 ns between devices. This can be a problem with large circuit boards or when crossing a backplane connector to another circuit board.

If a low-skew configuration isn’t workable, such as in a long cable implementation, the Separate Refclk architecture, with independent clocks at each end, can be used. However, the Gen 2.0 base specification did not allow SSC with the Separate Refclk implementation. It was enabled only through the Separate Refclk Independent SSC (SRIS) Architecture ECN in 2013, which became part of the 3.1 base specification released in November 2013.

The Data Clocked Refclk architecture is the simplest, as it requires only one clock source, at the transmitter. The receiver extracts and syncs to the clock embedded in the transmitted data. Data-clocked architecture was introduced when the PCIe 2.0 standard was released in 2007.

You can learn more about clocking architectures here.

To learn more about SRIS, here is yet another insightful short video from Synopsys Fellow John Stonick.

Is SSC supported at all speeds?

Yes. All four speeds, 2.5 GT/s (Gen1), 5 GT/s (Gen2), 8 GT/s (Gen3) and 16 GT/s (Gen4), can support SSC. The same spread spectrum clocking parameters apply to all four speeds.


Figure 3: Snapshot of Refclk Parameters from Gen 4 PCIe Base Specification (Source: PCI-SIG)

Some key parameters we need to note in the above table:

  • FREFCLK: The Refclk frequency can have +/-300 PPM variation. For the separately clocked architecture, a worst-case offset of 600 PPM (two independent Refclks, each off by 300 PPM in opposite directions) has to be tolerated by the receiver.
  • FSSC: This is the frequency of the modulating wave, which is typically triangular.
  • TSS-FREQ-DEVIATION: This indicates that PCIe uses down-spread SSC; the spread reduces the carrier frequency by up to 0.5%, which amounts to an additional 5000 PPM of jitter. So the total jitter with separate clocking and spread spectrum enabled would be 5600 PPM (see the worked sum below).
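Putting the jitter budget together for the SRIS case (a simple worked sum of the numbers above):

  Refclk accuracy per device:              +/-300 PPM
  Worst-case offset between two Refclks:   300 + 300 = 600 PPM
  Down-spread SSC depth (0.5%):            5000 PPM
  Total to be tolerated by the receiver:   600 + 5000 = 5600 PPM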

What is the value provided by verification for spread spectrum clocking?

From a design under test (DUT) point of view, the primary value is in verifying that the receiver’s clock data recovery can handle the large variation in jitter (up to 5600 PPM), especially in SRIS mode.

How can I visually verify if SSC is really happening?

There are multiple ways. The simplest is to visualize the “clock period” signal, typically a floating-point data type (real in SystemVerilog), as an analog signal in your waveform viewer, if it is accessible.

If it’s not accessible, use a simple monitor to collect the timestamp and period of the Refclk (or of the internally generated transmit bit clock running at line speed) for at least 33 us, assuming you are using 30 kHz modulation. Plot the timestamp on the X-axis and the clock period on the Y-axis; you should see a profile matching the one shown in Figure 2 (a sketch of such a monitor follows below).
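Here is a minimal sketch of such a monitor in SystemVerilog. It measures successive Refclk periods and dumps (timestamp, period) pairs to a CSV file for plotting; the module and signal names are placeholders, and it assumes the Refclk (or the recovered bit clock) is visible to the testbench.

// Minimal illustrative monitor: records (time, period) pairs of a clock
// so the SSC profile can be plotted (timestamp on X, period on Y).
module ssc_period_monitor #(parameter string FNAME = "ssc_period.csv")
                           (input logic refclk);
  realtime t_prev = 0;
  int      fd;

  initial fd = $fopen(FNAME, "w");

  always @(posedge refclk) begin
    if (t_prev != 0)
      $fdisplay(fd, "%0t,%g", $realtime, ($realtime - t_prev) / 1ns);  // period in ns
    t_prev = $realtime;
  end

  final $fclose(fd);
endmodule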

In order for you to successfully verify spread spectrum clocking, the PCIe Verification IP you use needs to support SSC. It should give you the programmability to turn spread spectrum ON or OFF at different speeds. It should also support the specification-defined SSC profile with a down-spread of 0.5%, and allow the frequency of the modulating signal to be programmed in the range of 30 kHz (min) to 33 kHz (max). Synopsys PCIe VIP comes loaded with all these features and more.

Here’s where you can learn more about Synopsys VC Verification IP for Gen 4 ready PCIe and PCIe Test Suites.

Authored by Narasimha Babu G V L, Udit Kumar and Anand Shirahatti

Posted in Methodology, PCIe | Comments Off

NVMe VIP Architecture: Host Features

Posted by VIP Experts on December 8th, 2015

In my last post, I covered a basic NVMe VIP test-case including some basic setup, sending a command and receiving a completion. Here, we’ll look at a few more NVMe commands, touching on some of the features and capabilities of the VIP.

Here’s where you can learn more about Synopsys VC Verification IP for NVMe and for PCIe.

A (Reminder) View of the VIP

We overviewed this briefly last time. This time we’ll go into a bit more depth, so we will continue to refer to this diagram:

[Figure: NVMe VIP block diagram]

The NVMe VIP provides a set of features to assist in testing. These include randomizations, feature snooping, simplified PRP and data buffer handling, memory fencing and built-in scoreboarding. We’ll look at each of these in turn with another example.

Continuing our Test Case…

Following up on our “trivial test-case” from the last post (again, we are not showing some of the task arguments or checking errors), let’s take a look at a few more commands to get our NVMe test case rolling.

Just a reminder: the tasks that start with the word Script are NVMe commands. The others (that don’t start with Script) are VIP status/control/configuration tasks.

// We will assume that the PCIe stack is setup and running
bit [63:0] base_addr = 32'h0001_0000;  // Ctlr BAR base addr
dword_t    num_q_entries, ctlr_id;

// Tell the host where the controller has its base address
AllocateControllerID(base_addr, ctlr_id, status);
num_q_entries = 2;

// Create the Admin Completion and Submission Queues
ScriptCreateAdminCplQ(ctlr_id, num_q_entries, status);
ScriptCreateAdminSubQ(ctlr_id, num_q_entries, status);

// Send an “Identify Controller” Command
data_buf_t #(dword_t) identify_buffer;      // identify data
identify_buffer = new(1024);
ScriptIdentify(ctlr_id, 0, identify_buffer, 0, status);

We ended our last sample with a call to Identify Controller. Now, continuing at that point, we read bytes 519:516 to get the number of valid namespace IDs. We hand that to the host VIP with the SetNumNamespaces() call. Note that we had to byte-swap the (little-endian) data returned in the Identify Controller buffer.

int num_ns, nsid, blk_size_pow2, blk_size_in_bytes;
bit [63:0] ns_size_in_blks;
feature_identifier_t feature_id;
nvme_feature_t set_features;

// We’ll grab the Number of valid namespaces (NN) from the
// identify buffer. Note index converted from bytes to dword.
num_ns = ByteSwap32(identify_buffer[516 >> 2]); // bytes 519:516

// Tell the VIP how many active NSIDs the controller has
SetNumNamespaces(ctlr_id, num_ns, status);

Next we read the information for one of the namespaces (Namespace ID=1). Note that we “cheated” a bit here, as we should have walked all the valid namespaces. For the example we’ll just assume we have only NSID=1. Although the Identify calls don’t take a PRP list, their host memory buffer can have an offset. If this is desired, select the argument “use_offset=1”. The actual offset is randomized via the constraints MIN/MAX_PRP_DWORD_OFFSET_VAR.

// Now send an “Identify Namespace” command for nsid=1
nsid = 1;
use_offset = 1;            // Randomize buffer offset
ScriptIdentify(ctlr_id, nsid, identify_buffer,
               use_offset, status);

// Pull information from format[0]
blk_size_pow2 = ByteSwap32(identify_buffer.GetData(128 >> 2));
blk_size_pow2 = (blk_size_pow2 >> 16) & 32'hff;  // dword[23:16]
blk_size_in_bytes = 1 << blk_size_pow2;          // Convert
ns_size_in_blks = ByteSwap64({identify_buffer.GetData(8 >> 2),
                              identify_buffer.GetData(12 >> 2)});

// Before we create queues, we need to configure the num queues
// on the controller.
feature_id = FEATURE_NUM_QUEUES;
set_features = new(feature_id);

Once the Identify Namespace returns, we have both the block size and the namespace size. We set the number of requested queues with Set Features. Via the VIP’s feature snooping, this transparently updates the VIP with the current number of supported submission and completion queues (for later checking and error injection support).

The next steps format our namespace (using Format 0 in the Identify Namespace data structure). We then update the VIP view of the namespace information. The VIP needs this namespace information to keep a per-namespace scoreboard.

set_features.SetNumCplQ(2);     // Request number of sub &
set_features.SetNumSubQ(3);     // cpl queues
// Call Set Features command to set the queues on the ctlr
ScriptSetFeatures(ctlr_id, set_features, …, status);

// Note that Set Features Number of Queues command need not
// return the same amount of queues that were requested. We can
// check by examining set_features.GetNumCplQ() and
// GetNumSubQ(), but in this case we’ll just trust it…
// Format the Namespace
sec_erase = 0;        // Don’t use secure erase
pi_md_settings = 0;   // Don’t use prot info or metadata
format_number = 0;    // From Identify NS data structure
ScriptFormatNVM(ctlr_id, nsid, sec_erase, pi_md_settings,
                format_number, …, status);

// Tell the VIP about this NS
SetNamespaceInfo(ctlr_id, nsid, blk_size_in_bytes,
                 ns_size_in_blks, md_bytes_per_blk,
                 pi_md_settings, 0, status);

We next create a pair of I/O queues. Since the submission queue requires its companion completion queue to be passed along with it, we create the completion queue first. Note that queue creation routines take an argument contig. If contig is set, the queue will be placed in contiguous memory, otherwise a PRP list will be created for that queue. In addition to creating the actual queue, the VIP creates a fence around the queue to verify memory accesses to the queues. Attempts from the controller to (for example) read from a completion queue will be flagged as an invalid access attempt. The actual queue IDs are randomized (within both legal and user-configurable constraints).

// Create the I/O Queues
num_q_entries = 10;
contig = 1;           // Contiguous queue
ScriptCreateIOCplQ(ctlr_id, num_q_entries,
                   contig, …, cplq_id, …, status);
contig = 0;           // PRP-based queue
ScriptCreateIOSubQ(ctlr_id, num_q_entries,
                   contig, cplq_id, …, subq_id, …, status);

Once we have I/O queues created, we can start doing I/O. Using the ScriptWrite() and ScriptRead() calls, we send data to the controller and immediately retrieve that same data back. The underlying data structure of the data (in host memory) is built automatically by the VIP. Note the use_offset argument (as with our queue creation tasks) to control whether we generate PRP and PRP List offsets (controlled by MIN/MAX_PRP_DWORD_OFFSET_VAR and MIN/MAX_PRP_LIST_DWORD_OFFSET respectively). Due to our built-in scoreboarding, we don’t have to compare the data read against the data written: the VIP checks the returned data against its shadow copy, which tracks successful VIP writes to the controller.

// Do our I/O write then read with a random LBA/length
data_buf_t #(dword_t) wbuf, rbuf; // Write/Read Data buffers
num_blks = RandBetween(1, ns_size_in_blks);
lba = RandBetween(0, ns_size_in_blks - num_blks);
num_dwords = (blk_size_in_bytes / 4) * num_blks;
wbuf = new(num_dwords);
for (int idx = 0 ; idx < num_dwords ; idx++) // Fill the buffer
   wbuf.SetData(idx, { 16'hdada, idx[15:0] } );
ScriptWrite(ctlr_id, subq_id, lba, nsid, wbuf, num_blks,
            use_offset, …, status);

// We’ll read the same LBA since we know it’s been written
ScriptRead(ctlr_id, subq_id, lba, nsid, rbuf, num_blks,
           use_offset, …, status);
// Do what you’d like with the rbuf (that’s the data we just read).

We’re Done!

Hopefully that’s gotten us through most of the basics. You should have a good feel for the operation of the VIP. Again, many of these tasks have more arguments allowing more control and error injection, but our goal is to get through without dealing with the more esoteric features. If you have the VIP handy, feel free to walk through the examples: they should look quite familiar.

In my next post, we will look into actually testing a controller, especially going into features like error injection.

As always, thanks for joining us. See you again soon.

Authored by Eric Peterson

Here’s where you can learn more about Synopsys VC Verification IP for NVMe and for PCIe.

Posted in Methodology, NVMe, PCIe, UVM | Comments Off

MIPI I3C VIP Accelerates Scalable Sensor Interfaces on Mobile Devices

Posted by VIP Experts on December 3rd, 2015

As sensors continue to get smaller, more powerful and cheaper, smartphones and other mobile devices incorporate over ten sensors to create self-aware devices. For instance, most recent models of Apple and Samsung handheld devices use several sensors to perform some of their coolest interface tricks: proximity sensor, accelerometer (motion sensor), ambient light sensor, moisture sensor, gyroscope, thermometer and magnetometer (compass). These sensors enable key capabilities for users including location services, health apps, fingerprint scanning and sophisticated gaming while optimizing power usage and WiFi access.  

The proliferation of sensors has created significant design and verification challenges as mobile devices often require as many as 10 sensors and more than 20 signals. As these requirements continue to grow, designers and verification engineers face mounting challenges to deliver the design, cost and performance efficiencies manufacturers need to expand product capabilities with more sensors.

MIPI I3C, being developed by the Sensor MIPI Working Group, incorporates and unifies key attributes of currently existing interfaces, I2C and SPI, while improving the capabilities and performance of each approach with a comprehensive, scalable interface and architecture. This will reduce interface fragmentation thus reducing development and integration costs and fostering innovation opportunities.

Synopsys VC Verification IP for MIPI I3C provides a comprehensive set of protocol, methodology, verification and ease-of-use features. Our current users indicate that they are able to achieve accelerated verification closure: native SystemVerilog and built-in support for UVM give them ease of use, ease of integration and high performance.


You can learn more about Synopsys VC Verification IP for MIPI I3C and download the datasheet here.

Posted in Debug, I3C, MIPI, Mobile SoC, SystemVerilog, UVM | Comments Off

Keeping Pace with Memory Technology using Advanced Verification

Posted by VIP Experts on December 1st, 2015

My latest webinar, Keeping Pace with Memory Technology using Advanced Verification, begins by taking the audience back in time, to a time when memories had low density and slow performance, and required expensive silicon real estate. Then I fast-forward to the present, when memory technologies have evolved to support huge densities and blazing-fast speeds while keeping power consumption low, all within very small geometries.

[Figure: Memory technology evolution]

I give an overview of various memory technologies, and the market segments they support. To keep pace with this memory technology evolution, design and verification tools, and methodologies to create and productize these technologies, have also evolved.

This memory evolution has created a whole new set of challenges for verification engineers. To be successful, verification must evolve to be thorough, efficient, and effective earlier in the design cycle. It must also rely on native SystemVerilog support and state-of-the-art verification techniques for testbench creation and customization, debug, and coverage closure.

[Figure: Memory VIP architecture]

A faster, proven, state-of-the-art front-end memory verification infrastructure is required to tackle the complexities presented by the latest memory innovations. By leveraging the industry-standard Universal Verification Methodology (UVM), advanced protocol-level debugging, and accelerated verification closure with built-in coverage and plans, verification engineers can confidently validate memory technology in a short period of time.

In my webinar, I review the evolution of memory technology leading to the latest developments in DDR, LPDDR, eMMC, High Bandwidth Memory (HBM) and Hybrid Memory Cube (HMC). I highlight key concerns in the verification of these protocols, and their contributions to overall System Validation complexity. Finally, I review successful methodologies and techniques that assure project success.

Register here for my webinar at no charge to learn all about it: Keeping Pace with Memory Technology using Advanced Verification. You can find more information on Synopsys Memory VIP here.

Authored by Nasib Naser

Posted in DDR, DFI, Flash, HBM, HMC, LPDDR, Memory | Comments Off

First Ethernet 400G VIP to Enable Next-Gen Networking and Communications SoCs

Posted by VIP Experts on November 24th, 2015

On Monday, Synopsys announced the availability of the industry’s first verification IP (VIP) and source code test suite to support the proposed IEEE P802.3bs/D1.0 Ethernet 400G standard (400GbE). To understand how it will enable next generation networking and communication systems, we take a look at the evolution of the Ethernet.

Evolution of the Ethernet

Ethernet was first commercially introduced in 1980. It was originally used to connect personal computers to other computers and shared resources like printers in an office. Ethernet continued to grow to cover campuses and data centers, and then to connect these over metropolitan area networks (MAN) and wide area networks (WAN). This evolution of connectivity followed successive speed jumps from the 1980s to 2010 (10M, 100M, 1G, 10G, 40G and 100G) until we reached 100GbE. When the industry saw the challenges of making 100GbE affordable, it developed 40GbE as an interim, lower-cost step. 40GbE opened the door for non-10X steps in speed, including 25G, 50G and 400G.

Birthing a new generation of Ethernet was quite straightforward in the early years: enterprises wanted faster LANs. Vendors figured out ways to achieve that throughput. IT shops bought the speed boost with their next computers and switches. It is a lot more complicated now with carriers, Web 2.0 giants, cloud providers and enterprises all looking for different speeds and interfaces. Facebook, for instance, said in 2010 that it already had a need for Terabit Ethernet in its data centers. With billions of Ethernet devices in use on networks around the world, it is harder to define a specification that satisfies everyone.

Modern networks are built on a hierarchy of devices, with “deeper” switches or routers networking the edge devices together for full connectivity. This has encouraged the use of successively faster trunks as port speeds have increased. More traffic implies more capacity. But traffic does not impact the WAN or LAN uniformly—and therefore the needs may be vastly different in different types of networks.

Cloud computing encourages the creation of dense, highly connected data centers. Cloud applications often have more components and are more horizontally integrated than traditional applications, which makes traditional multi-tiered LAN switching performance more problematic. In a cloud data center, even 10 GbE server/storage interfaces connected in a four- or five-layer structure might drive switch interface speeds to 400G or more in the deeper layers. When clouds are created by linking multiple data centers over fiber, a super-Ethernet connection is almost inevitable. This is where we need faster Ethernet switches: to connect cloud data centers and support optical metro aggregation and OTN-based cloud core networks.

“The 400GbE development effort, started in the IEEE 802.3 Ethernet Working Group back in March 2013, remains driven by the need to provide higher speed solutions for core networking applications that depend on the aggregation of data,” said John D’Ambrosia earlier this year in Enterprise Networking Planet. John is the Chairman of the Ethernet Alliance and Chief Ethernet Evangelist in the CTO office at Dell. In November 2013, the IEEE’s 400 Gb/s Ethernet Study Group approved project objectives for four different link distances of 400GbE. These were approved by the IEEE 802.3 Working Group in March 2014.

Last week, Facebook announced it is testing a 100 Gbit/second top-of-rack Ethernet switch for its next-generation data centers. Networking hardware vendors, like Cisco, Arista, and Mellanox, already offer 100 GbE switches. 

Enabling Next-Gen Networking and Communication SoCs

As the need for increased bandwidth to support video-on-demand, social networking and cloud services continues to rise, Synopsys VC VIP for Ethernet 400G enables system-on-chip (SoC) teams to design next-generation networking chips for data centers with ease of use and integration.

[Figure: Synopsys VC Verification IP for Ethernet]

Synopsys VC VIP for Ethernet uses a native SystemVerilog Universal Verification Methodology (UVM) architecture, protocol-aware debug and source code test suites. Synopsys VC VIP is capable of switching speed configurations dynamically at run time, and includes an extensive and customizable set of frame generation and error injection capabilities. In addition, source code UNH-IOL test suites are also available for key Ethernet features and clauses, allowing teams to quickly jumpstart their own custom testing and speed up verification time.

Synopsys thus provides a comprehensive Ethernet solution for all speeds, including 25G, 40G, 50G, 100G and the newest 400G standards.

You can learn more about Synopsys VC VIP for Ethernet and  source code UNH-IOL test suites here.

Posted in Data Center, Ethernet, Methodology, SystemVerilog, Test Suites, UVM | Comments Off

Accelerate your MIPI CSI-2 Verification with a Divide and Conquer Approach

Posted by VIP Experts on November 19th, 2015

MIPI Alliance’s CSI-2 (Camera Serial Interface) has achieved widespread adoption in the smartphone industry for its ease-of-use and ability to support a broad range of imaging solutions. MIPI CSI-2 v1.3, which was announced in February 2015, also offers users the opportunity to operate CSI-2 on either of two physical layer specifications: MIPI D-PHY, which CSI-2 has used traditionally, as well as MIPI C-PHY, a new PHY that MIPI first released in September 2014. Products may implement CSI-2 solutions using either or both PHYs in the same design. MIPI CSI-2 v1.3 with C-PHY provides performance gains and increased bandwidth delivery for realizing higher resolution, better color depth, and higher frame rates on image sensors while providing pin compatibility with MIPI D-PHY.

MIPI CSI-2 poses unique verification and debugging challenges: multiple image formats, several different image resolutions, multiple virtual channels, different types of long packets and short packets, error injection scenarios, ultra-low power mode, and support for MIPI C-PHY and D-PHY. Since MIPI CSI-2 is considered a mature technology – it has been around for a decade – it also demands a short time-to-market cycle. So how should you as a developer meet the challenges of increasing complexity along with shortening schedules?

Your verification schedule can be significantly cut down when you use Synopsys’ built-in MIPI CSI-2 test suites, monitors and coverage models along with our CSI-2 VIP. Test sequences and the scoreboard are customizable. Coupled with the protocol analyzer, this further cuts down debug cycles, another big bottleneck in achieving functional closure.


You can learn more about how Synopsys’ MIPI CSI-2 customizable test suite with the coverage model can accelerate your CSI-2 verification by downloading one of our customer case studies here. This case study describes a divide-and-conquer approach that enabled the customer to verify the MIPI PHY and the MIPI MAC separately. They also discuss how the scoreboard and IDI monitor worked well with their design’s custom interface on the application side. The highly configurable architecture of the VIP and test suites will also enable them to reuse their entire testbench for future updates of the design as well as updates to the MIPI specifications.

Here’s where you can learn more about Synopsys’ VC Verification IP for MIPI CSI-2 and CSI-2 Test Suites, and the customer case study for using a divide-and-conquer approach.

Authored by Anand Shirahatti

Posted in C-PHY, Camera, CSI, D-PHY, MIPI | Comments Off