BLOGS & FORUMS
A global team of protocol experts who share their insights and technical expertise in the areas of AMBA, DDR, Ethernet, LPDDR, MIPI, PCIe, SAS, SATA, USB and UFS. This comprehensive team participates in standards committees and provides the latest information and updates as they relate to your future design considerations.
Posted by VIP Experts on October 13th, 2015
MIPI UniPro is a recent addition to mobile chip-to-chip interconnect technology. It offers many useful features that meet the requirements of mobile applications. That's perhaps why Google's Project Ara selected MIPI UniPro and MIPI M-PHY as its backbone interconnects.
In this blog post, we describe three differentiating features, their benefits, and their verification challenges. All the discussion references MIPI UniPro 1.6.
- Achieving Low power consumption through Power mode changes and hibernation
- Flexibility in chip-to-chip lane routing through Physical Lane mapping
- Enhanced QoS through CPort arbitration & Data link layer pre-emption
You can learn more about our VC VIP for Unipro and M-PHY here.
1. Achieving Low power consumption through Power mode changes and hibernation
MIPI UniPro provides six power modes to meet different needs. In SLOW mode, it supports seven gears with operational speeds ranging from 3 Mbps to 576 Mbps per lane. In FAST mode, it supports three gears with operational speeds ranging from 1.5 Gbps to 6 Gbps per lane. Both SLOW and FAST can be coupled with AUTO, which automatically closes the M-PHY BURST during traffic gaps. In the complete absence of traffic, hibernate mode is used. All unconnected lanes shall be put in OFF mode. UniPro allows independent power mode settings for the transmit and receive directions.
UniPro allows the dynamic selection of the number of lanes, gear and power mode per direction using power mode change requests (DME_POWERMODE) and hibernate state transitions (DME_HIBERNATE_ENTER and DME_HIBERNATE_EXIT primitives). The MIPI UniPro L1.5 layer accomplishes these requests through PHY Adapter Configuration Protocol (PACP) frames of the types PACP_PWR_Req and PACP_PWR_Cnf. Traffic is paused briefly during the power mode change procedure. Once the procedure completes on both ends, the new power mode settings are applied simultaneously and traffic resumes.
This feature allows MIPI UniPro to achieve optimal "performance per watt" by setting the appropriate power mode. Based on the application's data traffic bandwidth and latency requirements, it can dynamically scale the number of lanes and the operational speed of the lanes in each direction.
The following parameters give rise to a large state space:
- 6 different power modes
- 7 gears in SLOW mode and 3 gears in FAST mode
- Up to 4 lanes, which can be scaled down to any value
- Asymmetric settings of mode, gear and lanes in each direction
Functional verification will have to cover every unique combination of the above power mode state space (mode x lane x gear). Additionally, two more transition combinations have to be covered:
- Transitions from one possible unique combination of power mode to another possible unique combination (~1600 combinations)
- Hibernate entry and exit from each of the unique power mode state
This requires constrained random stimulus support. The constrained random stimulus generation is not straightforward: it has to take into consideration:
- The current power mode state
- The capabilities of both the local and peer devices
Based on these parameters, legal power mode changes have to be initiated from both the VIP and DUT sides.
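As a rough illustration of the state space sizing above, the following sketch enumerates the unique (mode, gear, lanes) settings per direction and the resulting transition count. For simplicity, the AUTO variants, HIBERNATE and OFF are left out; the names and ranges follow this post's text, not the MIPI specification tables.

```python
from itertools import product

# Gears per speed mode, per the description above (illustrative only).
GEARS = {"SLOW": range(1, 8),   # 7 gears, 3 Mbps - 576 Mbps per lane
         "FAST": range(1, 4)}   # 3 gears, 1.5 Gbps - 6 Gbps per lane
LANES = range(1, 5)             # 1 to 4 active lanes

# Unique (mode, gear, lanes) settings for a single direction.
settings = [(mode, gear, lanes)
            for mode, gears in GEARS.items()
            for gear, lanes in product(gears, LANES)]

print(len(settings))                  # 40 unique settings per direction
print(len(settings) * len(settings))  # 1600 possible transitions
```

This matches the ~1600 transition combinations quoted above: 10 gears times 4 lane counts gives 40 unique settings, and any setting may transition to any other.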
2. Flexibility in chip-to-chip lane routing through Physical Lane mapping
UniPro allows using multiple lanes (up to 4) to scale the bandwidth. The UniPro PHY adapter layer takes care of distributing and merging the data. During the L1.5 layer's multi-phase initialization sequence, the total number of connected lanes and their physical-to-logical lane mapping are determined.
Training sequence identifying the logical and physical lane mapping. Source: MIPI
This feature provides flexibility in UniPro's chip-to-chip lane routing. Considering the small footprint requirements of mobile hardware, this will surely ease the printed circuit board designer's life.
From a verification point of view, we need to cover the following:
- Different number of lanes connected, and
- Every physical lane getting mapped to every possible logical lane
Typically, through configuration, the number of connected lanes and, for the connected lanes, the logical-to-physical mapping need to be randomized. Based on this configuration, the VIP will drive the specified number of lanes and advertise them appropriately to the DUT.
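The randomization described above can be sketched as follows. The function name and structure are hypothetical, not VIP configuration fields; a real testbench would express the same constraints in SystemVerilog.

```python
import random

MAX_LANES = 4  # UniPro supports up to 4 lanes

def randomize_lane_config(rng=random):
    """Pick how many physical lanes are connected, then a random
    physical-to-logical mapping for the connected lanes."""
    num_connected = rng.randint(1, MAX_LANES)
    # Choose which physical lanes are wired up, in a random order.
    physical = rng.sample(range(MAX_LANES), num_connected)
    # Logical lanes 0..num_connected-1 each map to one physical lane.
    return {logical: phys for logical, phys in enumerate(physical)}

cfg = randomize_lane_config()
print(cfg)  # e.g. {0: 2, 1: 0, 2: 3}
```

Running this across many seeds covers both coverage goals above: different numbers of connected lanes, and every physical lane eventually mapping to every possible logical lane.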
3. Enhanced QoS through CPort arbitration & Data link layer pre-emption
MIPI UniPro supports two traffic classes: traffic class 0 (TC0) and traffic class 1 (TC1). Support for TC0 is mandatory, while support for TC1 is optional. Priority-based arbitration between traffic classes is supported. The MIPI UniPro stack, right from its transport layer (L4) down to the data link layer (L2), is traffic-class aware in order to provide enhanced Quality of Service (QoS).
At the transport layer level, the logical data connection is the connection-oriented port (CPort). Each CPort is mapped to either TC0 or TC1. CPorts mapped to the higher-priority traffic class take precedence over CPorts mapped to the lower-priority traffic class. Within a traffic class, segment-level round robin is the default arbitration scheme.
To reduce delays and improve Quality of Service (QoS), the data link layer can insert high-priority frames within a lower-priority data frame under transmission. This optional feature is called pre-emption. The concept is extended to other control frames as well, improving latency and reducing bandwidth wastage during retransmission.
Composition with pre-emption (Traffic class Y > X). Source: MIPI
CPort arbitration and pre-emption provide fine-grained control over communication latency, enabling improved QoS for latency-sensitive traffic.
From the verification point of view, we need to address:
- Meeting the overall intent of QoS feature
- Ensuring that the pre-emption feature is functionally implemented correctly
The QoS feature intent can be verified by measuring the latency on both the transmit and receive paths of the DUT. This can be done as additional functionality of the scoreboard. The scoreboard can record the timestamps of messages entering and exiting the ports of the DUT on both the CPort and the serial line. The latency of the transmit and receive paths of the DUT can then be checked against the desired configured value. Violations can be flagged as warnings or errors based on the percentage of violations.
To ensure that the pre-emption feature is functional, both legal and illegal pre-emption cases need to be exercised. Based on the supported-priorities table for the DL arbitration scheme, there are 18 illegal and 35 legal pre-emption scenarios possible. Both legal and illegal cases must be covered on both the transmit and receive paths of the DUT, including multi-level pre-emptions.
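The latency-checking scoreboard described above can be sketched as follows. The class name, the latency threshold, and the violation-percentage knob are illustrative assumptions; a real implementation would sit inside the UVM scoreboard and use simulation timestamps.

```python
class LatencyScoreboard:
    """Records entry/exit timestamps per message and checks latency
    against a configured limit, flagging only if the percentage of
    violations exceeds a configured tolerance (values are assumed)."""
    def __init__(self, max_latency_ns, max_violation_pct=5.0):
        self.max_latency_ns = max_latency_ns
        self.max_violation_pct = max_violation_pct
        self.entry_ts = {}    # message id -> timestamp entering the DUT
        self.latencies = []   # measured latencies for messages that exited

    def record_entry(self, msg_id, time_ns):
        self.entry_ts[msg_id] = time_ns

    def record_exit(self, msg_id, time_ns):
        self.latencies.append(time_ns - self.entry_ts.pop(msg_id))

    def check(self):
        violations = sum(1 for l in self.latencies if l > self.max_latency_ns)
        pct = 100.0 * violations / len(self.latencies)
        # Flag an error only if violations exceed the configured percentage.
        return "ERROR" if pct > self.max_violation_pct else "OK"

sb = LatencyScoreboard(max_latency_ns=1000)
sb.record_entry("msg0", 0);   sb.record_exit("msg0", 800)
sb.record_entry("msg1", 100); sb.record_exit("msg1", 950)
print(sb.check())  # OK
```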
For the verification of all these features, a well-architected Verification IP plays a critical role. A Verification IP with the right level of flexibility and control can significantly accelerate verification closure.
Authored by Anand Shirahatti, Divyang Mali, Naveen G
You can learn more about our VC VIP for Unipro and M-PHY here.
Posted in Methodology, MIPI, Mobile SoC, MPHY, Unipro | No Comments »
Posted by VIP Experts on October 6th, 2015
The Internet of Things (IoT) is connecting billions of intelligent "things" to our fingertips. The ability to sense countless amounts of information and communicate it to the cloud is driving innovation in IoT applications. Servers powering the cloud will have to scale to handle these billions of intelligent things. In preparation for that, PCIe Gen 4 has been introduced; it is capable of supporting 16 GT/s (gigatransfers per second) per lane. The current primary market driver for PCIe Gen4 applications seems to be the server storage space.
Also, PCI-SIG and MIPI are collaborating on supporting MIPI M-PHY with PCIe: M-PCIe is a version of the PCIe protocol for mobile interconnect.
PCIe has had its own evolution with Gen1, Gen2, Gen3 and now Gen4. With every new generation, the speed has doubled, and so has the complexity. A proven PCIe Verification IP with support for all the speeds can significantly reduce the verification schedule. If such a Verification IP is also bundled with a test suite and a coverage suite, it can certainly reduce the risk of verification. What if such a Verification IP also came bundled with support for protocol-aware debug in Verdi?
Synopsys offers all these features in a single PCIe VC Verification IP offering:
- Support for Gen1, Gen2, Gen3 and Gen 4 speeds
- Support for MPCIe
- Support for the NVMe application
- Includes a bundled test suite
- Built-in support for protocol-aware debug in Verdi
Come experience our product hands-on through a PCIe Workshop in your region. This workshop will provide you with a unique opportunity to learn about:
- The ease of the VC Verification IP programming interface for normal transfers, error injection and low power scenarios
- The various outputs generated by the VC Verification IP for debug: learn to use different abstractions of debug information, from signals to text logs to protocol-aware debug, within a single Verification IP
- How to integrate your DUT into the test suite environment and get it going quickly
Recently, PCIe workshops were held in Mountain View, California, and Bangalore, India. Participants in these workshops told us that they loved the new Verdi features that facilitate protocol-aware debug and coverage back-annotation. Error injection capabilities, coupled with various debug capabilities at each layer, gave them the confidence to left-shift verification closure.
Free registration is now open for Shanghai-China, Tokyo-Japan, Austin-Texas and Herzelia-Israel.
Authored by Sadiya Ahmed, Anunay Bajaj and Anand Shirahatti
Posted in Debug, Methodology, MIPI, MPHY, PCIe, SystemVerilog | No Comments »
Posted by VIP Experts on September 29th, 2015
Here, Bernie DeLay, group director for Verification IP R&D at Synopsys, talks to Ed Sperling of Semiconductor Engineering about the challenges of debugging protocols in complex SoCs.
You can learn more about our VIPs at Verification IP Overview.
Posted in AMBA, DDR, Debug, Memory, Methodology, PCIe, Processor Subsystems, Storage, USB | No Comments »
Posted by VIP Experts on September 22nd, 2015
The MIPI Unified Protocol (UniPro) specification defines a layered protocol for interconnecting devices and components within mobile device systems. It is applicable to a wide range of component types including application processors, co-processors, and modems. MIPI UniPro powers the JEDEC UFS, MIPI DSI-2 and MIPI CSI-3 applications. So far, MIPI UniPro has been adopted most widely in the mobile storage segment through JEDEC UFS. Adopting MIPI UniPro and MIPI M-PHY provides lower-power and higher-performance solutions.
You can learn more about our VC VIP for Unipro, M-PHY, PCIe and JEDEC UFS here.
Many PCIe veterans may already have begun implementing MIPI UniPro in their designs. This blog post takes you through a quick view of the MIPI UniPro and MIPI M-PHY stack from a PCIe perspective. As you will notice, there are many similarities.
PCI Express provides switch-based point-to-point connections for interconnecting chips. MIPI UniPro does the same, although the current UniPro 1.6 specification does not support switches; they are planned for future revisions. Toshiba has already released detailed technical documentation of a UniPro bridge and switch supporting Google's Project Ara. Both are packet-switched high-speed serial protocols.
PCI Express maintains backward compatibility and uses the load/store model of the PC world. Configuration, memory, IO and message address space accesses are supported at the transaction level. MIPI UniPro, on the other hand, is brand new and hence does not have to carry the burden of ensuring backward compatibility. UniPro provides raw communication of data in the form of messages; there is no structure imposed on messages.
Both PCI Express and UniPro support the concept of multiple logical data streams at the transport level. PCI Express supports the concepts of Traffic Classes (TCs) and Virtual Channels (VCs). A maximum of 8 TCs is supported, and TCs can be mapped to different VCs. This concept of TCs and VCs is targeted at providing deterministic bandwidth and latency.
UniPro has a similar concept. A PCI Express Traffic Class (TC) is equivalent to a MIPI UniPro CPort, and a PCI Express Virtual Channel (VC) is equivalent to a UniPro Traffic Class (TC). (Yes, both use the TC terminology, but the meanings are different.) CPorts are bidirectional logical channels. Each CPort has a set of properties that characterize the service provided. Multiple CPorts can be mapped to a single TC. UniPro 1.6 supports two TCs, TC0 and TC1; TC1 has higher priority than TC0. Additionally, UniPro provides End-to-End (E2E) flow control at the transport level. Note that UniPro E2E flow control is meant for application-level buffer management and not for the transport layer buffers, whereas PCI Express implements flow control at the transport level as well.
The PCI Express transport layer implements end-to-end CRC (ECRC) and data poisoning. PCI Express has higher sensitivity to error detection at the transport layer than MIPI UniPro 1.6. This, along with Advanced Error Reporting (AER), is what qualifies PCI Express for use in the server space, where high reliability, availability and serviceability are valued.
PCI Express does not have a separate network layer. MIPI UniPro has a very simple pass through network layer in Version 1.6.
Data Link Layer
The PCI Express data link layer is quite similar to that of MIPI UniPro. Both solve the same problem of providing robust link communication to the next immediate hop. They use similar error recovery mechanisms: CRC protection, retransmission on negative acknowledgement, multiple outstanding unacknowledged packets tracked using sequence numbers, and credit-based flow control. While PCI Express credits are managed for posted, non-posted and completion headers along with data, UniPro flow control is per traffic class (TC). In UniPro, both the credits and the sequence numbers are managed independently per traffic class.
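The per-TC credit and sequence-number bookkeeping just described can be modeled with a small sketch. The class, the credit counts, and the 5-bit sequence wrap are illustrative assumptions, not values from either specification.

```python
class CreditedLink:
    """Toy model of per-traffic-class credit flow control: a frame may
    only be sent on a TC while the transmitter holds credits for that
    TC, and the receiver returns credits as it frees buffer space."""
    def __init__(self, initial_credits):
        self.credits = dict(initial_credits)           # per-TC credit counters
        self.seq = {tc: 0 for tc in initial_credits}   # per-TC sequence numbers

    def send(self, tc):
        if self.credits[tc] == 0:
            return None  # must wait for the receiver to return credits
        self.credits[tc] -= 1
        frame = (tc, self.seq[tc])
        self.seq[tc] = (self.seq[tc] + 1) % 32  # wrap like a real seq field
        return frame

    def credit_return(self, tc, n=1):
        self.credits[tc] += n  # receiver freed n buffers for this TC

link = CreditedLink({"TC0": 2, "TC1": 1})
print(link.send("TC0"))  # ('TC0', 0)
print(link.send("TC0"))  # ('TC0', 1)
print(link.send("TC0"))  # None: out of TC0 credits
print(link.send("TC1"))  # ('TC1', 0): TC1 is tracked independently
```

Note how exhausting TC0's credits does not block TC1, which is precisely the independence per traffic class that the text describes.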
MIPI UniPro additionally supports the concept of pre-emption where in a high priority frame can pre-empt a low priority frame. This enables UniPro to provide even higher levels of latency determinism than PCI Express.
The PCI Express data link layer supports low-level power management controlled by hardware. The link power states are L0, L0s, L1, L2 and L3, where L0 is the full-on state and L3 is the link-off state. UniPro pushes this functionality down to the physical adapter layer.
PCI Express uses differential signaling with an embedded clock. It can support up to 32 lanes. Reset and initialization mechanisms enable determination of link speed, link width and lane mapping. It uses 8b/10b and 128b/130b encoding, scrambling, and deskew patterns that aid clock recovery.
UniPro has gone one step further and partitioned the functionality here. The UniPro physical layer is divided into two sub-layers: the physical adapter layer (L1.5) and the physical layer (L1). MIPI has multiple physical layer specifications: D-PHY and the more recent M-PHY. UniPro 1.6, though, will only be used with M-PHY. The real role of L1.5 is to abstract the higher layers from the physical layer technology. UniPro is designed to use up to 4 M-PHY lanes. Reset and initialization mechanisms are used to determine the capabilities, similar to PCI Express.
MIPI UniPro supports deeper power optimization. Speed is divided into two categories, high speed and low speed, which are further sub-divided into gears. The speed and the number of lanes can be scaled dynamically based on bandwidth requirements. This process of changing the speed is called a power mode change, and it is initiated by application software. When the link is not in use, it can be put into the hibernate state, resulting in the highest power savings. The physical adapter layer can also autonomously save power by ending the active data burst and entering the sleep or stall states.
M-PHY also uses differential signaling with an embedded clock in one of its modes of operation. 8b/10b encoding, scrambling and deskew patterns are used to aid clock recovery in the high-speed mode of operation.
We were awed by the similarities. No wonder there are initiatives like Mobile PCI Express (M-PCIe): allowing PCI Express to operate over M-PHY makes sense.
Similar comparisons can be made between MIPI UniPro and SuperSpeed USB 3.0. Hence, we are beginning to see initiatives to enable SuperSpeed Inter-Chip (SSIC) over M-PHY.
It will be interesting to see how these will evolve, and which one of these will emerge victorious. While we wait for the UniPro vs. M-PCIE battle to settle down, one thing is clear: M-PHY has proved itself as a clear winner.
Authored by Anand Shirahatti
You can learn more about our VC VIP for Unipro, M-PHY, PCIe and JEDEC UFS here.
Posted in Data Center, Interface Subsystems, Methodology, MIPI, Mobile SoC, MPHY, PCIe, Unipro | No Comments »
Posted by VIP Experts on September 17th, 2015
NVM Express, or the Non-Volatile Memory Host Controller Interface (previously NVMHCI, now shortened to NVMe), is a host-based software interface designed to communicate with solid state storage devices across a PCIe fabric. The current Synopsys NVMe Verification IP (VIP) is a comprehensive testing vehicle that consists of two main subsystems: the SVC (System Verification Component) and the SVT (System Verification Technology). The SVC layers correspond to the actual NVMe (and PCIe, etc.) protocol layers. The SVT provides a verification methodology interface to UVM and other methodologies such as VMM and OVM.
Here's where you can learn more about Synopsys' VC Verification IP for NVMe and for PCIe and M-PHY.
Although the VIP supports multiple versions of the protocol, we will initially be version agnostic, speaking in generalities of the protocol in order to provide a 10,000-foot view of the protocol and its support in the VIP. Future discussions will delve deeper into particular details of NVMe and features of the Verification IP.
A Brief Glance at NVMe
Unlike PCIe, where the root and endpoint are essentially equals, NVMe's asymmetric relationship is closer to that of other storage protocols (e.g. SATA, Fibre Channel).
An NVMe command (e.g. Identify, Read, Write) is initiated at the host and converted to an NVMe request, which is then appended to a particular submission queue that lives in host memory. Once the command is inserted into a queue, the host writes to a per-queue doorbell register on the controller (controllers live on PCIe endpoints). This doorbell write wakes up the controller, which then probes the queue for the new request(s). It reads the queue entry, executes the command (potentially reading data buffers from host memory), appends a completion to a completion queue, and finally notifies the host via an interrupt. The host wakes up, pops that completion off the queue and returns results to the user.
There are two main types of queues that are used:
- Admin Queues – these are used for configuring and managing various aspects of the controller. There is only one pair of Admin queues per controller.
- I/O Queues – these are used to move NVMe protocol specific commands (e.g. Read, Write). There can be up to 64K I/O queues per controller.
Each queue has both a tail (producer) and a head (consumer) pointer. The tail pointer points to the next available entry. After the producer adds an entry to a queue, it increments the tail pointer (taking into consideration that once it gets to the end of the queue, it wraps back to zero; all queues are circular). The queue is considered empty if the head and tail pointers are equal.
The consumer uses its head pointer to determine where to start reading from the queue; after examining the tail pointer and determining that the queue is non-empty, it increments the head pointer after reading each entry.
The submission queue’s tail pointer is managed by the host; after one or more entries have been pushed into the queue, the tail pointer (that was incremented) is written to the controller via a submission queue doorbell register. The controller maintains the head pointer and begins to read the queue once notified of the tail pointer update. It can continue to read the queue until empty. As it consumes entries, the head pointer is updated, and sent back to the host via completion queue entries (see below).
Similarly, the completion queue’s tail is managed by the controller, but unlike the host, the controller only maintains a private copy of the tail pointer. The only indication that there is a new completion queue entry is a bit in the completion queue entry that can be polled. Once the host determines an entry is available, it will read that entry and update the head pointer. The controller is notified of head pointer updates by host writes to the completion queue doorbell register.
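The circular head/tail pointer arithmetic described above can be condensed into a small model. The class and queue size are illustrative; a real submission queue holds 64-byte entries in host memory.

```python
class CircularQueue:
    """Minimal model of an NVMe-style circular queue: the producer
    advances the tail, the consumer advances the head, and both wrap
    back to zero at the end of the queue."""
    def __init__(self, size):
        self.size = size
        self.entries = [None] * size
        self.head = 0   # consumer index
        self.tail = 0   # producer index

    def empty(self):
        return self.head == self.tail  # equal pointers mean empty

    def full(self):
        # One slot stays unused so full and empty are distinguishable.
        return (self.tail + 1) % self.size == self.head

    def push(self, entry):           # producer side (e.g. host -> SQ)
        assert not self.full()
        self.entries[self.tail] = entry
        self.tail = (self.tail + 1) % self.size  # wrap at the end

    def pop(self):                   # consumer side (e.g. controller)
        assert not self.empty()
        entry = self.entries[self.head]
        self.head = (self.head + 1) % self.size
        return entry

sq = CircularQueue(4)
sq.push("IDENTIFY"); sq.push("READ")
print(sq.pop())    # IDENTIFY
print(sq.empty())  # False: READ is still queued
```

In hardware, the doorbell writes described above are simply how the host and controller tell each other about updates to these tail and head values.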
Note that all work done by an NVMe controller is either pulled into or pushed out of that controller by the controller itself. The host merely places work into host memory and rings the doorbell ("you've got a submission entry to handle"). Later it collects results from the completion queue, again ringing the doorbell ("I'm done with these completion entries"). So the controller is free to work in parallel with the host; for example, there is no requirement for ordering of completions, and the controller can order its work any way it likes.
So what are these queue entries that we’re moving back and forth between host and controller?
The first is the Submission Queue Entry, a 64-byte data structure that the host uses to transmit command requests to the controller:
- Command Dwords 15-10 (CDW15-10): 6 dwords of command-specific information.
- PRP Entry 2 (PRP2): Pointer to the PRP entry or buffer, or (in conjunction with PRP1) the SGL segment.
- PRP Entry 1 (PRP1): Pointer to the PRP entry or buffer, or (in conjunction with PRP2) the SGL segment.
- Metadata Pointer (MPTR): This field contains the address of an SGL segment or a contiguous buffer containing metadata.
- Namespace Identifier (NSID): This field specifies the namespace ID that this command applies to.
- Command Dword 0 (CDW0): This field is common to all commands and contains the Command Opcode (OPC), Command Identifier (CID), and various control bits.
One submission queue entry per command is enqueued to the appropriate Admin or I/O queue. The Opcode specifies the particular command to execute and the Command Identifier is a unique identifier for a command (when combined with the Submission Queue ID).
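A sketch of building such a 64-byte submission queue entry follows. Only CDW0 (opcode plus command identifier) and NSID are filled in, with the remaining fields zeroed; the field offsets follow the common NVMe layout (opcode in bits 7:0 and CID in bits 31:16 of CDW0), but treat this as illustrative rather than a spec-complete encoder.

```python
import struct

def build_sqe(opcode, cid, nsid):
    """Pack a minimal 64-byte NVMe submission queue entry."""
    cdw0 = (opcode & 0xFF) | ((cid & 0xFFFF) << 16)  # OPC + CID
    sqe = struct.pack("<II", cdw0, nsid)             # CDW0, NSID
    sqe += b"\x00" * (64 - len(sqe))                 # MPTR, PRP1/2, CDW10-15 zeroed
    return sqe

IDENTIFY_OPC = 0x06  # Admin Identify opcode
sqe = build_sqe(IDENTIFY_OPC, cid=7, nsid=0)
print(len(sqe))  # 64
print(sqe[0])    # 6: the opcode byte
```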
In addition to using queue entries to move information back and forth, the host can also allocate data buffers in host memory. These buffers can either be contiguous (defined by their base address and length) or a set of data buffers spread about the memory. The latter use data structures called PRP lists and Scatter-gather lists (SGL) to define their locations. When the host needs to move these buffers to/from the controller (e.g. for a read or write command), it will allocate the appropriate data structure in host memory and write information regarding those data structures for those buffers into the above PRP1 and PRP2 fields prior to writing the queue entry to that controller.
Metadata (e.g. end-to-end data protection) can also be passed along with the NVMe commands, in two ways. It can be sent either in-band with the data (i.e. it is contiguous with the data, per sector), or out-of-band (i.e. it is sent as a separate data stream). In SCSI parlance these are known as Data Integrity Field (DIF) and Data Integrity Extension (DIX), respectively. The latter of these uses the Metadata Pointer described above. We’ll discuss this in detail in future episodes.
When we are actually writing to or reading from the non-volatile storage on the controller, we write to namespaces. Other storage technologies have analogous containers, for example LUNs in SCSI. Namespaces can be unique to a controller or shared across multiple controllers. Regardless, the namespace ID field in the request determines which namespace is being accessed. Some commands don't use the namespace field (which is then set to 0); others may need to deal with all the namespaces (the namespace ID is then set to 0xFFFF_FFFF).
On the completion side, there is an analogous data structure, the Completion Queue Entry:
- Command Specific Information: One dword of returned information. (Not always used.)
- Submission Queue ID: The submission queue in which the associated command was sent. (16 bits)
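The polling mechanism mentioned earlier, where the only indication of a new completion entry is a bit the host can poll, can be sketched with a phase tag model. The controller flips the phase bit on every pass through the circular completion queue, so the host consumes an entry only when its phase matches the pass it expects. The class and field names are simplified for illustration.

```python
class CompletionPoller:
    """Toy model of host-side completion queue polling via a phase tag."""
    def __init__(self, size):
        self.size = size
        self.cq = [{"phase": 0} for _ in range(size)]  # stale entries
        self.head = 0
        self.expected_phase = 1  # controller writes phase=1 on its first pass

    def controller_post(self, slot, cid, phase):
        self.cq[slot] = {"cid": cid, "phase": phase}

    def poll(self):
        entry = self.cq[self.head]
        if entry["phase"] != self.expected_phase:
            return None  # no new completion yet
        self.head = (self.head + 1) % self.size
        if self.head == 0:
            self.expected_phase ^= 1  # wrapped: expect the flipped phase
        return entry["cid"]

poller = CompletionPoller(size=2)
print(poller.poll())  # None: nothing posted yet
poller.controller_post(0, cid=42, phase=1)
print(poller.poll())  # 42
```

After consuming entries, the host reports its updated head pointer to the controller by writing the completion queue doorbell, as described above.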
Posted in MPHY, NVMe, PCIe, Storage, SystemVerilog | Comments Off
Posted by VIP Experts on September 10th, 2015
In this video, Synopsys Applications Consultant Vijay Akkaraju describes the evolving storage ecosystem, the challenges of verifying storage-protocol-based systems, and how Synopsys' SATA Verification IP can support you in verifying and debugging your designs efficiently and effectively.
You can learn more about VC Verification IP for SATA here.
Posted in Debug, SATA, Storage | No Comments »
Posted by VIP Experts on September 3rd, 2015
In the blog post Seamless Fast Initialization for DDR VIP Models, we discussed how important it is for Memory VIP simulations to have the option of going through the reset and initialization process quickly, getting to the IDLE state, and starting to read and write memory locations. We presented one way to achieve this by scaling down the timings required while going through all the steps the JEDEC standard requires for reset and initialization.
In this blog, we will discuss how Synopsys Memory VIP allows skipping initialization altogether while maintaining the proper behavior of the model.
You can learn more about Synopsys Memory VIP here.
Using the Synopsys Memory VIP's Skip Initialization feature ensures that the model will be in the IDLE state, bypassing the requirements of the reset process. In that state, the VIP is ready to accept commands such as REF, MRS, and ACT. The allowed commands are illustrated below in Figure 1 (the DDR3 SDRAM JEDEC standard JESD79-3F state diagram) and Figure 2 (the DDR4 SDRAM JEDEC standard JESD79-4 state diagram).
Figure 1 – The DDR3 SDRAM JEDEC standard JESD79-3F State Diagram
Figure 2 – The DDR4 SDRAM JEDEC Standard JESD79-4 State Diagram
The Skip Initialization feature is applicable to DDR3, DDR4, and LPDDR. Note that a reset issued after applying backdoor settings with skip init will wipe out all the settings and restore the defaults.
For Discrete devices, we can use the following to set the VIP to skip initialization mode:
// dram_cfg is a handle of class svt_ddr_configuration
dram_cfg.skip_init = 1;
For DIMM devices, we can use the following steps to set the VIP to skip the initialization sequence on a DIMM Model:
// dimm_cfg is handle of svt_ddr_dimm_configuration and
// configuring the skip_init setting for individual DRAM
// configurations with DIMM structure
dimm_cfg.data_lane_cfg[i].rank_cfg[j].skip_init = 1;
// Skip initialization setting for RCD component within an
// RDIMM and LRDIMM
dimm_cfg.ca_buffer_cfg.skip_init = 1;
The skip initialization setting for discrete as well as DIMM models should be done in the build phase, before passing the configuration object through the config_db mechanism.
These settings can also be done after the build phase, but the user will then have to call the reconfigure() method to update the settings in the model. This has to be done prior to any command on the interface.
The following is the syntax for reconfigure() method call:
// For Discrete Device Model
// For DIMM Model
In subsequent blogs, we will discuss how Mode Registers can be set using Frontdoor and Backdoor accesses. So do come back and check it out.
Authored by Nasib Naser
You can learn more about Synopsys Memory VIP here.
Posted in DDR, LPDDR, Memory, Mobile SoC, Processor Subsystems | No Comments »
Posted by VIP Experts on August 27th, 2015
Here, we describe how easy it is to integrate and validate a SoundWire design using Synopsys SoundWire VIP Test Suite.
Often, Verification IP and design integration require in-depth understanding of the protocol and methodology. This requires a significant investment of time in building the expertise in-house. To accelerate the process, Synopsys' SoundWire VIP solution is written in 100% native SystemVerilog to enable ease of use, ease of integration and high performance. In addition, we provide test suites that are complete, self-contained and design-proven testbenches, written in SystemVerilog UVM and targeted at protocol compliance testing. These are provided as source code, enabling users to easily customize or extend the environments to include unique application-specific tests or corner-case scenarios. Using Synopsys VIPs and test suites, our users have reduced verification time from months to, in some cases, a few hours.
Soundwire Test Suite Architecture
Verification IP and RTL design integration is one of the areas where a good test suite architecture helps the most. It is easy to plug the design into the verification IP if the test suite environment is designed with various design configurations in mind. The figure below illustrates our test suite architecture.
The intention of this architecture is to make the environment design-independent, so that it can work with any design with little or no effort. In the figure, the dark and light purple blocks are provided with the test suite; user intervention is only required in the light purple boxes, to customize the environment for a specific DUT. These are one-time changes, and all existing tests and sequences should run as-is afterwards. This significantly reduces the verification time of any design, while giving the user plenty of flexibility to write their own tests and sequences for design-specific scenarios.
In this white paper, you can learn how a customer was able to integrate the SoundWire Test Suite to verify SoundWire Slave IP of a third party vendor and begin test runs within 8 hours. To learn more about the basics of digital audio transmission and MIPI Soundwire, you can download this whitepaper.
Posted in Audio, Interface Subsystems, MIPI, Mobile SoC, Soundwire | Comments Off
Posted by VIP Experts on August 20th, 2015
DDR verification is one of the most critical and complex tasks in any SoC, as it involves a controller sitting inside the DUT and an external DDR memory sitting outside the DUT on the board. Here we will discuss fast initialization for DDR VIP models.
You can learn more about Synopsys Memory VIP here.
As per the JEDEC standard JESD79-4, Section 3.3.1, RESET_n needs to be maintained for a minimum of 200 us. In simulation, this is a very long time. Furthermore, if the user's testbench violates this timing, the Memory VIP will flag it as a UVM_ERROR and fail the simulation. Even though this violation is flagged as an error, it doesn't affect the behavior of the VIP model.
There are a number of ways to get around this violation. In this blog, we will discuss one of these ways.
The Synopsys Memory VIP has an initialization feature called Fast Initialization, also known as scaled-down initialization. The intention of this feature is to allow the initialization parameters to be overridden in order to speed up the initialization process. The new values, whether set by default or customized by the user, enable faster initialization times without triggering any checker violations, and without affecting the initialization behavior of the model. This feature is only available for front-door access (as opposed to backdoor access). We will discuss the types of Memory VIP access in subsequent blog posts.
There are two ways to scale down the initialization parameters: using the default values, or customizing them.
As per the standard, the following are the expected values:
min_cke_high_after_reset_deasserted_in_pu_and_res_init_time_ps = 500000000
min_reset_pulse_width_in_pu_ps = 200000000
Using the default approach, one may call the function set_scaled_initialization_timings() from the build_phase of the configuration object. That call scales down the timing parameters to the values below without triggering checker violations:
min_cke_high_after_reset_deasserted_in_pu_and_res_init_time_ps = 500000
min_reset_pulse_width_in_pu_ps = 200000
To customize the values, the user may set their own values and then set the flag scaled_timing_flag; the VIP will then be configured with the user-provided values. As such:
For Discrete Devices:
// cfg handle of the svt_ddr_configuration class
// Pass the cfg to the DDR Discrete Device component by using // the config_db mechanism.
cfg.timing_cfg.min_cke_high_after_reset_deasserted_in_pu_and_res_init_time_ps = 500000;
cfg.timing_cfg.min_reset_pulse_width_in_pu_ps = 200000;
cfg.timing_cfg.tPW_RESET_ps = 100000;
cfg.timing_cfg.scaled_timing_flag = 1;
For DIMM Models:
// dimm_cfg is handle of svt_ddr_dimm_configuration
dimm_cfg.data_lane_cfg[i].rank_cfg[j].timing_cfg.min_cke_high_after_reset_deasserted_in_pu_and_res_init_time_ps = 500000;
dimm_cfg.data_lane_cfg[i].rank_cfg[j].timing_cfg.min_reset_pulse_width_in_pu_ps = 200000;
dimm_cfg.data_lane_cfg[i].rank_cfg[j].timing_cfg.tPW_RESET_ps = 100000;
dimm_cfg.data_lane_cfg[i].rank_cfg[j].timing_cfg.scaled_timing_flag = 1;
Authored by Nasib Naser
You can learn more about Synopsys Memory VIP here.
Posted in DDR, LPDDR, Memory, Processor Subsystems | Comments Off
Posted by VIP Experts on August 11th, 2015
In a recent post, Paul Graykowski introduced Synopsys VIP for PCIe Gen4.
To dive deeper into the verification closure process, you can now register for our webinar here.
Today's PCIe verification engineers have to trade off verification completeness against shrinking time-to-market, a challenge complicated even further by the new Gen4 specification. Synopsys VC VIP for PCIe, fully compliant with the latest version of the Gen4 specification, can solve the riddle of completing verification while keeping to tight schedules.
This webinar will highlight enhancements to the PCIe specifications (Gen 1, 2, 3 and 4) as reported by PCI-SIG, and provide an overview of the complete PCIe solution offered by Synopsys: controller, PHY and VIP. It will then dive deeper into the Synopsys Verification IP offering, including test suites, built-in error injection and the passive monitor, and it will also touch on NVMe support. We will conclude with a demo using the Verdi Protocol Analyzer to demonstrate advanced features for debugging complex verification scenarios.
Web event: Learn How to Accelerate Verification Closure with PCIe Gen4 VIP
Date: August 19, 2015
Time: 10:00 AM PDT
Posted in Data Center, Debug, Methodology, PCIe | Comments Off
| © 2015 Synopsys, Inc. All Rights Reserved.