VIP Central

First Ethernet 400G VIP to Enable Next-Gen Networking and Communications SoCs

Posted by VIP Experts on November 24th, 2015

On Monday, Synopsys announced the availability of the industry’s first verification IP (VIP) and source code test suite to support the proposed IEEE P802.3bs/D1.0 Ethernet 400G standard (400GbE). To understand how it will enable next-generation networking and communication systems, let’s take a look at the evolution of Ethernet.

Evolution of the Ethernet

Ethernet was first commercially introduced in 1980. It was originally used to connect personal computers to other computers and shared resources like printers in an office. Ethernet continued to grow to cover campuses and data centers, and then to connect these over metropolitan area networks (MAN) and wide area networks (WAN). This evolution of connectivity followed 10X speed jumps from the 1980s to 2010 (10M, 100M, 1G, 10G and 100G) until we reached 100GbE. When the industry saw the challenges of making 100GbE affordable, it developed 40GbE as an interim, lower-cost step. 40GbE opened the door for non-10X steps in speed, including 25G, 50G and 400G.

Birthing a new generation of Ethernet was quite straightforward in the early years: enterprises wanted faster LANs. Vendors figured out ways to achieve that throughput. IT shops bought the speed boost with their next computers and switches. It is a lot more complicated now with carriers, Web 2.0 giants, cloud providers and enterprises all looking for different speeds and interfaces. Facebook, for instance, said in 2010 that it already had a need for Terabit Ethernet in its data centers. With billions of Ethernet devices in use on networks around the world, it is harder to define a specification that satisfies everyone.

Modern networks are built on a hierarchy of devices, with “deeper” switches or routers networking the edge devices together for full connectivity. This has encouraged the use of successively faster trunks as port speeds have increased. More traffic demands more trunk capacity, but traffic does not impact the WAN and LAN uniformly, so the needs may be vastly different in different types of networks.

Cloud computing encourages the creation of dense, highly connected data centers. Cloud applications often have more components and are more horizontally integrated than traditional applications, which makes traditional multi-tiered LAN switching performance more problematic. In a cloud data center, even 10 GbE server/storage interfaces connected in a four- or five-layer structure might drive switch interface speeds to 400G or more in the deeper layers. When clouds are created by linking multiple data centers over fiber, a super-Ethernet connection is almost inevitable. This is where we need faster Ethernet switches: to connect cloud data centers and support optical metro aggregation and OTN-based cloud core networks.

“The 400GbE development effort, started in the IEEE 802.3 Ethernet Working Group back in March 2013, remains driven by the need to provide higher speed solutions for core networking applications that depend on the aggregation of data,” said John D’Ambrosia earlier this year in Enterprise Networking Planet. John is the Chairman of the Ethernet Alliance and Chief Ethernet Evangelist in the CTO office at Dell. In November 2013, the IEEE 400 Gb/s Ethernet Study Group approved project objectives for four different link distances of 400GbE. These were approved by the IEEE 802.3 Working Group in March 2014.

Last week, Facebook announced it is testing a 100 Gbit/second top-of-rack Ethernet switch for its next-generation data centers. Networking hardware vendors, like Cisco, Arista, and Mellanox, already offer 100 GbE switches. 

Enabling Next-Gen Networking and Communication SoCs

As the need for increased bandwidth to support video-on-demand, social networking and cloud services continues to rise, Synopsys VC VIP for Ethernet 400G enables system-on-chip (SoC) teams to design next-generation networking chips for data centers with ease of use and integration.


Synopsys VC VIP for Ethernet is built on a native SystemVerilog Universal Verification Methodology (UVM) architecture and comes with protocol-aware debug and source code test suites. Synopsys VC VIP is capable of switching speed configurations dynamically at run time, and includes an extensive and customizable set of frame generation and error injection capabilities. In addition, source code UNH-IOL test suites are available for key Ethernet features and clauses, allowing teams to quickly jumpstart their own custom testing and speed up verification time.

Synopsys thus provides a comprehensive Ethernet solution for all speeds, including 25G, 40G, 50G, 100G and the newest 400G standards.

You can learn more about Synopsys VC VIP for Ethernet and source code UNH-IOL test suites here.

Posted in Data Center, Ethernet, Methodology, SystemVerilog, Test Suites, UVM | No Comments »

Accelerate your MIPI CSI-2 Verification with a Divide and Conquer Approach

Posted by VIP Experts on November 19th, 2015

MIPI Alliance’s CSI-2 (Camera Serial Interface) has achieved widespread adoption in the smartphone industry for its ease-of-use and ability to support a broad range of imaging solutions. MIPI CSI-2 v1.3, which was announced in February 2015, also offers users the opportunity to operate CSI-2 on either of two physical layer specifications: MIPI D-PHY, which CSI-2 has used traditionally, as well as MIPI C-PHY, a new PHY that MIPI first released in September 2014. Products may implement CSI-2 solutions using either or both PHYs in the same design. MIPI CSI-2 v1.3 with C-PHY provides performance gains and increased bandwidth delivery for realizing higher resolution, better color depth, and higher frame rates on image sensors while providing pin compatibility with MIPI D-PHY.

MIPI CSI-2 poses unique verification and debugging challenges: multiple image formats, several different image resolutions, multiple virtual channels, different types of long and short packets, error injection scenarios, ultra-low power mode, and support for both MIPI C-PHY and D-PHY. Since MIPI CSI-2 is considered a mature technology – it has been around for a decade – it also demands a short time-to-market cycle. So how should you as a developer meet the challenges of increasing complexity along with shortening schedules?

Your verification schedule can be cut down significantly when you use Synopsys’ built-in MIPI CSI-2 test suites, monitors and coverage models along with our CSI-2 VIP. The test sequences and scoreboard are customizable. Coupled with the protocol analyzer, this also cuts down debug cycles, another big bottleneck in achieving functional closure.
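For a flavour of what such coverage looks like, here is a minimal SystemVerilog sketch, assuming the 2-bit virtual channel and 6-bit data type fields of the CSI-2 packet header; the struct and class names are illustrative and not the actual Synopsys VIP classes.

typedef struct packed {
  bit [1:0]  virtual_channel;  // CSI-2 v1.3 allows 4 virtual channels
  bit [5:0]  data_type;        // image format / packet type
  bit [15:0] word_count;       // payload length (long) or data field (short)
  bit [7:0]  ecc;              // header error correction code
} csi2_pkt_header_t;

class csi2_header_cov;
  csi2_pkt_header_t hdr;

  covergroup hdr_cg;
    cp_vc : coverpoint hdr.virtual_channel;
    cp_dt : coverpoint hdr.data_type {
      bins short_pkts = {[6'h00:6'h0F]};  // sync and generic short packets
      bins long_pkts  = {[6'h10:6'h3F]};  // YUV / RGB / RAW / generic long packets
    }
    cross cp_vc, cp_dt;                   // every format on every virtual channel
  endgroup

  function new();
    hdr_cg = new();
  endfunction

  function void sample(csi2_pkt_header_t h);
    hdr = h;
    hdr_cg.sample();
  endfunction
endclass

A monitor would call sample() on every decoded packet header; crossing image formats with virtual channels is only one small corner of the state space listed above, which is why a pre-built coverage model saves so much effort.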


You can learn more about how Synopsys’ customizable MIPI CSI-2 test suite and coverage model can accelerate your CSI-2 verification by downloading one of our customer case studies here. The article describes a divide-and-conquer approach that enabled the customer to verify the MIPI PHY and the MIPI MAC separately. It also discusses how the scoreboard and IDI monitor worked well with their design’s custom interface on the application side, and how the highly configurable architecture of the VIP and test suites enables them to reuse their entire testbench for future updates of the design as well as of the MIPI specifications.

Here’s where you can learn more about Synopsys’ VC Verification IP for MIPI CSI-2 and CSI-2 Test Suites, and the customer case study for using a divide-and-conquer approach.

Authored by Anand Shirahatti

Posted in C-PHY, Camera, CSI, D-PHY, MIPI | No Comments »

Synopsys AMBA 5 AHB5 Verification IP: What’s It All About?

Posted by VIP Experts on November 11th, 2015

This week, at ARM TechCon 2015, Synopsys announced the availability of our VC Verification IP for the new ARM AMBA 5 Advanced High-Performance Bus 5 (AHB5) interconnect. The AHB5 protocol is an update to the widely adopted AMBA 3 AHB (AHB3) specification. It extends the TrustZone security foundation from the processor to the entire system for embedded designs. AHB5 supports the newly announced ARMv8-M architecture, which drives security into the hardware layer to ensure developers have a fast and efficient way of protecting any embedded or Internet of Things (IoT) device.

AHB5 can enable high-performance multi-master systems with support for exclusive transfers and additional memory attributes for seamless cache integration.  It adds multiple logical interfaces for a single slave interface so you can address multiple peripherals over one bus.

The new AHB5 protocol also aligns more closely with the AMBA 4 AXI protocol, enabling easier integration of AXI and AHB5 systems. AHB5 adds secure/non-secure signaling, so peripherals can correctly track the security state of each transfer, and it adds support for user signals.
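To make the new signals concrete, here is a small, illustrative SystemVerilog interface slice covering the AHB5 additions mentioned above (exclusive transfers, secure/non-secure indication and user signals). The widths and the exact subset of signals shown are assumptions for the sketch, not a complete AHB5 interface definition.

interface ahb5_if #(parameter int ADDR_W = 32, DATA_W = 32, USER_W = 8)
                   (input logic hclk, hresetn);
  logic [ADDR_W-1:0] haddr;
  logic [DATA_W-1:0] hwdata, hrdata;
  logic [1:0]        htrans;
  logic              hwrite, hready, hresp;
  // AHB5 additions
  logic              hnonsec;    // 0 = secure transfer, 1 = non-secure transfer
  logic              hexcl;      // exclusive transfer request
  logic              hexokay;    // exclusive transfer success response
  logic [USER_W-1:0] hauser;     // address-phase user signal
  logic [USER_W-1:0] hwuser;     // write-data user signal
  logic [USER_W-1:0] hruser;     // read-data user signal
endinterface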


The existing AHB system environment, which is part of the AMBA system environment, supports AMBA AHB2 and AHB3-Lite. Now we have extended it to support the AHB5 protocol: users simply change a few configuration attributes and make the signal connections required for that configuration.

In addition, Synopsys offers advanced system-level capabilities for the ARM AMBA 5 CHI and AMBA 4 ACE protocols. The AMBA 5 CHI is an architecture for system scalability in enterprise SoCs, while AMBA 4 ACE is used for full coherency between processors. The expanded capabilities of Synopsys VIP include system level test-suites, a system monitor, protocol-aware debug and performance analysis. With the growth of cache-coherent designs, checkers and performance analysis are required. The system-level capabilities of Synopsys VIP enable SoC teams to further accelerate time to first test and improve overall verification productivity.

Synopsys VIP features SystemVerilog source code test-suites, which include system-level coverage for accelerated verification closure. The VIP now also offers performance measurement metrics for in-depth analysis of throughput, latency and bottlenecks across cache coherent ports. Synopsys VIP also features system monitors, which interact with other VIP to ensure cache coherency across the system, accurate protocol behavior and data integrity.

To learn more about VIP support for ARM cache coherent interconnects, register for our webinar on November 18th: A Holistic Approach to Verification: Synopsys VIP for ARM AMBA Cache Coherent Interconnects.

Posted in AMBA, SystemVerilog, Test Suites | No Comments »

ARM TechCon: Optimize SoC Performance with Synopsys Verification IP and Verdi Unified Debug

Posted by VIP Experts on November 5th, 2015

Companies developing complex ARM-based SoC designs have to constantly keep up with evolving interface standards and proliferating protocols, a recurring problem that is resource-intensive and time-consuming. Orchestrating these multiple protocols is critical to extracting maximum SoC performance, a key competitive differentiator. Achieving high performance while ensuring correct protocol behavior is best addressed by a combination of transaction-based, protocol-aware verification and debug environments. Synopsys VIP coupled with the Verdi unified debug platform spans verification planning, simulation debug, coverage, HW-SW debug and emulation debug, and helps tackle this challenge end-to-end.


This training session at ARM TechCon 2015 explains how Verdi’s native Protocol Analyzer and Memory Protocol Analyzer help ensure protocol correctness, and how Verdi Performance Analyzer measures individual protocol performance on Synopsys VIP such as AMBA CHI, ACE and AXI. Additionally, the presentation addresses how these advanced capabilities lend themselves to analyzing performance on non-standard SoC interfaces. Capturing dataflow with the Verdi ‘VC apps’ API allows observation of the entire test environment and measurement of performance at interfaces of interest. Verdi provides a unified environment for designers to analyze this information, uncover actionable insights and drive design decisions to maximize SoC performance.

Please join us at ARM TechCon on Wednesday, November 11.

Speaker:  John Elliott  |  Sr. Staff Engineer, Synopsys
Location:  Mission City Ballroom M1
Date:  Wednesday, November 11
Time:  2:30pm – 3:20pm

Posted in AMBA, CHI, Debug, Methodology | No Comments »

PCIe Gen4 – VIP/IP Solution with Protocol-Aware Debug and Source Code Test Suites

Posted by VIP Experts on October 27th, 2015

Today’s PCIe verification engineers have to trade off verification completeness against demanding time-to-market, and the new Gen4 specification makes this even more challenging. This video highlights Synopsys’ complete PCIe Gen4 solution, which includes implementation IP (Controller/PHY), Verification IP, protocol-aware debug and source code test suites to accelerate verification closure.

Here’s where you can learn more about Synopsys’ VC Verification IP for PCIe and PCIe Test Suites.

Posted in Debug, Methodology, PCIe, SystemVerilog, Test Suites, UVM | No Comments »

Synopsys NVMe VIP Architecture: The Host Protocol Layers

Posted by VIP Experts on October 20th, 2015

Our previous post on NVMe was an overview of the NVMe protocol. We will now take a closer look at the VIP proper, starting with the NVMe Host Protocol layers. This will provide an introductory overview of sending commands to the NVMe Controller.

Here’s where you can learn more about Synopsys’ VC Verification IP for NVMe and for PCIe.

A High-Level View of the VIP

There are several major blocks to the VIP as shown here:


The NVMe VIP Host Methodology Layers

The UVM Methodology Interface – this allows users and their test-cases to control, monitor and request commands of the NVMe Host VIP via the transaction-oriented UVM methodology. Sequencer and Register models provide access to the VIP.

The NVMe VIP Host Protocol Layers

This implements the NVMe Host-side protocol – everything from creating data structures (e.g. queues, PRPs and SGLs) in Host Memory, to pushing requests into queues and ringing doorbells, to fielding interrupts and popping completion queues.

The NVMe Controller VIP

This is the NVMe Controller Model – it responds to the register accesses sent to it, including reads/writes of the various configuration and control registers, handling doorbells, reading and writing Host Memory (to access queues and data) and essentially implementing the NVMe Controller side specification.

In this post, we will be concentrating on the NVMe VIP Host Protocol Layers (in the above Figure, this is the area surrounded by the dashed line.)

Layers of the NVMe VIP Host

Although in the above diagram, the Host Protocol Layer is shown as part of a larger VIP system, it can also be considered as a standalone VIP in its own right. We will, in fact, describe the various layers of the VIP in this way, hiding the UVM methodology, PCIe protocol and the NVMe Controller as much as possible.

A quick review of the NVMe protocol will help us explain the use of the various layers; we’ll go over some examples of NVMe configuration, control and commands, emphasizing those layers that are involved. Here are the layers that we’ll be discussing:


There are only three layers we will be dealing with directly, but to start out, we will use a trivial NVMe test case to help us explain the function of the various layers. The VIP Host Layer has a simple-to-use Verilog-based command interface – the various NVMe configuration operations and commands are mapped to Verilog tasks/functions to implement the NVMe Host. Note that although the command interface is easy to use, under the covers this is a full NVMe application and driver layer that handles much of the protocol leg-work where a “BFM” would be out of its league.

Here’s our trivial test case (as a simplification, we are not showing some of the task arguments or the error-status checking; our plan here is to describe the VIP functionality in terms of the test-case commands). On with it…

// We will assume that the PCIe stack is setup and running
bit [63:0] base_addr = 64'h0001_0000;	// Ctlr NVMe BAR base addr
// Tell the host where the controller has its base address
AllocateControllerID(base_addr, ctlr_id, status);
// Create the Admin Completion and Submission Queues
ScriptCreateAdminCplQ(ctlr_id, num_q_entries, status);
ScriptCreateAdminSubQ(ctlr_id, num_q_entries, status);
// Send an Identify Controller Command
data_buf_t #(dword_t) identify_buffer;		// identify data
identify_buffer = new(1024);
ScriptIdentify(ctlr_id, 0, identify_buffer, status);

Ok, enough for now. A few comments on the above test case – these are the actual tasks to call to accomplish the various configuration steps and commands (minus a few arguments, as mentioned above). The tasks that start with the word Script issue actual NVMe commands; those that don’t are VIP configuration utilities (e.g. AllocateControllerID()).

All these commands are implemented at the above NVMe Command Layer (denoted in Red in the figures) – this is the Verilog Interface Command Layer.

We start with the AllocateControllerID(base_addr, ctlr_id, status) task call. This generates a request to the NVMe Queuing Layer to build us a data structure that keeps track of our attached controller(s). The returned ctlr_id is used as a “handle” for any communication to that controller. You will note that the later NVMe commands (prefixed by Script…) use the ctlr_id to determine the destination of the command. One can call AllocateControllerID() for as many controllers as one wants to access; a unique handle will be returned for each.

Once we have the handle to communicate with the Controller, we can use it – we call ScriptCreateAdminCplQ(ctlr_id, num_q_entries, status) to do several things for us (see diagram below):

  • In the NVMe Queuing Layer, we allocate some memory from the pool of NVMe memory and create a data structure in Host Memory: a Completion Queue of the requested depth.
  • The register transactions for the appropriate admin queue registers (the completion queue size in AQA.ACQS and the base address in ACQ) are built; a sketch of these writes follows the list. (Note that Admin Queues are created by writing to the Controller’s NVMe register space.)
  • The registers are written. This is done by creating the appropriate PCIe MemWr TLP transactions in the NVMe Protocol Interface Layer, which are then sent to the PCIe Transaction Layer to write the appropriate register(s) on the controller.
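As a rough illustration of what those register writes look like at the NVMe register level, here is a small self-contained sketch. The register offsets come from the NVMe specification (AQA at 0x24, ACQ at 0x30); the module, the example addresses and the $display calls standing in for PCIe MemWr TLPs are all assumptions for the sketch, not the VIP’s internal implementation.

module admin_cq_regs_demo;
  // NVMe controller register offsets (per the NVMe specification)
  localparam int AQA = 'h24;  // Admin Queue Attributes: ASQS[11:0], ACQS[27:16]
  localparam int ACQ = 'h30;  // Admin Completion Queue base address

  initial begin
    bit [63:0] bar      = 64'h0001_0000;  // controller BAR base, matching the test case
    bit [63:0] acq_base = 64'h2000_0000;  // hypothetical host-memory address of the queue
    int        entries  = 16;             // requested queue depth
    bit [31:0] aqa_val  = '0;

    aqa_val[27:16] = entries - 1;         // ACQS is a zero-based count
    // The real flow would also program ASQS for the submission queue, and these
    // writes would go out as PCIe MemWr TLPs targeting the controller's BAR.
    $display("MemWr32 addr=%h data=%h  (AQA: ACQS)", bar + AQA, aqa_val);
    $display("MemWr64 addr=%h data=%h  (ACQ base)",  bar + ACQ, acq_base);
  end
endmodule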


The Admin Submission Queues are created analogously with ScriptCreateAdminSubQ(). Note that the host-side memory management is done for you, as is managing the associated queue head and tail pointers. In addition, the VIP checks the memory accesses to those queues to make sure they follow the spec (e.g. a submission queue should not be written by the controller).

Once the Admin Queues have been built, we can use them to communicate admin NVMe commands to the controller. In this case, we will call the NVMe Identify Controller command, used to gather detailed information about the controller. The ScriptIdentify() task (see figure below) is used both for Identify Controller and (if the first argument is non-zero) for Identify Namespace. Since the Identify commands return a 4KB (1024 dword) buffer of identify information, we allocate that prior to calling the task.

Since the Identify command requires a memory buffer to exist in host memory (to hold the contents of the Identify data), that is allocated in host memory and passed as a buffer in the submitted command. Once the controller receives the command (by reading the Admin Submission Queue), it executes the Identify command, and uses the underlying (PCIe) transport to move the data from the controller to the host.


Once the command has completed, and the host has retrieved the completion queue entry (and verified the status), the host can copy that buffer data from host memory to the identify_buffer provided by the user. Note that the VIP takes care of building all the appropriate data structures, and also generates (and responds to) the associated transactions to execute the command, all the while monitoring the protocol.


We’ve gone over the basic layers and architecture of the Synopsys NVMe VIP, and you should now have an idea of how NVMe commands are sent to the controller via those layers. More detail follows in upcoming episodes, including more VIP details and features, more advanced topics in the NVMe protocol and the use of other Verification Methodologies (such as UVM) to configure, control, monitor and submit commands with the VIP.

Thanks again for browsing, see you next time!

Authored by Eric Peterson

Here’s where you can learn more about Synopsys’ VC Verification IP for NVMe and for PCIe.

Posted in Methodology, NVMe, PCIe, UVM | No Comments »

MIPI UniPro: Major Differentiating Features, Benefits and Verification Challenges

Posted by VIP Experts on October 13th, 2015

MIPI UniPro is a recent addition to mobile chip-to-chip interconnect technology. It has many useful features to meet the requirements of mobile applications. That’s perhaps why Google’s Project Ara has selected MIPI UniPro and MIPI M-PHY as its backbone interconnects.

In this blog post, we describe three differentiating features, their benefits and their verification challenges. The discussion is based on MIPI UniPro 1.6.

  1. Achieving Low power consumption through Power mode changes and hibernation
  2. Flexibility in chip-to-chip lane routing through Physical Lane mapping
  3. Enhanced QoS through CPort arbitration & Data link layer pre-emption

You can learn more about our VC VIP for UniPro and M-PHY here.

1. Achieving Low power consumption through Power mode changes and hibernation


MIPI UniPro provides six power modes to meet different needs. In SLOW mode, it supports seven gears with operational speeds ranging from 3Mbps to 576Mbps per lane. In FAST mode, it supports three gears with operational speeds ranging from 1.5Gbps to 6Gbps per lane. Both SLOW and FAST can be coupled with automatic closure of the M-PHY BURST during traffic gaps; these are the AUTO modes. In the complete absence of traffic, the hibernate mode is used, and all unconnected lanes shall be put in the OFF mode. UniPro allows independent power mode settings for the transmit and receive directions.

UniPro allows the dynamic selection of the number of lanes, gear and power mode per direction using the power mode change request (DME_POWERMODE) primitive, and hibernate state transitions through the DME_HIBERNATE_ENTER and DME_HIBERNATE_EXIT primitives. The MIPI UniPro L1.5 layer accomplishes these requests through PHY Adapter Configuration Protocol (PACP) frames of the type PACP_Pwr_Req and PACP_PWR_Cnf. Traffic is paused briefly during the power mode change procedure; the new power mode settings are applied simultaneously on both ends after the procedure completes, and traffic is then resumed.


This feature allows MIPI UniPro to achieve optimal “performance per watt” by setting the appropriate power mode. Based on the application’s data traffic bandwidth and latency requirements, the number of lanes and their operational speed can be scaled dynamically in each direction.

Verification Challenge

The following parameters give rise to a large state space:

  • 6 different power modes
  • 7 gears in SLOW mode and 3 gears in FAST mode
  • Up to 4 lanes, which can be scaled down to any value
  • Asymmetric settings of mode, gear and lanes in the two directions

Functional verification will have to cover the unique combinations of the above power mode state space (mode x lane x gear); a coverage sketch follows the list below. Additionally, two more important types of transitions have to be covered:

  • Transitions from one possible unique power mode combination to another possible unique combination (~1600 combinations)
  • Hibernate entry and exit from each unique power mode state
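Here is a minimal SystemVerilog coverage sketch of that state space, assuming illustrative enum and field names rather than the actual VIP classes; in a real environment, ignore_bins would exclude the gear/mode combinations the specification does not allow, and the model would be instantiated per direction.

typedef enum {FAST, SLOW, FAST_AUTO, SLOW_AUTO, HIBERNATE, OFF} unipro_pwr_mode_e;

class unipro_pwr_mode_cov;
  unipro_pwr_mode_e mode;
  int unsigned      gear;   // 1..7 in SLOW (PWM) mode, 1..3 in FAST (HS) mode
  int unsigned      lanes;  // 1..4 active lanes

  covergroup pwr_cg;
    cp_mode : coverpoint mode;
    cp_gear : coverpoint gear  { bins g[] = {[1:7]}; }
    cp_lane : coverpoint lanes { bins l[] = {[1:4]}; }
    cross cp_mode, cp_gear, cp_lane;  // unique (mode x gear x lane) combinations
  endgroup

  function new();
    pwr_cg = new();
  endfunction

  function void sample(unipro_pwr_mode_e m, int unsigned g, int unsigned l);
    mode = m; gear = g; lanes = l;
    pwr_cg.sample();
  endfunction
endclass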

This requires constrained-random stimulus support. The constrained-random stimulus generation is not quite straightforward: it will have to take into consideration:

  • Current power mode state
  • Capabilities of both the peer and the local device

Based on these parameters, legal power mode changes will have to be initiated from both the VIP and the DUT side.
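Here is a rough sketch of such a constraint model, reusing the unipro_pwr_mode_e enum from the coverage sketch above. The field names and capability limits are illustrative assumptions, not the Synopsys VIP API; in practice the limits would come from the local and peer capability attributes read during link startup.

class unipro_pwr_mode_change;
  rand unipro_pwr_mode_e tx_mode, rx_mode;   // asymmetric settings per direction
  rand int unsigned      tx_gear,  rx_gear;
  rand int unsigned      tx_lanes, rx_lanes;

  // Illustrative capability limits (would be read from local/peer attributes)
  int unsigned max_hs_gear     = 3;
  int unsigned max_pwm_gear    = 7;
  int unsigned connected_lanes = 4;

  constraint c_lanes {
    tx_lanes inside {[1:connected_lanes]};
    rx_lanes inside {[1:connected_lanes]};
  }
  constraint c_gear {
    (tx_mode inside {FAST, FAST_AUTO}) -> tx_gear inside {[1:max_hs_gear]};
    (tx_mode inside {SLOW, SLOW_AUTO}) -> tx_gear inside {[1:max_pwm_gear]};
    (rx_mode inside {FAST, FAST_AUTO}) -> rx_gear inside {[1:max_hs_gear]};
    (rx_mode inside {SLOW, SLOW_AUTO}) -> rx_gear inside {[1:max_pwm_gear]};
  }
endclass

Randomizing an object like this before each power mode change request gives legal, asymmetric mode/gear/lane combinations; additional constraints against the current power mode state can be layered on top.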

2. Flexibility in chip-to-chip lane routing through Physical Lane mapping


UniPro allows the use of multiple lanes (up to 4) to scale the bandwidth. The UniPro PHY adapter layer takes care of distributing and merging the data across lanes. During the L1.5 layer’s multi-phase initialization sequence, the total number of connected lanes and their physical-to-logical lane mapping are determined.


Training sequence identifying the logical and physical lane mapping. Source: MIPI


This feature provides flexibility in UniPro’s chip-to-chip lane routing. Considering the small footprint requirements of mobile hardware, this will surely ease the printed circuit board designer’s life.

Verification challenge

From a verification point of view, we need to cover the following:

  • Different number of lanes connected, and
  • Every physical lane getting mapped to every possible logical lane

Typically, the number of connected lanes, and the logical-to-physical mapping used for those lanes, are randomized through configuration. Based on this configuration, the VIP drives the specified number of lanes and advertises them appropriately to the DUT.
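A minimal sketch of such a randomized configuration is shown below; the class and field names are illustrative, not the actual VIP configuration class.

class unipro_lane_map_cfg;
  rand int unsigned num_lanes;           // number of connected lanes
  rand int unsigned phy_of_logical[4];   // phy_of_logical[i] = physical lane for logical lane i

  constraint c_num { num_lanes inside {[1:4]}; }
  constraint c_map {
    foreach (phy_of_logical[i]) phy_of_logical[i] inside {[0:3]};
    unique {phy_of_logical};             // no two logical lanes share a physical lane
  }
endclass

Only the first num_lanes entries of the mapping are actually used; repeated randomization then exercises every lane count and every legal logical-to-physical permutation over time.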

3. Enhanced QoS through CPort arbitration & Data link layer pre-emption


MIPI UniPro supports two traffic classes: traffic class 0 (TC0) and traffic class 1 (TC1). TC0 support is mandatory while TC1 support is optional. Priority-based arbitration between traffic classes is supported. The MIPI UniPro stack, right from its transport layer (L4) down to the data link layer (L2), is traffic-class aware in order to provide enhanced Quality of Service (QoS).

At the transport layer, the logical data connection is the connection-oriented port (CPort), which is mapped to either TC0 or TC1. CPorts mapped to the higher-priority traffic class take precedence over CPorts mapped to the lower-priority traffic class. Within a traffic class, segment-level round robin is the default arbitration scheme.
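The following is an illustrative sketch of that arbitration scheme: strict priority between the traffic classes (TC1 over TC0) and round robin among the CPorts mapped to the winning class. The data structures, and the fact that the sketch ignores segment granularity, are simplifications for illustration, not the VIP’s implementation.

class cport_arbiter;
  int cports_of_tc[2][$];   // CPort ids mapped to TC0 (index 0) and TC1 (index 1)
  int rr_ptr[2];            // round-robin pointer per traffic class

  // Return the next CPort to service, given which CPorts currently have data queued.
  function int next_cport(bit has_data[int]);
    for (int tc = 1; tc >= 0; tc--) begin          // TC1 first: higher priority
      int n = cports_of_tc[tc].size();
      if (n == 0) continue;
      for (int k = 0; k < n; k++) begin
        int idx = (rr_ptr[tc] + k) % n;            // round robin within the class
        int cp  = cports_of_tc[tc][idx];
        if (has_data.exists(cp) && has_data[cp]) begin
          rr_ptr[tc] = (idx + 1) % n;
          return cp;
        end
      end
    end
    return -1;                                     // nothing to send
  endfunction
endclass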

To reduce delays and improve QoS, the data link layer can insert high-priority frames within a lower-priority data frame that is under transmission. This optional feature is called pre-emption. The concept is extended to other control frames as well, improving latency and reducing the bandwidth wasted during retransmission.


Composition with pre-emption (Traffic class Y > X). Source: MIPI


CPort arbitration and pre-emption provide fine-grained control over communication latency, enabling improved QoS for latency-sensitive traffic.

Verification challenge

From the verification point of view, we need to address:

  • Meeting the overall intent of QoS feature
  • Ensuring that the pre-emption feature is functionally implemented correctly

The QoS feature intent can be verified by measuring the latency on both the transmit and receive paths of the DUT. This can be done as additional functionality of the scoreboard: it records the timestamps of messages entering and exiting the DUT’s ports, on both the CPort and the serial line, and checks the transmit- and receive-path latencies against the desired configured values. Violations can be flagged as warnings or errors based on the percentage of violations.
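A minimal sketch of such a check is shown below, assuming a per-message id that the scoreboard can correlate between the CPort and serial-line sides; the class name, the budget value and the id scheme are illustrative assumptions, not the VIP’s scoreboard API.

class latency_checker;
  realtime enter_ts[int];        // entry timestamp keyed by message id
  realtime budget = 2us;         // desired latency budget from configuration

  function void note_entry(int msg_id);
    enter_ts[msg_id] = $realtime;
  endfunction

  function void note_exit(int msg_id);
    realtime lat;
    if (!enter_ts.exists(msg_id)) return;   // nothing recorded for this id
    lat = $realtime - enter_ts[msg_id];
    if (lat > budget)
      $warning("msg %0d latency %0t exceeds budget %0t", msg_id, lat, budget);
    enter_ts.delete(msg_id);
  endfunction
endclass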

To ensure that the pre-emption feature is functional, both legal and illegal pre-emption cases need to be exercised. Based on the supported-priorities table of the DL arbitration scheme, there are 18 illegal and 35 legal pre-emption scenarios possible. Both legal and illegal cases must be covered on both the transmit and receive paths of the DUT, including multi-level pre-emptions.

For the verification of all these features, a well-architected Verification IP plays a critical role. A Verification IP with the right level of flexibility and control can significantly accelerate verification closure.

Authored by Anand Shirahatti, Divyang Mali, Naveen G

You can learn more about our VC VIP for UniPro and M-PHY here.

Posted in Methodology, MIPI, Mobile SoC, MPHY, Unipro | No Comments »

Get Ready for IoT with Synopsys PCIe VC Verification IP Workshop

Posted by VIP Experts on October 6th, 2015

The Internet of Things (IoT) is connecting billions of intelligent “things” to our fingertips. The ability to sense vast amounts of information and communicate it to the cloud is driving innovation in IoT applications. Servers powering the cloud will have to scale to handle these billions of intelligent things. In preparation for that, PCIe Gen4 has been introduced; it is capable of 16 GT/s (gigatransfers per second) per lane. The primary market driver for PCIe Gen4 currently appears to be server storage.

PCI-SIG and MIPI are also collaborating on supporting MIPI M-PHY with PCIe: M-PCIe is a version of the PCIe protocol for mobile interconnect.

PCIe has had its own evolution through Gen1, Gen2, Gen3 and now Gen4. With every new generation the speed has doubled, and so has the complexity. A proven PCIe Verification IP with support for all the speeds can significantly reduce the verification schedule. If such a Verification IP is also bundled with a test suite and a coverage suite, it can certainly reduce verification risk. What if such a Verification IP also comes bundled with support for protocol-aware debug in Verdi?

Synopsys offers all these features in a single PCIe VC Verification IP offering:

  • Support for Gen1, Gen2, Gen3 and Gen4 speeds
  • Support for MPCIe
  • Support for the NVMe application
  • Includes a bundled test suite
  • Built-in support for protocol-aware debug in Verdi


Come experience our product hands-on through a PCIe Workshop in your region. This workshop will provide you with a unique opportunity to learn about:

  • The ease of the VC Verification IP programming interface for normal transfers, error injection and low-power scenarios
  • The various outputs generated by the VC Verification IP for debug, and how to use the different abstraction levels of debug information, from signals to text logs to protocol-aware debug, within a single Verification IP
  • How to integrate your DUT into the test suite environment and get it going quickly

Recently, PCIe workshops were held in Mountain View, California, and Bangalore, India. Participants in these workshops told us that they loved the new Verdi features for protocol-aware debug and coverage back-annotation. Error injection capabilities, coupled with the various debug capabilities at each layer, gave them the confidence to left-shift verification closure.

Free registration is now open for Shanghai-China, Tokyo-Japan, Austin-Texas and Herzelia-Israel.

Authored by Sadiya Ahmed, Anunay Bajaj and Anand Shirahatti

Posted in Debug, Methodology, MIPI, MPHY, PCIe, SystemVerilog | Comments Off

Protocol Debug for Complex SoCs

Posted by VIP Experts on September 29th, 2015

Here, Bernie DeLay, group director for Verification IP R&D at Synopsys, talks to Ed Sperling of Semiconductor Engineering about the challenges of debugging protocols in complex SoCs.

You can learn more about our VIPs at Verification IP Overview.

Posted in AMBA, DDR, Debug, Memory, Methodology, PCIe, Processor Subsystems, Storage, USB | Comments Off

MIPI UniPro for PCIe Veterans

Posted by VIP Experts on September 22nd, 2015

The MIPI Unified Protocol (UniPro) specification defines a layered protocol for interconnecting devices and components within mobile device systems. It is applicable to a wide range of component types including application processors, co-processors and modems. MIPI UniPro powers the JEDEC UFS, MIPI DSI-2 and MIPI CSI-3 applications. So far, MIPI UniPro has seen the most adoption in the mobile storage segment through JEDEC UFS. Adopting MIPI UniPro and MIPI M-PHY provides lower-power, higher-performance solutions.

You can learn more about our VC VIP for UniPro, M-PHY, PCIe and JEDEC UFS here.

Many PCIe veterans may already have begun implementing MIPI UniPro in their designs. This blog post takes you through a quick view of the MIPI UniPro and MIPI M-PHY stack from a PCIe perspective. As you will notice, there are many similarities.


PCI Express provides switch-based point-to-point connections between chips, and MIPI UniPro does the same, although the current UniPro 1.6 specification does not yet support switches; that is planned for future revisions. Toshiba has already released detailed technical documentation of a UniPro bridge and switch supporting Google’s Project Ara. Both are packet-switched, high-speed serial protocols.





Transport Layer

PCI Express maintains backward compatibility and uses the load/store model of the PC world: Configuration, Memory, I/O and Message address space accesses are supported at the transaction level. MIPI UniPro, on the other hand, is brand new and hence does not have to carry the burden of ensuring backward compatibility. UniPro provides raw communication of data in the form of messages; there is no structure imposed on the messages.

Both PCI Express and UniPro support the concept of multiple logical data streams at the transport level. PCI Express has Traffic Classes (TCs) and Virtual Channels (VCs): a maximum of 8 TCs is supported, and TCs can be mapped to different VCs. This concept of TCs and VCs is targeted at providing deterministic bandwidth and latency.

UniPro has a similar concept: a PCI Express Traffic Class (TC) is equivalent to a MIPI UniPro CPort, and a PCI Express Virtual Channel (VC) is equivalent to a UniPro Traffic Class (TC). (Yes, you noticed the TC terminology right: both use the same term, but the meanings are different.) CPorts are bidirectional logical channels, and each CPort has a set of properties that characterize the service provided. Multiple CPorts can be mapped to a single TC. UniPro 1.6 supports two TCs, TC0 and TC1, with TC1 having higher priority than TC0. Additionally, UniPro provides End-to-End (E2E) flow control at the transport level; note that UniPro E2E flow control is meant for application-level buffer management and not for the transport layer buffers, whereas PCI Express implements flow control at the transport level as well.

The PCI Express transport layer implements end-to-end CRC (ECRC) and data poisoning. PCI Express has stronger error detection at the transport layer than MIPI UniPro 1.6. This, along with Advanced Error Reporting (AER), is what qualifies PCI Express for the server space, where high reliability, availability and serviceability are valued.

Network Layer

PCI Express does not have a separate network layer. MIPI UniPro has a very simple pass-through network layer in version 1.6.

Data Link Layer

The PCI Express data link layer is quite similar to that of MIPI UniPro. Both solve the same problem of providing robust link communication to the next immediate hop, and they use similar error recovery mechanisms: CRC protection, retransmission triggered by negative acknowledgements, multiple outstanding unacknowledged packets tracked using sequence numbers, and credit-based flow control. While PCI Express manages credits for Posted, Non-Posted and Completion headers and data, UniPro flow control is per traffic class (TC); in UniPro, both the credits and the sequence numbers are managed independently per traffic class.

MIPI UniPro additionally supports the concept of pre-emption, wherein a high-priority frame can pre-empt a low-priority frame. This enables UniPro to provide even higher levels of latency determinism than PCI Express.

The PCI Express data link layer supports low-level power management controlled by the hardware. The link power states are L0, L0s, L1, L2 and L3, where L0 is the fully-on state and L3 is the link-off state. UniPro pushes this functionality down to the PHY adapter layer.

Physical Layer

PCI Express uses differential signaling with an embedded clock and can support up to 32 lanes. Reset and initialization mechanisms determine the link speed, link width and lane mapping. It uses 8b/10b and 128b/130b encoding, scrambling and deskew patterns to aid clock recovery.

UniPro goes one step further and partitions this functionality: the UniPro physical layer is divided into two sublayers, the PHY Adapter layer (L1.5) and the Physical layer (L1). MIPI has multiple physical layer specifications, D-PHY and the more recent M-PHY, but UniPro 1.6 is only used with M-PHY. The real role of L1.5 is to abstract the higher layers from the physical layer technology. UniPro is designed to use up to 4 M-PHY lanes, and reset and initialization mechanisms are used to determine the capabilities, similar to PCI Express.

MIPI UniPro goes further in power optimization. Speed is divided into two categories, high speed and low speed, which are further subdivided into gears. The speed and the number of lanes can be scaled dynamically based on bandwidth requirements; this process of changing the speed is called a power mode change, and it is initiated by application software. When the link is not in use, it can be put into the hibernate state, resulting in the highest power savings. The PHY adapter layer can also autonomously save power by ending the active data burst and entering the sleep or stall states.

M-PHY also uses differential signaling with an embedded clock in one of its modes of operation. 8b/10b encoding, scrambling and deskew patterns are used to aid clock recovery in the high-speed mode of operation.


We were awed by the similarities. No wonder there are initiatives like Mobile PCI Express (M-PCIe): allowing PCI Express to operate over M-PHY makes sense.

Similar comparisons can be made between MIPI UniPro and SuperSpeed USB 3.0. Hence we are beginning to see initiatives to enable SuperSpeed Inter-Chip (SSIC) over M-PHY.

It will be interesting to see how these evolve, and which one will emerge victorious. While we wait for the UniPro vs. M-PCIe battle to settle, one thing is clear: M-PHY has proved itself a clear winner.

Authored by Anand Shirahatti

You can learn more about our VC VIP for UniPro, M-PHY, PCIe and JEDEC UFS here.

Posted in Data Center, Interface Subsystems, Methodology, MIPI, Mobile SoC, MPHY, PCIe, Unipro | Comments Off