By Vikas Gautam, Vice President, R&D, Synopsys System Design Group
As the growth of new data-intensive applications and computing workloads continues to accelerate, so too has the number of interconnect standards and protocols that manage the resulting dataflow. This has significantly increased chip complexity for design and verification teams. Under the prevailing pressures of rigid time-to-market windows and the elevated design differentiation expected of systems-on-chip (SoCs), protocol conformance has become an increasingly important priority on every chipmaker's agenda.
Most of the interface protocol standards designed today, be it PCI Express® 6.0 (PCIe®), Compute Express Link™ (CXL™), 800G Ethernet, or High Bandwidth Memory (HBM3), are largely driven by growth in data center, cloud computing, and artificial intelligence (AI) applications. Such protocols provide the high-throughput, low-latency, power-efficient external connectivity that lets SoCs improve performance by orders of magnitude across dimensions including increased data rates and cache coherence between chips. The economics of packing more functions into a single chip package is also driving chiplet-based designs and the need for die-to-die standards such as Universal Chiplet Interconnect Express (UCIe).
Naturally, it's quite challenging to verify the functional accuracy and protocol compliance of SoC designs filled with blocks of commercial or in-house-developed IP that are based on these complex, industry-standard interface protocols.
Read on to learn more about key protocol verification challenges teams face today, the shift in chipmakers’ mindset towards electronic design automation (EDA) companies, why advanced protocol verification is key to bulletproof designs, and how a robust ecosystem can drive SoC innovation.
Think of a protocol as a spoken language. If two people speak the same language, they can communicate effectively. Similarly, if two hardware devices support the same protocol, they can communicate with each other seamlessly, regardless of which vendor manufactured them or the specific type or function of each device.
In a typical chip design, data flows into or between chips and/or the systems that house them. From there, it must be routed correctly, processed in compliance with a protocol specification, and passed along for further processing, analysis, storage, or display. The set of rules governing how data is transmitted, which commands are used, and how transfers are confirmed may sound simple conceptually, but the implementation becomes extremely sophisticated very quickly.
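To make the idea of "rules governing transmission, commands, and confirmation" concrete, here is a minimal sketch of a protocol monitor for a hypothetical mini-protocol (invented for illustration, not any real standard). It enforces two rules: a WRITE must follow an ADDR command, and each WRITE must be acknowledged before the next one is issued. Real verification IP checks thousands of such rules across layered specifications.

```python
# Illustrative protocol-rule checker for a hypothetical mini-protocol.
# Rules: (1) a WRITE needs an address set by a preceding ADDR command;
#        (2) every WRITE must be ACKed before the next WRITE starts.

class ProtocolError(Exception):
    """Raised when an observed event violates a protocol rule."""

class MiniProtocolMonitor:
    def __init__(self):
        self.addr = None          # address established by the last ADDR command
        self.outstanding = False  # True while a WRITE awaits its ACK

    def observe(self, event, payload=None):
        """Check one bus event against the protocol rules."""
        if event == "ADDR":
            self.addr = payload
        elif event == "WRITE":
            if self.addr is None:
                raise ProtocolError("WRITE before any ADDR command")
            if self.outstanding:
                raise ProtocolError("WRITE while previous WRITE unacknowledged")
            self.outstanding = True
        elif event == "ACK":
            if not self.outstanding:
                raise ProtocolError("ACK with no outstanding WRITE")
            self.outstanding = False
        else:
            raise ProtocolError(f"unknown event {event!r}")

mon = MiniProtocolMonitor()
for ev, data in [("ADDR", 0x40), ("WRITE", 0xAB), ("ACK", None)]:
    mon.observe(ev, data)  # a legal sequence passes silently
```

Even this toy checker shows why complexity climbs fast: each added rule interacts with every existing one, and a monitor must track enough state to judge any event at any time.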
What adds to the difficulty is that there are many different protocols in the industry, most of which are continuing to evolve at breakneck speed. Design teams often need to kickstart the design cycle and work on multiple IP configurations before a chosen standard has been completely ratified.
In the case of PCIe, for example, design teams are dealing with thousands of possible, heavily interrelated configurations, for which all data paths need to be designed and verified. Aside from extensive interoperability testing, limited expertise in these new standards poses its own problems. While a design team may have a handful of experts who can leverage pre-existing skillsets around these standards, it is uncommon for every verification team to have the expertise needed to verify, debug, and analyze defective protocol components promptly. Moreover, several market players who build around these standards customize their "secret sauce" designs to exploit the degrees of freedom that each protocol provides and use that to create product differentiation.
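The "thousands of configurations" claim is easy to see with a back-of-the-envelope sketch: crossing even a few link parameters multiplies quickly. The parameter values below are illustrative (loosely inspired by PCIe link widths, per-generation data rates, payload sizes, and optional features), not an exhaustive or authoritative feature list.

```python
# Sketch of combinatorial configuration explosion.
# Values are illustrative, not a complete PCIe parameter set.
from itertools import product

link_widths  = [1, 2, 4, 8, 16]                     # lanes
data_rates   = ["2.5", "5", "8", "16", "32", "64"]  # GT/s, Gen1..Gen6
max_payloads = [128, 256, 512, 1024, 2048, 4096]    # bytes
opt_features = ["SR-IOV", "ATS", "10-bit tags"]     # each on/off

configs = list(product(link_widths, data_rates, max_payloads,
                       *([[False, True]] * len(opt_features))))
print(len(configs))  # 5 * 6 * 6 * 2**3 = 1440
```

Three more independent on/off features would push the count past 11,000, and every added combination is a data path someone must verify.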
For design and verification teams to build high-performance SoC designs that comply with stringent protocol specifications, the timely availability of end-to-end protocol verification solutions is essential: verification IP, transactors, memory models, and system-level virtual and hardware-based connections that support various verification needs and the varying configurations of each standard's specification.
Everyone wants to stay on top of the latest version of a standard, so the most successful protocols in the industry are those that maintain backward compatibility while continuing to advance. For example, the well-known connector technology standard USB (Universal Serial Bus) allows for interoperability with older legacy systems and inputs and has gone through several evolutions over the last two decades.
From a silicon perspective, if you compare earlier versions of a protocol with the latest versions, the specifications now run to thousands of pages, and implementations rival smaller SoCs in complexity, holding the characteristics of separate subsystems rather than of a simple interface IP.
Take the case of the recent upgrade from the USB 3.x standards to the new generations of USB 4.0. USB 4.0 became a subsystem made up of USB, PCIe, and DisplayPort to enable seamless power delivery and transfer of data, video, and multimedia files. The shift to a subsystem architecture significantly increased the complexity of performing exhaustive system-level verification of all IP models. Customers designing products around the 4.0 version need to ensure that the interface still supports designs based on the 3.0, 2.0, and 1.0 versions and interoperates with other USB-compliant devices.
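The backward-compatibility requirement above can be sketched as version negotiation: each end advertises the versions it supports, and the link settles on the highest one in common. This is a simplified illustration of the general pattern, not the actual USB link-training protocol.

```python
# Hedged sketch of backward-compatible version negotiation:
# both ends advertise supported versions; the link settles on the
# highest common one. Version lists are illustrative.

def negotiate(host_versions, device_versions):
    """Return the highest protocol version both sides support, or None."""
    common = set(host_versions) & set(device_versions)
    return max(common) if common else None

host   = [(1, 1), (2, 0), (3, 2), (4, 0)]  # host supports up to USB4
device = [(1, 1), (2, 0)]                  # legacy USB 2.0 device

assert negotiate(host, device) == (2, 0)   # link falls back to USB 2.0
```

A verification team must exercise every such fallback path: a USB 4.0 host talking to 3.x, 2.0, and 1.x devices each stresses a different legacy mode of the same silicon.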
In such data-hungry use cases, emulation becomes a critical enabler to ensure design and verification teams are validating designs in the presence of a complete software stack, device drivers, and the larger chip ecosystem, especially when performance is being measured on a cycle-by-cycle basis.
Developing high-quality verification IP and transactors requires multiple years of effort, active participation in industry standards bodies, and deep protocol skills from an experienced team working closely with the IP design team.
Most next-generation protocol designs enable data-intensive interconnects, which means there’s a lot of data that flows from the central processing unit (CPU) to memory interfaces and other dedicated components. Becoming an expert on each interconnect protocol won’t necessarily shorten verification schedules, but it can enhance design productivity and expose bugs that might otherwise only be found when used by the end consumer.
A growing number of semiconductor companies responsible for building high-performance computing chips no longer develop protocol IP in-house. They are instead becoming increasingly dependent on the maturity of IP provided by commercial IP and EDA companies.
The root cause for this market shift? EDA companies, like Synopsys, have the upper hand: decades of acquired expertise, teams of protocol-specific experts, and experience deploying a wide spectrum of protocol-compliant designs to multiple customers and early adopters. While chip companies have teams that can handle IP integration in-house, they need all the help they can get for exhaustive design and verification of pre-built IP blocks.
With our growing portfolio of industry-first Synopsys Verification IP (VIP) and close collaborations with standards organizations and memory vendors, we can help design and verification engineers access and integrate the latest interconnect technologies rapidly as well as perform system-level verification and test real-life scenarios.
Amid all the verification challenges that high-speed interfaces pose, a key market differentiator for Synopsys lies in our decades of experience delivering first-to-market IP and VIP to customers and the verification expertise to deliver best-in-class VIP. Synopsys VIP is used in most designs based on different interfaces, including PCIe, CXL, UCIe, DDR, and USB, together with the Synopsys VCS® functional verification solution and the Synopsys Verdi® debug solution to enable hardware protocol verification.
To address the growing challenges of protocol conformance and advanced verification, Synopsys offers the industry's broadest hardware-assisted verification portfolio, including the Synopsys ZeBu® emulation system and the Synopsys HAPS® FPGA prototyping system.
The availability of protocol support in Synopsys ZeBu and Synopsys HAPS not only enables SoC hardware verification, but also helps in pre-silicon development and validation of IP drivers, firmware, and software.
As we look ahead, the biggest demand in the protocol verification universe will continue to be high performance and timely availability of transactors and memory models. With more data being generated at the edge, design and verification teams need to increasingly look at processing data in the best ways, as soon and as fast as they can. The benefits of intelligent designs are enormous, and the role that fast emulation will play in verifying evolving interconnect technologies is expected to expand into new markets and applications in the near future.
We can already see functional protocol verification expanding into the critical area of security verification, adding another layer of complexity: finding vulnerabilities in an SoC that aggregates many IP protocols and processor subsystems into one complex HW/SW system. VIP that looks for security risks will augment customers' verification arsenals by adding collateral that can run on their fast verification software and hardware platforms.
Our experts have shared their insights in a variety of verification-related blog posts. Catch up on some of them here: