From Silicon To Software

What’s New in PCIe 6.0—Beyond the Bandwidth

By Priyank Shukla, Product Marketing Manager; Gary Ruggles, Product Marketing Manager; and Dana Neustadter, Product Marketing Manager, Synopsys Solutions Group 

We are in a golden age of innovation—cars of the future will deliver you from one appointment to the next, freeing up your hands and mind from driving. And soon, we will create new worlds of entertainment where you’ll no longer be a witness to a story, but an integral character from within it. Perhaps someday your family will sit around the holiday table together despite being on opposite sides of the earth, enabled through holographic images and haptics and technologies we haven’t even dreamed of yet. Increasing data bandwidth will be an important part of making it all happen. With the demands of advanced applications in high-performance computing (HPC), hyperscale data centers, artificial intelligence/machine learning (AI/ML), automotive, the internet of things (IoT), aerospace and military, and much more, the bandwidth demand curve is up and to the right. And one technology giving us a boost on our trajectory to meet the needs of the future is the new PCI Express® (PCIe®) 6.0 specification, including:

  • Data transfer rate of up to 64 GT/s per pin
  • Power efficiency via a new low-power state
  • Cost-effective performance
  • High-performance integrity and data encryption
  • Backward compatibility with previous generations

Every successive generation of PCIe has doubled the bandwidth of the previous generation. PCIe 6.0 is no different. But in addition to increasing bandwidth, it’s the most significant PCI Express protocol innovation to date. Here is what you need to know.
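That doubling cadence is easy to sanity-check. Here is a quick Python sketch of the per-lane signaling rates across generations; note that PCIe 3.0 is the nominal exception, where a line-encoding change (discussed below) preserved the doubling in delivered bandwidth even though the raw rate rose only 1.6x:

```python
# Per-lane signaling rate by PCIe generation, in GT/s.
rates = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0, 6: 64.0}

for gen in range(2, 7):
    print(f"PCIe {gen}.0: {rates[gen]:>4} GT/s "
          f"({rates[gen] / rates[gen - 1]:.1f}x the previous rate)")

# PCIe 3.0's raw rate rose only 1.6x, but swapping 8b/10b encoding
# (80% efficient) for 128b/130b (~98.5% efficient) still roughly
# doubled delivered bandwidth: 5 * 8/10 = 4 Gb/s/lane at gen 2 vs.
# 8 * 128/130 ~= 7.88 Gb/s/lane at gen 3.
```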

New PAM-4 Electrical Signaling Modulation Scheme

One way PCIe 6.0 accomplishes its leap forward in bandwidth is a shift in the electrical signaling modulation scheme, moving from traditional non-return-to-zero (NRZ) signaling to four-level pulse amplitude modulation (PAM-4). In previous PCIe generations, NRZ bits were transmitted serially as either a 1 or a 0 in each Unit Interval (UI) of time. With PAM-4, each UI carries one of four voltage levels, encoding two bits instead of one, so the data rate doubles without doubling the signaling rate. The four voltage levels result in three eyes, with reduced eye height and eye width. To reduce errors in the signaling, the levels are Gray coded, meaning that adjacent voltage levels differ by only one bit. On the analog side, precoding helps reduce errors; on the digital side, forward error correction (FEC) reduces the bit error rate.
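A minimal sketch makes the Gray-coded mapping concrete (the function name and level ordering are illustrative; the key property is that neighboring voltage levels differ by one bit, so the most likely symbol error, being one level off, corrupts only a single bit):

```python
# PAM-4 maps two bits to one of four voltage levels per Unit Interval (UI).
# Gray coding orders the levels so adjacent levels differ in only one bit.
GRAY_LEVELS = ["00", "01", "11", "10"]  # level 0 (lowest) .. level 3 (highest)

def pam4_encode(bits: str) -> list[int]:
    """Map an even-length bit string to a sequence of PAM-4 levels."""
    return [GRAY_LEVELS.index(bits[i:i + 2]) for i in range(0, len(bits), 2)]

# Any two adjacent voltage levels differ by exactly one bit:
for lo, hi in zip(GRAY_LEVELS, GRAY_LEVELS[1:]):
    assert sum(a != b for a, b in zip(lo, hi)) == 1

print(pam4_encode("01101100"))  # 8 NRZ bits fit in just 4 PAM-4 UIs
```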

But doesn’t all that equate to a significant increase in latency? Nope.

That’s because the Peripheral Component Interconnect Special Interest Group (PCI-SIG), the consortium that writes the PCIe specifications, came up with an elegant, very lightweight FEC that leverages the existing retry mechanism, so latency isn’t a problem. Compared to PCIe 5.0, PCIe 6.0 gives you a lot more bandwidth with little to no added latency.

Packet Ingenuity – It’s All About the FLITs

With PCIe 6.0, the transaction layer uses the same commands as previous generations. A new packet header format, while in the same spirit as earlier generations, is cleaner, with a streamlined organization. But it’s the new method of packet delivery that brings about a complete restructuring of the protocol. That restructuring not only supports the higher bandwidth but also your system’s ability to handle it, with features such as shared flow-control credits. PCIe 6.0 uses flow control units (FLITs) to transfer data, eliminating the need for line-encoding schemes. In the past, at 2.5 GT/s for instance, 8 bits of data would end up as 10 bits on the wire due to 8b/10b encoding. At 8 GT/s, 128 bits of data became 130 bits on the wire. FLITs, on the other hand, are not encoded at all: for every bit of data, exactly one bit ends up on the wire. The features and functions previously performed by the encoding in PCIe 5.0 are now covered by both the scrambling polynomial and the FLIT headers in PCIe 6.0.
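The wire-efficiency gain is simple arithmetic; a short sketch comparing the three schemes:

```python
# Line-coding overhead across PCIe generations: 8b/10b (PCIe 1.x/2.x) and
# 128b/130b (PCIe 3.x-5.x) add wire bits to every payload block, while
# FLIT mode in PCIe 6.0 carries exactly one wire bit per data bit.
def efficiency(payload_bits: int, wire_bits: int) -> float:
    return payload_bits / wire_bits

schemes = {
    "8b/10b (2.5 & 5 GT/s)": efficiency(8, 10),     # 80.0%
    "128b/130b (8-32 GT/s)": efficiency(128, 130),  # ~98.5%
    "FLIT mode (64 GT/s)":   efficiency(1, 1),      # 100%
}
for name, eff in schemes.items():
    print(f"{name}: {eff:.1%} of wire bits carry data")
```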

PCIe 6.0 Lanes Can Enter “Sleeping” State for Flexibility and Low Power

The new L0p is the required low-power state for PCIe 6.0. While lower speeds retain backward compatibility with L0s, the low-power state of previous generations, the FLIT-mode rate of 64 GT/s requires L0p. What’s innovative about this new low-power state is that some of the lanes can enter a sleeping state, the equivalent of an electrical idle, while data continues to transfer on the non-idling lanes. Re-timers that support FLIT mode are also required to support L0p. The benefit of L0p is that you can now scale your power with the bandwidth that you are actually using.
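As a toy illustration of that idea (not the spec’s actual lane-width negotiation), here is how one might reason about how many lanes need to stay awake; treating GT/s as Gb/s and allowing any lane count from 1 up to the link width are simplifications for this sketch:

```python
import math

# Toy model of the L0p idea: keep only as many lanes awake as current
# bandwidth demand requires, and let the rest sit in an electrical-idle
# "sleep". Numbers and width choices are illustrative, not from the spec.
LANE_RATE_GBPS = 64  # PCIe 6.0 per-lane rate, treated as Gb/s here
LINK_WIDTH = 16      # a x16 link

def active_lanes_needed(demand_gbps: float) -> int:
    """Smallest number of lanes (at least 1) that covers the demand."""
    return min(LINK_WIDTH, max(1, math.ceil(demand_gbps / LANE_RATE_GBPS)))

print(active_lanes_needed(100))   # modest load: most lanes can sleep
print(active_lanes_needed(1000))  # near full load: all lanes stay awake
```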

Securing the Data and the Systems

In our increasingly connected world, the attack surface for data and system breaches is expanding, along with, unfortunately, the incentives for attackers. Because of this, there are also more laws and regulations mandating greater security in electronic systems. Within this context, PCIe 6.0 has adopted data integrity and security protections, marked by a trio of security highlights:

  • Data object exchange (DOE)—This is not intended for high-performance use; rather than a performance mode, it’s a secure mode, a low-level building block that PCIe uses to enhance security in other areas. It is a simple mechanism for transferring mostly cryptographic data and keys based on configuration space registers, and it is tightly bound to the application logic.
  • Component measurement and authentication (CMA)—With this security feature, the device’s firmware produces a cryptographic signature for the device. When engineers receive a CMA report, they can verify that the signature is authentic. If it’s not, they have a security issue that needs addressing.
  • Integrity and data encryption (IDE)—The focus of this security measure is protecting against physical-access attacks: observation of the PCIe 6.0 FLIT packets as well as packet insertions and deletions. There are two modes within this protection. In Link IDE, data is encrypted at the transmitter and decrypted one hop later at the receiver. In selective IDE, packets pass through switches, encrypted at the requester and decrypted several hops later at the completer. Because this security operates at the packet level in the “guts” of PCIe, it needs to be tightly coupled with your controller to implement encryption and decryption very efficiently at 64 GT/s with minimal latency impact. And you will need multiple pipelined AES-GCM cryptographic engines to meet the throughput requirements.
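To get a feel for why multiple engines are needed, here is a back-of-envelope sizing sketch; the datapath clock and one-block-per-cycle engine throughput are illustrative assumptions, not figures from the spec or any product:

```python
import math

# Back-of-envelope sizing of the pipelined AES-GCM datapath for IDE at
# PCIe 6.0 speeds. Engine clock and per-cycle throughput are assumptions.
LANES = 16                  # a x16 link
RATE_GBPS_PER_LANE = 64     # 64 GT/s, ~1 bit per transfer in FLIT mode
link_gbps = LANES * RATE_GBPS_PER_LANE    # 1024 Gb/s per direction, raw

AES_BLOCK_BITS = 128        # AES-GCM processes 128-bit blocks
CLOCK_GHZ = 1.0             # assumed datapath clock
engine_gbps = AES_BLOCK_BITS * CLOCK_GHZ  # one block/cycle -> 128 Gb/s

engines = math.ceil(link_gbps / engine_gbps)
print(f"~{engines} parallel AES-GCM engines per direction")
```

Under these assumptions, a x16 link needs roughly eight engines per direction just to keep up with the raw line rate, which is why the cryptographic logic must be pipelined and tightly coupled to the controller.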

The main differences between the PCIe 5.0 and 6.0 security features are scaling for the bandwidth, supporting FLIT mode, and supporting the new packet header format. A few additional security-specific features on the horizon will support both PCIe 5.0 and 6.0, so security will continue to evolve as the threat landscape changes.

Solid-State Drives Are Among the Early Adopters – Here’s Why

While PCIe 4.0 is rolling out and 5.0 is coming soon, among the early adopters of PCIe 6.0 are solid-state drives (SSDs). If you look inside the box of a rack unit in the figure below, for instance, the CPUs interface with the accelerators and the SSDs, while the accelerators interface with smart network interface cards (NICs). All of these are PCIe slots. In the transition from PCIe 5.0 to 6.0, the U.2 form factor will be phasing out, and PCIe 6.0 will most likely support U.3, EDSFF (Enterprise and Datacenter Standard Form Factor), and Open Compute Project (OCP) 3.0. Because SSD SoCs interface with both non-volatile memory express (NVMe) Flash memory and root complex processors, the bandwidth requirement is quite high. But SSDs are limited by the bandwidth of the SSD socket, which is gated by the PCIe data rate. That means getting double the bandwidth in the same lanes is an immediate win for SSDs and a reason for the early adoption, which is helping to drive the market. And already the ecosystem is being built for the root complex processors.

PCIe is the de facto interface in hyperscale data center rack unit boxes. Here’s an example inside the box (compute): PCIe is the dominant interface with applications for CPU, GPU, SSD, accelerators, and smart NICs, plus cache coherency with CXL.

The Open Compute Project (OCP), driven by Meta, is developing a versatile form factor that can be used for all interfaces. While NICs, SSDs, and other components have always had their own form factors, OCP has the vision of a common form factor for all these interfaces. Companies that play in the Meta ecosystem are developing devices using the OCP 3.0 form factor, which will be supported by PCIe 6.0.

Your Trusted PCIe 6.0 IP Partner

When you are ahead of the curve in adopting the latest standard such as PCIe 6.0, it’s important to choose an experienced partner for your IP. With the most PCI-SIG certifications of any IP vendor, Synopsys has the industry’s leading PCIe and security experts on staff, including key contributors to the PCI-SIG specification—both working group members and a member of the board. In fact, we have the most widely deployed PCIe 5.0 solutions, recently achieving the sale of more than 250 PCIe 5.0 licenses and passing the PCIe 5.0 compliance testing for both hosts and devices. And because of our long history developing PCIe solutions, you can rest assured about backward compatibility with earlier versions of the specification.

Our offerings include the Synopsys Controller IP for PCIe 6.0 with MultiStream architecture, tightly integrated with the Synopsys IDE Security IP Module for PCIe 6.0. It includes multiple interfaces for the lowest latency and maximum throughput. Our Synopsys PHY, available on FinFET processes, optimizes digital equalization through adaptive digital signal processing (DSP) algorithms for efficient power across backplanes, NICs, and chip-to-chip channels. And our Verification IP uses a native SystemVerilog UVM architecture to accelerate testbench development, with a built-in verification plan, sequences, and functional coverage. Another example of our experience with the new features is our CXL IP, which also implements FLIT mode. In the end, though, it’s our long history of success with PCI Express that is at the heart of our credibility—it’s how we lower your risk for PCIe 6.0 adoption, so you can get a jump start on the future.
