VIP Central


De-mystifying CXL: An overview

As data center and artificial intelligence applications take center stage, the last few years have seen the advent of various high-bandwidth interconnect technologies. Compute Express Link (CXL) is an aspiring new interconnect technology for high-bandwidth devices such as accelerators with attached memory, high-density compute cards, and GPU-based accelerators. The specification is defined by the CXL Consortium (https://www.computeexpresslink.org/). Synopsys has developed a comprehensive CXL verification subsystem, already in use by early adopters planning to release their first CXL applications. The CXL verification subsystem leverages the widely adopted Synopsys PCI Express Verification IP. Synopsys also recently introduced the industry's first CXL IP solution; for more details, refer to Synopsys Delivers Industry's First Compute Express Link (CXL) IP Solution for Breakthrough Performance in Data-Intensive SoCs.

CXL is a technology that enables a high-bandwidth, low-latency link between a Host (typically a CPU) and a Device (typically an accelerator with attached memory). The CXL stack is designed for low latency and uses PCIe electricals and standard PCIe form factors for the add-in card. CXL uses a flexible processor port that can auto-negotiate to either the standard PCIe transaction protocol or the alternate CXL transaction protocol.
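The auto-negotiation described above can be illustrated with a minimal sketch. The function name and boolean capability flags here are purely illustrative, not spec-defined interfaces; the point is only that the port trains as PCIe first and switches to CXL mode when both ends advertise support:

```python
# Hypothetical sketch of a flexible port picking its transaction protocol.
# negotiate_protocol and its parameters are illustrative names, not part
# of the CXL specification.

def negotiate_protocol(host_supports_cxl: bool, device_supports_cxl: bool) -> str:
    """Return the transaction protocol the link will operate in.

    The link trains as PCIe first; if both ends advertise CXL support
    during alternate protocol negotiation, the link switches to CXL
    mode, otherwise it falls back to standard PCIe.
    """
    if host_supports_cxl and device_supports_cxl:
        return "CXL"
    return "PCIe"

print(negotiate_protocol(True, True))   # CXL-capable host and device -> CXL
print(negotiate_protocol(True, False))  # plain PCIe endpoint -> PCIe
```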

The CXL specification is built upon the well-established PCIe infrastructure and leverages its layered architecture, with each layer having a target role.

  Figure 1: CXL Layered Architecture & Verification Requirements

  • CXL Transaction Layer
    • The CXL Transaction layer is divided into a PCIe/CXL.io Transaction layer and a CXL.cache+CXL.mem Transaction layer. The CXL.cache+CXL.mem Transaction layer supports functionality for generating requests, responses, and data.
  • CXL Link Layer
    • The CXL Link layer is divided into a PCIe/CXL.io Link layer and a CXL.cache+CXL.mem Link layer. The Link layer is an intermediate layer between the Transaction layer and the Physical layer; it helps maintain the reliability of transactions across the link.
  • CXL ARB/MUX
    • The CXL ARB/MUX provides arbitration and multiplexing of CXL.io and CXL.cache+CXL.mem traffic toward the Physical layer.
  • CXL Physical Layer
    • The Physical layer consists of a logical sub-block and an electrical sub-block. The logical sub-block initially operates in PCIe mode and switches to CXL mode based on alternate protocol negotiation. The electrical sub-block always follows the PCIe specification.
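The ARB/MUX's role of interleaving the two protocol streams can be sketched as follows. The simple alternating policy and the queue names are assumptions for illustration; real designs use weighted arbitration, and the flit contents below are made-up labels:

```python
from collections import deque

# Illustrative model of the ARB/MUX draining CXL.io and CXL.cache+CXL.mem
# queues into a single ordered stream toward the Physical layer. The
# round-robin policy is an assumption, not the spec's arbitration scheme.

def arb_mux(io_queue: deque, cachemem_queue: deque) -> list:
    """Interleave two protocol queues into one transmit stream."""
    stream = []
    while io_queue or cachemem_queue:
        if cachemem_queue:
            stream.append(("cache+mem", cachemem_queue.popleft()))
        if io_queue:
            stream.append(("io", io_queue.popleft()))
    return stream

io = deque(["cfg_rd", "mem_wr"])
cm = deque(["rd_req", "wb_data", "snp_rsp"])
for proto, flit in arb_mux(io, cm):
    print(proto, flit)
```

On the receive side, the same block demultiplexes the stream back to the appropriate link layer based on the protocol tag.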

Figure 2: Types of traffic flows supported in CXL


For CXL traffic, data rates align with those defined by the PCIe specification. In CXL mode, data rates of 8 GT/s, 16 GT/s, and 32 GT/s are supported, along with link widths of x16, x8, x4, and x2. A link width of x1 is also supported as a degraded mode.
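A minimal sketch of picking an operating point from these values: the supported rates and widths come from the paragraph above, but the helper itself is illustrative and not a spec-defined negotiation algorithm:

```python
# Hedged sketch: selecting a CXL-mode data rate and link width.
# pick_link_config is an illustrative helper, not from the CXL spec.

CXL_RATES_GT_S = [8, 16, 32]    # data rates aligned with PCIe
CXL_WIDTHS = [16, 8, 4, 2, 1]   # x1 only as a degraded mode

def pick_link_config(host_rates, device_rates, host_width, device_width):
    """Return (rate, width): the highest data rate both ends support
    and the widest link both can drive, degrading as needed."""
    common = set(host_rates) & set(device_rates) & set(CXL_RATES_GT_S)
    if not common:
        raise ValueError("no common CXL data rate")
    width = min(host_width, device_width)
    if width not in CXL_WIDTHS:
        raise ValueError("unsupported link width")
    return max(common), width

# A 32 GT/s x16 host paired with a 16 GT/s x8 device degrades gracefully:
print(pick_link_config([8, 16, 32], [8, 16], 16, 8))  # (16, 8)
```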

The explosion of data and rapid innovation in AI and encryption have given rise to GPU accelerators that need a high-performance connection to the processor. While other interconnect protocols exist, CXL is unique in delivering CPU/device memory coherence, reduced device complexity, and an industry-standard physical and electrical interface bundled in a single technology for the best plug-and-play experience.

Synopsys is the market leader in IP and VIP for CXL, contributing significantly to the evolution of the CXL ecosystem. Stay tuned: subsequent blogs will cover CXL transaction types, the layered architecture, and the verification requirements and challenges of CXL designs in more detail. For more information, please visit http://synopsys.com/vip.