CXL 3.0 Introduction

Compute Express Link™ (CXL™) 3.0 is an open standard that defines a high-speed, cache-coherent interconnect and memory expander interconnect for CPU-to-device and CPU-to-memory connections. It is built on the PCI Express® (PCIe®) 6.0 r1.0 specification and leverages PCIe for its physical and electrical interfaces. Artificial Intelligence (AI) and Machine Learning (ML) applications, together with widespread smart devices (e.g., autonomous vehicles), are driving exponentially rising requirements to build high-performance data center units in which CPUs connect to accelerator processors, memory-attached devices, and SmartNICs. These systems demand low latency from CPU-attached devices performing compute-intensive operations on massive data sets while maintaining coherency. To meet the increasing performance and scale requirements of these systems, the CXL Consortium has evolved its standard with the introduction of CXL 3.0.

CXL 3.0 Specification Highlights
Need for a Multi-die Chiplet Interconnect
HDMI (High-Definition Multimedia Interface) is the most popular medium for transporting both audio and video information between two digital devices. Over the past two decades, HDMI technology has evolved from HDMI 1.0 to HDMI 2.0. In 2017, HDMI 2.1 introduced enhanced gaming and media features such as Variable Refresh Rate (VRR) and Auto Low Latency Mode (ALLM) to eliminate lag, stutter, and tearing, adding smoothness to the gaming and video experience. Recently the HDMI Forum announced a new version, HDMI 2.1a, that brings a standout gamer-friendly feature, Source-Based Tone Mapping (SBTM).
PCI-SIG® recently released PCIe® 6.0, the latest revision of the PCI Express® specification. Its 64 GT/s raw data rate physical layer enables up to 256 GB/s of data transfer in a 16-lane (x16) configuration. With this announcement, PCIe continues to meet the industry's need for a high-bandwidth, low-latency interconnect whose potential can be leveraged by dependent storage (NVMe) and coherency (CXL) protocols.
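The headline numbers above can be sanity-checked with simple arithmetic. A minimal sketch, assuming the commonly quoted figures (64 GT/s per lane, one bit per transfer, and "256 GB/s" counting both directions of the link):

```python
# Back-of-the-envelope check of the PCIe 6.0 headline bandwidth.
# Assumptions (not taken from the specification text itself): 64 GT/s per
# lane with one bit per transfer, and the 256 GB/s figure counted
# bidirectionally across a x16 link.

RAW_RATE_GT_S = 64      # per-lane raw signaling rate (GT/s)
BITS_PER_TRANSFER = 1   # each transfer carries one bit
LANES = 16              # x16 configuration

gb_per_s_per_lane = RAW_RATE_GT_S * BITS_PER_TRANSFER   # 64 Gb/s, one direction
gbytes_per_dir = gb_per_s_per_lane * LANES / 8          # 128 GB/s per direction
total_gbs = gbytes_per_dir * 2                          # 256 GB/s bidirectional

print(f"x16 aggregate: {total_gbs:.0f} GB/s")
```

This reproduces the 256 GB/s figure from the announcement, which counts traffic in both directions of a x16 link.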
In this era of technological revolution, there is continuous progress in domains such as AI applications, high-end servers, and graphics. These applications require fast processing and high densities for storing data, for which High Bandwidth Memory (HBM) provides the most viable memory technology solution. Our previous memory blog, HBM2 memory for graphics, networking and HPC, explored this protocol at a data transfer rate of 2 GT/s with a stacked architecture of 8-Hi stacks (8 dies). The HBM2 extension (HBM2E) architecture improved further on HBM2, with a 3.2 GT/s transfer rate and a 12-Hi stack architecture, individual die density up to 16 Gb, and an overall density of 24 GB.
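For a rough sense of what those transfer rates mean per stack, here is a small sketch; the 1024-bit stack interface width is an assumption carried over from standard HBM configurations, not stated in the paragraph above:

```python
# Rough per-stack bandwidth for the HBM generations mentioned above.
# Assumes the standard 1024-bit HBM stack interface (an assumption,
# not stated in the text).

def stack_bandwidth_gbs(rate_gt_s: float, bus_width_bits: int = 1024) -> float:
    """GB/s moved by one stack: transfers/s * bits per transfer / 8 bits per byte."""
    return rate_gt_s * bus_width_bits / 8

print(f"HBM2  @ 2.0 GT/s: {stack_bandwidth_gbs(2.0):.1f} GB/s per stack")
print(f"HBM2E @ 3.2 GT/s: {stack_bandwidth_gbs(3.2):.1f} GB/s per stack")
```

At 2 GT/s this works out to 256 GB/s per stack, and at 3.2 GT/s to roughly 410 GB/s, which is why HBM2E is attractive for bandwidth-hungry AI and HPC workloads.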
Welcome to the wonderful and cryptic world of secured traffic, with CXL being the latest specification to adopt it. As attacks on high-performance data centers become more sophisticated, security standards must continuously adapt to better protect sensitive data and communications, and ultimately our connected world. To this end, the CXL Consortium added the security requirement of Integrity and Data Encryption (IDE) to the CXL 2.0 specification.
Data is the new fuel powering critical use cases in cloud/edge computing and advances in AI. All aspects of data handling – gathering, storing, moving, processing, and dispersing – pose unique design, implementation, and verification challenges. The need for heterogeneous computing has driven an exponential rise in application-specific accelerators, pushing the industry to come up with a solution for efficient data handling and resource utilization. CXL is a processor interconnect protocol designed to provide a high-bandwidth, low-latency interface from the CPU to workload accelerators, maintaining memory coherency across heterogeneous devices while addressing the security needs of the user.
Emerging technologies such as the Internet of Things (IoT), 5G, automotive, Artificial Intelligence (AI), and High-Performance Computing have given rise to potentially transformative trends demanding faster memory access. 5G brings faster download and upload speeds, making high-speed real-time data transfer possible. Modern smartphone processors have built-in cutting-edge features such as high-resolution multimedia processing, faster Machine Learning (ML) computation, image processing capabilities, and faster frame rates for all the gaming enthusiasts out there. But underlying all of this is the need for faster memory: AI/ML requires higher bandwidth to support fast processing of massive data.