In this era of technological revolution, domains like AI applications, high-end servers, and graphics are progressing continuously. These applications demand fast processing and high storage density, and High Bandwidth Memory (HBM) provides the most viable memory technology to meet both. Our previous memory blog, HBM2 memory for graphics, networking and HPC, explored this protocol at a data transfer rate of 2 GT/s with a stacked architecture of 8-Hi stacks (8 dies). The HBM2 Extension (HBM2E) architecture improves on HBM2 with a 3.2 GT/s transfer rate and a 12-Hi stack architecture, with individual die densities of up to 16 Gb and an overall stack density of 24 GB.
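The rates quoted above translate directly into peak per-stack bandwidth. A minimal sketch, assuming the 1024-bit-wide interface that HBM generations share per stack:

```python
# Peak per-stack bandwidth for HBM2 vs. HBM2E from the transfer rates above.
# Assumes the 1024-bit interface width common to HBM stacks.

INTERFACE_WIDTH_BITS = 1024

def peak_bandwidth_gbps(transfer_rate_gtps: float) -> float:
    """Peak bandwidth in GB/s = rate (GT/s) x width (bits) / 8 bits per byte."""
    return transfer_rate_gtps * INTERFACE_WIDTH_BITS / 8

print(peak_bandwidth_gbps(2.0))   # HBM2  at 2.0 GT/s -> 256.0 GB/s
print(peak_bandwidth_gbps(3.2))   # HBM2E at 3.2 GT/s -> 409.6 GB/s
```

The jump from 2 GT/s to 3.2 GT/s is thus a 60% increase in peak bandwidth per stack, before counting the extra capacity from taller stacks.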
Welcome to the wonderful and cryptic world of secure traffic, with CXL being the latest specification to adopt it. As attacks on high-performance data centers become more sophisticated, security standards must continuously adapt to better protect sensitive data and communications, and ultimately our connected world. To this end, the CXL standards organization added the security requirement of Integrity and Data Encryption (IDE) to the CXL 2.0 specification.
Data is the new fuel powering critical use cases for cloud and edge computing and advances in AI. All aspects of data handling – gathering, storing, moving, processing, and dispersing – pose unique design, implementation, and verification challenges. The need for heterogeneous computing has driven exponential growth in application-specific accelerators, pushing the industry to come up with a solution for efficient data handling and resource utilization. CXL is a processor interconnect protocol designed to provide a high-bandwidth, low-latency interface from CPU to workload accelerators, maintaining memory coherency across heterogeneous devices while addressing the user's security needs.
Emerging technologies such as the Internet of Things (IoT), 5G, automotive, Artificial Intelligence (AI), and High-Performance Computing have given rise to potentially transformative trends that demand faster memory access. 5G brings with it faster download and upload speeds, making high-speed real-time data transfer possible. Cutting-edge smartphone processors have built-in features like high-resolution multimedia processing, faster Machine Learning (ML) computation, image processing capabilities, and higher frame rates for all you gaming enthusiasts. But don't forget that underlying all of this is the need for faster memory: AI/ML requires higher bandwidth to support faster processing of massive data sets.
With the release of HDMI 2.1, higher video resolutions and refresh rates, including 8K@60Hz and 4K@120Hz, are a reality. In a previous blog, 10K Resolution at 120Hz Display: A Reality Today with DSC 1.2 in HDMI 2.1, we explained how HDMI 2.1 can support resolutions and refresh rates on the order of 4K@240Hz, 8K@120Hz, and 10K@120Hz with display stream compression (DSC). With increased resolution you get finer detail, and with a higher refresh rate moving content feels smoother. But it also means more pixel information, and thus a higher data transmission rate, higher bandwidth, and higher power consumption. What if there were a way to reduce the transmission rate while keeping the resolution and refresh rate intact? The answer lies in the reduced blanking feature, in which the blanking region of a frame is shrunk significantly.
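To see why shrinking the blanking region helps, note that the pixel clock is set by the *total* frame size (active plus blanking), not just the visible pixels. A minimal sketch: the standard 4K totals (4400 × 2250) match common CTA-861 timing, while the reduced-blanking totals below are hypothetical values chosen only for illustration.

```python
# Pixel clock = total pixels per frame (active + blanking) x refresh rate.
# Standard 4K totals (4400 x 2250) follow common CTA-861 timing; the
# reduced-blanking totals (3920 x 2222) are hypothetical illustration values.

def pixel_clock_mhz(h_total: int, v_total: int, refresh_hz: int) -> float:
    """Pixel clock in MHz for a frame of h_total x v_total at refresh_hz."""
    return h_total * v_total * refresh_hz / 1e6

standard = pixel_clock_mhz(4400, 2250, 120)  # full blanking:    1188.0 MHz
reduced  = pixel_clock_mhz(3920, 2222, 120)  # reduced blanking: ~1045.2 MHz

print(f"standard: {standard:.1f} MHz, reduced: {reduced:.1f} MHz")
```

The active 3840 × 2160 @ 120 Hz image is identical in both cases; only the invisible blanking interval shrinks, cutting the required transmission rate by roughly 12% in this example.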
The Compute Express Link (CXL) 1.1 and CXL 2.0 specifications differ in how memory-mapped registers are placed and accessed. The CXL 1.1 specification places memory-mapped registers in the RCRB (Root Complex Register Block), while the CXL 2.0 specification places them in the BARs (Base Address Registers) of the device. In this blog we will focus on how to access CXL 2.0 specification memory-mapped registers.
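In CXL 2.0, software discovers where each register block lives via the Register Locator DVSEC: each entry names a BAR (through a BAR Indicator Register, BIR), a register block identifier, and a 64-bit offset into that BAR. A minimal decoding sketch, with field positions following the CXL 2.0 specification and purely hypothetical example values:

```python
# Hedged sketch: decoding one CXL 2.0 Register Locator DVSEC entry.
# Field positions follow the CXL 2.0 spec (Register Block Low/High dwords);
# the entry value used below is hypothetical.

def decode_register_block(lo: int, hi: int):
    """Decode a Register Block Low/High dword pair into (BIR, block ID, offset)."""
    bir = lo & 0x7                        # bits [2:0]:  BAR Indicator Register
    block_id = (lo >> 8) & 0xFF          # bits [15:8]: Register Block Identifier
    offset = ((hi << 32) | lo) & ~0xFFFF  # bits [63:16]: offset into the BAR
    return bir, block_id, offset

# Hypothetical entry: register block 0x03 located in BAR 2 at offset 0x10000.
bir, block_id, offset = decode_register_block(0x00010302, 0x00000000)
```

Software then accesses the registers at the base address programmed into BAR[bir], plus the decoded offset, typically by memory-mapping that PCI resource.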
Coherent Hub Interface, popularly known as CHI, is an interface specification that is part of the 5th generation of AMBA® protocols (AMBA® 5) from Arm, released in 2013. AMBA® 5 CHI defines the interfaces for connecting fully coherent processors and dynamic memory controllers to high-performance non-blocking interconnects.
Verification consumes most of the compute resources in a typical data center for a semiconductor design company. Simulation comprises one of the largest, if not the largest, workloads in this mix. To maximize the likelihood of first silicon success, development teams often increase the volume of simulation jobs they run in preparation for tape-out. However, this effort is often limited by the compute resources that customers can bring to bear on the task.
SoC designs are growing more complex, not just in the sheer number of transistors that can be packed into one design, but also in the variety of interconnect methods you must use to connect chip internals and to connect to the outside world. Becoming an expert on every one of these interconnect protocols will not, by itself, shorten verification schedules, improve design productivity, or expose design bugs that might otherwise be found only by the end consumer.
Verification accounts for almost 70% of the time and resources consumed during chip development. Moving some or even all logic simulation to the cloud allows customers to free up valuable on-premises resources for other workloads. The deployment of Synopsys' functional verification solutions on the AWS cloud platform enables accelerated development and verification of breakthrough connectivity technology and SoCs. The AWS cloud enables users to take advantage of elastic infrastructure resources to address the increasing capacity requirements of semiconductor simulations.