By Scott Knowlton, Director, Strategy & Solutions, Synopsys Solutions Group
High Performance Computing (HPC) was a hot topic long before the world was turned on its axis by COVID-19. The latest technology requirements in HPC are driven by the need for enormous computing power and higher-throughput networks to handle increasingly large data sets from applications such as artificial intelligence, facial recognition, autonomous driving, machine learning and 3D printing.
Now throw in a worldwide pandemic, where everyone is using online video conferencing at all hours of the day for work meetings and personal catch-ups, binge-watching the next big Netflix hit and setting up the kids in front of their video game console of choice to alleviate boredom.
As you can imagine, all of this is putting tremendous stress on the world’s existing cloud infrastructure, and far faster than anyone expected.
Before we take a deeper dive into how demand for HPC is growing, bear with me as I go back to the basics for a moment.
What is HPC?
You might be hard-pressed to come up with a definition of HPC off the top of your head, but most people instinctively grasp the concept.
insideHPC defines it as “aggregating computing power in a way that delivers much higher performance than one could get out of a personal computer in order to solve large problems in science, engineering, or business.”
HPC delivers faster network performance and more disk storage, but it requires a substantial amount of energy to power advanced technology — technology with the potential to improve our health and well-being, our quality of life at home and at work, and so many of the other concerns we are all facing right now.
HPC has many different forms and can be found in a variety of places, from local data centers to those in the cloud and at the edge (the physical location where devices connect with the digital world). It plays a big role in hyperscale cloud data centers, edge computing, Internet of Things and more.
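The definition above — aggregating computing power across many workers to solve one large problem — can be made concrete with a toy sketch. This is my own illustration, not anything from the article: it splits a large summation into chunks, farms the chunks out to a pool of worker processes, and aggregates the partial results, which is the same divide-and-aggregate pattern HPC clusters apply at vastly larger scale.

```python
# Toy illustration of the HPC idea: divide a big problem into chunks,
# compute the chunks in parallel, then aggregate the partial results.
# All names here are illustrative, not from any HPC framework.
from multiprocessing import Pool


def partial_sum(bounds):
    """One worker's share: sum of squares over [start, stop)."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))


def parallel_sum_of_squares(n, workers=4):
    """Split range(n) across `workers` processes and aggregate."""
    chunk = (n + workers - 1) // workers  # ceiling division
    bounds = [(i, min(i + chunk, n)) for i in range(0, n, chunk)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, bounds))


if __name__ == "__main__":
    n = 1_000_000
    # Parallel result matches the serial computation.
    assert parallel_sum_of_squares(n) == sum(i * i for i in range(n))
```

On a single laptop the speedup from a sketch like this is modest; the point of HPC is applying the same pattern across thousands of nodes connected by the high-throughput networks discussed above.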
Back to Your Regularly Programmed COVID-19 Content
As I mentioned, there has been a massive shift to online services for businesses, entertainment, gaming and schools due to shelter-in-place orders necessitated by COVID-19. Even before this accelerated demand, local data centers were moving to the cloud using services from Amazon Web Services (AWS), Microsoft Azure and Google Cloud. According to estimates from Gartner, 80% of enterprises will move away from their traditional data center by 2025 versus 10% today.
Now, with the COVID-19 effect, we’re seeing that shift in real time.
Just take a peek at Microsoft’s data alone:
Demand for Microsoft’s Xbox gaming platform included a 50% increase in multiplayer gameplay and a 30% increase in peak concurrent usage.
From March to April 2020, Azure had to add 110 terabits of capacity and 12 new edge sites.
As part of the COVID-19 HPC Consortium, AWS has made public a COVID-19 data lake that houses up-to-date and curated datasets such as COVID-19 case tracking data, hospital bed availability and over 45,000 research articles.
Google Cloud added a new HPC tool to its arsenal with the beta launch of Filestore High Scale. According to an article published by ZDNet, “Filestore serves workloads that need high performance and capacity such as electronic design automation (EDA), video processing, genomics, manufacturing and financial modeling.”
At this point you may be thinking: where do semiconductors come into play? I’m so glad you asked.
The HPC Engine: Semiconductors
With increased HPC demand comes the need for new semiconductor design starts and innovation.
The semiconductor market in data centers is expected to reach $177 billion by 2027, and data traffic is projected to hit 330 zettabytes by 2030, according to the 2019 IBS Global Semiconductor Market Report.
Re-architecting the cloud data center to support the latest applications is driving the next generation of semiconductor SoC designs that support new high-speed protocols and optimize data processing, networking and storage in the cloud.
Companies are having to navigate a ton of critical metrics, including memory capacity, bandwidth, power efficiency, latency, SoC area, beachfront (the die-edge area available for I/O) and RAS (reliability, availability and serviceability). On top of that, they have to figure out their own secret sauce and keep up with the innovation of the semiconductor foundries.
One size certainly doesn’t fit all.
Our customers are working on advanced processes at 7nm, 5nm and below, requiring a wide range of high-quality design and verification solutions, and silicon-proven IP. SoC designers for cloud computing applications need a combination of high-performance and low-latency IP solutions to help deliver total system throughput while minimizing their risk as they push to meet highly aggressive schedules.
Synopsys provides a comprehensive portfolio of high-speed SerDes PHY IP (112G/56G Ethernet and 112G USR/XSR die-to-die) to help designers meet their long- and short-reach connectivity requirements for hyperscale data center, networking and AI applications. Designers can select from a portfolio of high-bandwidth, power-efficient memory interfaces supporting DDR5, LPDDR5, and HBM2/2E IP solutions to meet their memory capacity and bandwidth requirements. DesignWare IP for PCIe 5.0 and CXL offers low-latency, low-risk solutions for requirements such as fast chip-to-chip interconnects and cache coherency.
To learn more about how to accelerate your HPC SoC designs, including the new frontier of die-to-die interface IP, watch our latest webinar series.