High-Performance Computing & Cloud Predictions for 2021 

Synopsys Editorial Staff

Dec 15, 2020 / 7 min read

There’s never a dull moment in the high-performance computing (HPC) and cloud computing arenas. After all, these technologies power some of the latest innovations in artificial intelligence (AI), facial recognition, autonomous driving, 3D printing, and more.

Cloud has also risen in importance for the everyday consumer this year, with an increased need for video conferencing and remote data access during the work-from-home and online-learning shift. Demand for at-home entertainment, such as streaming bandwidth for content platforms like Netflix and for video games, has also accelerated due to shelter-in-place orders.

A growing number of semiconductor companies responsible for building the high-performance computing chips that power today’s advanced data centers are themselves using HPC hardware to design their products. This, in turn, requires EDA applications to take greater advantage of the scalability and elasticity of high-performance computing available in the cloud.

As the world adjusts to a new normal, those in the HPC and cloud industries continue to adapt to the needs of both the current-day and the post-pandemic world. Here, we’ve gathered the top predictions from Synopsys thought leaders, who share how they think 2021 will play out for the HPC and cloud industries and how Synopsys plans to support them.

Top High-Performance Computing and Cloud Predictions for 2021

How COVID-19 Has Affected HPC

Let’s start at the beginning. While no one could have predicted a global pandemic in 2020, here are our predictions for how COVID-19 will continue to shape the HPC industry.


“Reducing latency will continue to grow in importance as more people opt to work from home and continue distance learning in a post-COVID era. The ability to interact in a near real-time fashion is particularly important to making our remote interactions as natural and productive as possible. There’s already been a lot of development around addressing latency, including the sheer processing power that’s been incorporated into compute devices to increase performance. But other technologies, such as the introduction of 400 Gigabit Ethernet, have also allowed us to move data much more quickly,” said Scott Durrant, Strategic Marketing Manager, Synopsys Solutions Group.


“We’re seeing a lot of growth in the HPC/cloud market with everybody working at home. The demands on the cloud and the data centers have gone up tremendously, and the mega disruptors are moving to address that. One big trend is that the performance of systems and getting data through the system has become increasingly important. And that’s pushing the performance of server chips, it’s driving the clock speeds to go way up, and it’s pushing to smaller geometry technologies at a pretty rapid rate. All of these contribute to making the designs more complex. We’re also seeing an aggressive push in a number of technologies, such as the latest generations of chip-to-chip interconnects like PCIe and CXL, to improve overall system throughput using higher speeds and cache coherency,” said Scott Knowlton, Director, Strategy & Solutions, Synopsys Solutions Group.


“HPC has been on a growth trajectory over the past five to ten years, especially when AI came into the mix, and COVID-19 has accelerated it even further. For medical vaccine research, you need a combination of systems like high-performance computing and AI. Scientists around the globe are speeding up vaccine discovery by applying modeling, simulation, machine learning, and analytical capabilities to vast volumes of data to accelerate insights and discoveries. The massive compute power of HPC is needed to run the complex mathematical models and transform them into simulations. By combining this with AI and ML, we are getting closer and closer to more accurate simulations, which, in turn, gets us to a vaccine faster,” said Susheel Tadikonda, Vice President of Engineering, Synopsys Verification Group.
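
To make the simulation-plus-ML pattern Tadikonda describes more concrete, here is a minimal, hypothetical Python sketch: an expensive simulation (a stand-in for an HPC workload) is sampled at a few parameter points, and a machine-learning surrogate is fit to those samples so that far cheaper predictions can screen many more candidates. The simulate() function and its parameters are invented for illustration, not drawn from any actual vaccine-research code.

```python
# Illustrative only: a toy "expensive simulation" plus an ML surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate(param: float) -> float:
    """Stand-in for a simulation run that would take hours on an HPC cluster."""
    return float(np.exp(-((param - 2.0) ** 2)) + 0.01 * np.sin(10 * param))

# Run the expensive simulation at a handful of sample points.
train_x = np.linspace(0.0, 4.0, 8).reshape(-1, 1)
train_y = np.array([simulate(x) for x in train_x.ravel()])

# Fit a cheap ML surrogate to those results.
surrogate = GaussianProcessRegressor().fit(train_x, train_y)

# The surrogate now answers "what-if" queries almost instantly,
# so far more candidate parameters can be screened than by simulation alone.
query = np.linspace(0.0, 4.0, 101).reshape(-1, 1)
pred = surrogate.predict(query)
print(f"Most promising parameter ~ {float(query[np.argmax(pred), 0]):.2f}")
```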


“The infrastructure needed to work from home is growing due to the global pandemic; however, companies trying to capture this immediate need for better network infrastructure don’t have time to wait for 3nm nodes to mature and become more cost-effective. Once we can look beyond COVID-19, you’re going to see companies put more long-term investments in place that will accelerate these new process nodes,” said Ruben Molina, Director of Product Marketing, Digital Design Group.

New Applications for HPC and Cloud

When the topic of HPC comes up, many people think of supercomputers doing amazing things like predicting weather patterns and mapping the human genome. Our Synopsys experts expect to see HPC and cloud used for many different kinds of applications, both large and small, in the coming years.


“The COVID-19 consortium that has come together to find ways of treating, preventing, or curing COVID-19 is one example of a door that will continue to open in the medical field. The power of HPC and cloud will be harnessed to help researchers better collaborate, understand diseases, and ultimately treat them. This technology will reduce the number of human and animal trials in medical research by providing more powerful compute mechanisms that can simulate the impact of drugs on the human body under various circumstances and conditions,” said Durrant.


“We’re going to see much more compute power being pushed to where the actual data is being received, sometimes called the ‘edge.’ For example, in autonomous driving, an automobile must take in enormous amounts of data and make decisions very quickly. There isn’t time to wait for the information to be sent to a compute server for processing; instead, it needs to be processed at the edge. We will also see this increase in compute power at places like manufacturing facilities. Instead of sending data to a centralized computer to monitor the health and reliability of machines on an assembly line, it will be processed at the edge. The reduction in latency means that you have a much greater chance of detecting a potential failure earlier and can prevent downtime, which is very costly to manufacturers, especially those who are making thousands of parts in just a few minutes,” said Molina.
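
As a toy illustration of the edge-monitoring idea Molina describes, the hypothetical Python sketch below keeps a rolling window of sensor readings on the device itself and flags any reading that deviates sharply from recent history, so a potential failure can be caught locally without shipping raw data to a central server. The readings and thresholds are invented for illustration.

```python
# Illustrative sketch: simple anomaly detection running "at the edge."
from collections import deque
import statistics

WINDOW = 50        # how many recent samples to keep on-device
THRESHOLD = 4.0    # deviation (in standard deviations) considered anomalous

history = deque(maxlen=WINDOW)

def check_sample(value: float) -> bool:
    """Flag a reading that deviates sharply from recent local history."""
    anomalous = False
    if len(history) >= 10:  # wait until some history has accumulated
        mean = statistics.fmean(history)
        std = statistics.pstdev(history) or 1e-9
        anomalous = abs(value - mean) / std > THRESHOLD
    history.append(value)
    return anomalous

# Hypothetical vibration readings from an assembly-line sensor;
# the final value is a spike that should be flagged.
readings = [1.0, 1.1, 0.9, 1.05, 0.95] * 10 + [4.8]
for i, r in enumerate(readings):
    if check_sample(r):
        print(f"Sample {i}: reading {r} flagged; only this alert leaves the machine")
```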

The Biggest Design Challenges HPC Engineers Will Face in 2021

These exciting new applications, along with the immediate need for more bandwidth to support stay-at-home workers, create design challenges for the engineers who design the silicon powering HPC and cloud technologies. Here are some of the top design challenges that engineers will face next year and beyond.


“Because a lot of data is stored in centralized compute farms today and probably will be even into 2021, it is susceptible to attack. Hackers know where the information is because it’s not spread out across a million devices, so security is going to be a big issue for both hardware and software. That’s why Synopsys is working with government agencies like DARPA to ensure secure hardware design, which will eventually be utilized in more consumer-focused industries such as banking that have a big need for security,” said Molina. “Chips are going to get bigger and they’re going to require more performance. There are a couple of things that can potentially limit that: one is how much logic you can possibly fit on one die, and the other is how we design things of that scale. To help overcome design scale on a single die, designers are looking at 3DIC, which disaggregates a design into multiple integrated chip designs. This means that from the very beginning, designers will need to do even more early floor planning and package-based signal integrity analysis using tools like Synopsys’ 3DIC Compiler. In terms of handling growing single-die design sizes, designers need tools like Fusion Compiler that allow them to operate on an ever-increasing number of compute cores, which lends itself to usage in a cloud environment. In a cloud environment, you have access to literally thousands of compute resources. If your tools aren’t set up to run across all of those compute resources, their value to designers will be limited from an overall performance and time-to-market standpoint.”
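
The cloud-scaling point generalizes beyond any one tool: the work must be split into independent jobs that can fan out across however many workers are available. The hypothetical Python sketch below shows that generic pattern with a process pool; it is not how Fusion Compiler or any Synopsys tool is implemented, and the job names are invented.

```python
# Illustrative only: fanning independent jobs out across many workers.
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_job(block: str) -> str:
    """Stand-in for one independent unit of work, e.g. one design block."""
    return f"{block}: done"

blocks = [f"block_{i:03d}" for i in range(1000)]  # 1,000 independent jobs

if __name__ == "__main__":
    # max_workers scales with the machines at hand; in a cloud deployment
    # a pool like this could span thousands of cores.
    with ProcessPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(run_job, b) for b in blocks]
        for future in as_completed(futures):
            future.result()  # collect results as workers finish
    print(f"Completed {len(blocks)} jobs in parallel")
```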


“We’re seeing silicon geometries continue to shrink, which creates both challenges and opportunities. These reduced geometries come with a cost, so striking a balance that yields an economic benefit, and implementing these new architectures in a way that gets the most impact from the development effort, is going to be an ongoing challenge,” said Durrant.


“New technologies, such as Compute Express Link (CXL), have been brought forward to address the massive amount of data transfer happening in AI applications for tasks like image recognition. While in the past you would have to transfer all the data from memory, we’re going to see more use of cache-coherent technology to leave most of the data in its original location and transfer only the data that’s absolutely necessary. This will increase the bandwidth of these connections on one hand, while also reducing the amount of traffic going across the same channel, improving overall performance,” said Knowlton.
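
A rough software analogy for the leave-data-in-place idea (not CXL itself, which is a hardware interconnect protocol) is the difference between copying a large buffer before using a small piece of it and operating on a view that shares the original memory. The NumPy sketch below is purely illustrative; the buffer and sizes are invented.

```python
# Software analogy only: work on data where it lives instead of copying it.
import numpy as np

frame = np.random.rand(4096, 4096)   # ~128 MB buffer, e.g. image data

# Copy-everything approach: move the whole buffer to use one corner of it.
local = frame.copy()                 # transfers all ~128 MB
local[:64, :64] *= 2.0               # then touches only a 64x64 tile

# In-place approach: a view shares the original memory, so only the bytes
# the computation actually needs are ever read or written.
tile = frame[:64, :64]               # no bulk transfer
tile *= 2.0                          # modifies frame directly

print(tile.base is frame)            # True: no copy was made
```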


“HPC silicon is getting more complex, with chips exceeding 10 billion gates along with multi-die and chiplet architectures. Chiplets, which allow designers to mix and match IPs from different versions and generations, pose an integration challenge. IP verification is not just a block-level exercise anymore; it’s verifying the IP in a system context (e.g., the IP sees real-world stimulus early on). System-level hardware and software verification will become even more important as we bring up software on these chiplet/multi-die platforms. Getting the many microcontrollers and firmware stacks sitting in these platforms to work together, to reach a system boot, for example, is quite challenging. Hybrid engine solutions are a clear way to approach this. This also leads to system-level debug and performance challenges when designers need to analyze and understand system and workload behaviors across multiple levels of abstraction (i.e., OS, driver, firmware, hardware interface, busses, etc.). Another area of importance is early power analysis and estimation; any slight saving on power is monumental for large designs. Synopsys is working with chip manufacturers to help solve these verification challenges,” said Tadikonda.


Stay tuned for upcoming predictions posts that will outline our 2021 predictions for artificial intelligence (AI), low power, and automotive.
