Hyper-Convergent Chip Designs: An All-or-Nothing Approach

Mike Gianfagna

May 18, 2021

The term “hyper-convergent” is gaining momentum. It is a popular term in the fields of high-performance computing (HPC) and software infrastructure. I want to focus here on another use of the term, one that gets at the fundamental infrastructure that enables things like HPC and advanced software infrastructure: the semiconductor devices that make all of this possible. Let’s explore why hyper-convergent designs demand an all-or-nothing approach.

This is a good news/bad news story. Let’s start with the bad news.

Those who are involved in the semiconductor industry will immediately know what Moore’s Law is. Even interested observers may have heard of it, as semiconductor design and manufacturing have become hot topics these days. Moore’s Law isn’t really a law in the legal sense, or even a proven theory in the scientific sense (like E=mc²). Rather, it was an observation made by Gordon Moore in 1965 while he was working at Fairchild Semiconductor. Moore made the simple but profound observation that the number of transistors on a microchip (as they were called in 1965) doubled about every two years.

Gordon Moore went on to co-found Intel Corporation, and his observation became the driving force behind the semiconductor technology revolution. The doubling cadence was subsequently adjusted to roughly every 18 months, but the exponential nature of Moore’s Law continued, creating decades of significant opportunity in semiconductor design. Due to the nature of semiconductor manufacturing, moving an existing design to the newest process would yield a denser chip, which reduced cost. It also yielded lower power consumption and higher performance thanks to the smaller transistors. So, many companies rode the Moore’s Law wave and became very successful.
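To make the exponential concrete, here is a minimal sketch of the arithmetic (the baseline transistor count and doubling cadence below are illustrative assumptions, not historical figures):

```python
# Moore's Law as simple arithmetic: transistor counts double every
# `doubling_years`. Baseline values are illustrative, not historical data.

def transistors(year, base_year=1965, base_count=64, doubling_years=2):
    """Projected transistor count under a fixed doubling cadence."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1965, 1975, 1985, 1995, 2005):
    print(year, f"{transistors(year):,.0f}")
```

Twenty years of doubling every two years is a 1,024x increase; that compounding is what made each move to a new process so valuable.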

The semiconductor industry worked this way for quite a while. And then something started to happen. Here comes the bad news. Semiconductor process technology became extremely complex. Transistors evolved into three-dimensional devices with all kinds of counter-intuitive behaviors. Moore’s Law still “worked,” but the benefits of moving to the next process node began to diminish. Since the whole industry was accustomed to exponential increases in density and performance, this presented a real problem.

Welcome to the SysMoore Era of Chip Design

Now for the good news.

It turns out the semiconductor industry is full of very smart people. Unfazed by this slowing of the “automatic innovation” afforded by Moore’s Law, the industry began innovating the old-fashioned way: with new architectures, parallel algorithms, and new approaches to computation, often assisted by artificial intelligence techniques. At the hardware level, something fundamental happened as well. The huge, single-chip approach to design began to be replaced with multiple pieces of silicon, each with a specific purpose, all integrated in one package using new and very dense integration techniques.

In this new era of semiconductor design, the scale complexity fueled by Moore’s Law was enhanced with systemic complexity resulting from this massive convergence of technology inside one package. We call this new phase of semiconductor growth the SysMoore era, a term our co-CEO, Aart de Geus, introduced in his keynote at our recent SNUG World. SysMoore combines the continued benefits of Moore’s Law with the new benefits of systemic integration. The diagram below depicts these megatrends. Exponential growth cannot and will not be stopped.

[Figure: SysMoore vs. Moore's Law chart]

This brings me to the topic at hand: hyper-convergent design. In the context of the SysMoore era, hyper-convergent designs represent a new class of semiconductors that integrate multiple technologies, multiple protocols, and multiple architectures into one massive, highly complex, and interdependent design. That last term, interdependent, is important here and points to the tremendous challenge these hyper-convergent designs present. To get at this challenge, let’s first look at how hyper-convergent design practices evolved.

[Figure: Pre-SysMoore SoC on a PC board]
[Figure: SysMoore hyper-convergent system]

As you can see, there is a lot of technology packed into hyper-convergent designs. Designing massive SoCs was hard, and so is designing hyper-convergent systems. The complexity and design challenges manifest in different ways, however. For starters, these designs contain technology from many different sources, so a well-coordinated, best-in-class ecosystem is required. That was true for massive SoCs as well; there are just more players now.

The real challenge comes during the design and verification of this class of chips. In the pre-SysMoore era, analyses of things like signal integrity, power consumption, heat dissipation, and performance could be done independently. Each of these items had a specific impact on a particular part of the SoC or on a component on the PC board. Given the distances and signal paths involved, interactions between components were small.

EDA’s Critical Role in Hyper-Convergent Chip Designs

In the SysMoore era, hyper-convergence has created much shorter signal paths and substantial proximity effects. Power impacts signal integrity, and signal integrity impacts timing. Heat dissipation is impacted by many of these effects as well. To achieve design success for a hyper-convergent design, all of these effects need to be analyzed together, in a holistic way.
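To see why single-pass, independent analyses fall short, consider a deliberately simplified toy model (every coefficient below is invented for illustration and comes from no real flow): power raises die temperature, temperature raises leakage power, and delay degrades with temperature. Analyzing each effect once, in isolation, misses the feedback loop; iterating the coupled model to a fixed point captures it.

```python
# A toy illustration of coupled power/thermal/timing analysis.
# All coefficients are made up for demonstration; this is not any
# real EDA algorithm, just a sketch of why coupling matters.

def analyze(p_dynamic=1.0, t_ambient=25.0, iters=20):
    power, temp = p_dynamic, t_ambient
    for _ in range(iters):
        temp = t_ambient + 30.0 * power               # die heats up with power
        leakage = 0.1 * (1.0 + 0.02 * (temp - 25.0))  # leakage grows with temp
        power = p_dynamic + leakage                   # total power feeds back
    delay = 1.0 + 0.002 * (temp - 25.0)               # delay degrades with temp
    return round(power, 3), round(temp, 1), round(delay, 3)

print("independent (one pass):", analyze(iters=1))   # misses the feedback
print("holistic (converged):  ", analyze(iters=20))  # captures the feedback
```

Even in this tiny model, the single-pass estimate understates temperature and delay; at hyper-convergent scale, with many more coupled effects, the gap between isolated and holistic analysis is what sinks schedules.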

This represents a substantial departure from traditional design flows and design methodology. The good news here is that a critical enabler for the semiconductor industry is the electronic design automation, or EDA, industry. The EDA industry provides the software and hardware solutions needed to design near-impossible advanced semiconductors. And like the semiconductor industry, the EDA industry is full of very smart people who have figured out how to tame hyper-convergent design problems.

What is needed is a well-integrated, well-orchestrated set of best-in-class products. With the right “secret sauce,” these solutions can perform simultaneous analysis of hyper-convergent designs across all the dimensions of interest. Just as important is the ability to present the results of this analysis in a unified way. Do not attempt this on your own. Cobbling together familiar products and trying to consolidate the results won’t work; you will miss subtle but important interactions in your hyper-convergent design. What you don’t know can hurt you, and in this case it will take the form of a longer schedule and a costlier project. If the problems continue to mount, it may be much worse than that.

So, before you embark on a SysMoore era hyper-convergent design, make sure your solutions and design flow are hyper-convergent design friendly. At Synopsys, we’ve been studying this problem for quite some time, and we have solid technology to tame these daunting challenges. You can learn about hyper-convergent design flows here. One example of a recently announced hyper-convergent-friendly design solution can be found here. This is why hyper-convergent designs demand an all-or-nothing approach. Make sure you find the right technologies to get it all.
