The mighty silicon chip is being asked to do more and more these days. It’s not surprising, given that our electronic devices are being infused with greater intelligence and connectivity. For many demanding applications—such as AI, hyperscale data centers, and autonomous vehicles—monolithic SoCs are no longer enough. This is driving demand for multi-die systems, in which multiple dies, or chiplets, are integrated into a single package.
Multi-die systems are massive and complex, to be sure, but they also offer an answer to the slowing of Moore's law and a way to manage growing systemic complexity. Given all their interdependencies, these systems must be developed holistically, from concept to production, to achieve optimal power, performance, and area (PPA). While the steps to reach tapeout are similar to those for their monolithic counterparts, the process must be approached from a comprehensive system perspective.
How can you be sure your multi-die system will perform as intended? And do so efficiently? From design exploration through in-field monitoring, what are the key steps in between that you should consider from a system standpoint?
Techniques that work well for monolithic SoCs may not be as well suited to multi-die system architectures. Fortunately, the ecosystem supporting multi-die systems is maturing quickly, providing design teams with the tools to realize the advantages these systems bring.
There are a couple of popular ways to approach a multi-die system architecture. One is disaggregation, where a large die is partitioned into smaller dies to improve system yield and cost compared to a monolithic die; this approach can be applied to both heterogeneous and homogeneous designs. Another common approach is to assemble dies from different process technologies to optimize system functionality and performance. Such a system may contain dies designated for digital compute, analog, memory, and optical compute, each built on a process node ideal for its target functionality. In the long run, fabricating a design as multiple smaller dies rather than one large, monolithic SoC will produce better yield.
The emergence of advanced packaging technologies, such as silicon interposers, redistribution layer (RDL) interposers, and hybrid bonding, has paved the way for multi-die systems. Industry standards like HBM3 for high-bandwidth memory and UCIe for die-to-die connectivity are also key enablers, ensuring quality, consistency, and interoperability.
Another piece of the puzzle lies in the design and verification flow. In the 2D design world, teams commonly work on their portion of the design and then turn their results over to the next team in line. For multi-die systems, teams should address the challenges together, analyzing parameters such as power consumption, signal integrity, proximity effects, and heat dissipation with respect to each other.
EDA companies need to think broadly as well when developing their tool flows. A scalable, cohesive, reliable, and comprehensive multi-die system solution can enhance productivity while enabling teams to meet their time-to-market and PPA targets. To learn more about how steps including architecture exploration, system implementation, die-to-die connectivity, software development and software/hardware validation, verification, signoff, silicon lifecycle management, and testing can be addressed from a multi-die system perspective, read my new paper, “How Multi-Die Systems Are Transforming Electronic Design: A Comprehensive Approach for Integrating Heterogeneous Dies.”
If you’re ready to start developing your multi-die system, take a look at the Synopsys Multi-Die System Solution. With scale and systemic complexity on the rise, multi-die system architectures provide an avenue for the continued innovation that our smart everything world demands.