By Stelios Diamantidis, Sr. Director, Synopsys AI Solutions
Can AI design AI chips? If the question sounds like science fiction to you, you got it half right. The answer lies in the science, but it is no longer the stuff of fiction. In fact, AI designing AI hardware was the focus of Synopsys Co-Chairman and CEO Dr. Aart de Geus’s keynote delivered at the AI Hardware Summit 2021 held in September.
While de Geus’s answer (hint: it was a resounding yes!) was the tip of the spear for the whole three-day AI hardware extravaganza, the fact that AI is used to design AI chips today doesn’t mean we can all go home. There’s a lot of work to do, requiring continued innovation from engineers. Much of that work was covered in the deep-dive conversations that ensued throughout the summit, delivering a rich tapestry of takeaways for attendees.
As a sponsor for the event, Synopsys also participated in talks and panel discussions, demos, technical presentations, a roundtable, and a webinar. Here are the top four highlights:
In his keynote talk, Can AI Design AI Chips?, de Geus specifically covered the transformative architecture innovations behind the exponential growth of cloud-to-edge intelligence. We have entered a new era of systemic complexity without bound—the ‘SysMoore’ era. The SysMoore era is driven by a strong techonomic pull for 1,000x more powerful silicon solutions. What force will deliver the techonomic push to tackle systemic complexity? De Geus pondered this briefly, sharing a series of impressive results achieved by Synopsys’ new breed of autonomous design technologies. AI can design AI chips, and the future is much closer than you may think…
In the panel discussion From App to Silicon: Personalizing AI Hardware, I had the pleasure of joining Steve Oberlin, CTO for Accelerated Computing at NVIDIA, and Karl Freund, founder and principal analyst at Cambrian AI Research, to discuss the future of application-specific cognitive functions and how AI hardware development can keep up with trillion+ parameter models and 1,000x more compute within this decade. Oberlin shared interesting work that NVIDIA Research has been doing with design methodologies to meet the requirements of complex chip design. From NVCell, optimizing standard cell layouts using machine learning, to GPUs, accelerating RTL and gate-level simulation, AI for circuit design is offering compelling new directions. The panel painted a future where today’s AI capabilities come together to enable a new design paradigm, one with increased concurrency and fluidity in system design, making it possible to create personalized silicon solutions in just a few months of development time.
The Synopsys and Cerebras workshop, How Cerebras Does It: Building the Largest Chip Ever Made, and Delivering Unprecedented Deep Learning Acceleration, covered the second-generation Cerebras Wafer Scale Engine (WSE)—specifically, how a wafer can outperform some of the fastest supercomputers and enable researchers to create new neural network architectures and algorithms that are impractical on existing infrastructure. We are talking 850,000 AI-optimized cores and 40GB of on-chip memory organized on a wafer-sized chip manufactured on TSMC’s 7nm process technology. Dhiraj Mallick, vice president of Engineering and Business Development at Cerebras, described a new era of innovation in packaging and system design, hardware and software co-design, and algorithmic efficiencies. How do they make that happen? No surprise, it involves an incredible amount of innovation, state-of-the-art design technologies (by Synopsys), and some good old-fashioned engineering grit! The encouraging news is that AI is already here to help. Thomas Andersen, vice president of Engineering in the Synopsys Solutions Group, shared early results from running Synopsys DSO.ai™ on Cerebras’ WSE, autonomously achieving 9% better energy efficiency for those million or so AI cores – not a bad day at the (virtual) office for AI!
In a virtual live panel hosted by Intel, Designing AI Super Chips at the Speed of Memory, industry experts from Intel, MemVerge, and Synopsys discussed the memory challenge in supercomputing and new approaches for cost-effectively configuring more capacity in a post-Moore’s law era. “The bandwidth or throughput of this IO interface really becomes a choke point in designing a very large chip,” said Brandon Wang, vice president of Corporate Strategic Programs and New Ventures at Synopsys. Why a choke point? Well, massive parallelism in computing algorithms is deployed in EDA in order to achieve the needed turnaround time (TAT) in designing those super AI chips. In fact, HPCwire underscores the panelists’ message when it reports that in the years to come, memory innovation will strive to decouple memory from compute so that memory can scale independently.
All in all, the AI Hardware Summit was an exciting opportunity to learn from leading thinkers in edge computing about how the challenges in AI, ML, and deep learning are being solved in hardware design. Synopsys DSO.ai definitely took center stage as the world’s first AI application for chip design. In fact, Freund, who is also a contributor for Forbes, ranks DSO.ai among his Top Ten News for AI in 2021 in his post-conference coverage.
If you are interested in attending next year’s event or other events on these topics, or simply want to keep abreast of what’s happening in AI hardware design, follow us at upcoming events.
Catch up on some of our other recent AI-related blog posts: