The Untold Story of How Google’s AlphaGo Inspired Synopsys Engineers to Transform Electronic Design Automation with Deep Reinforcement Learning
When Google’s AlphaGo triumphed over Go master Lee Sedol in 2016, Synopsys engineers Joe Walston, Stelios Diamantidis, and Thomas Andersen saw an opportunity to revolutionize the semiconductor industry. Inspired by AlphaGo’s use of reinforcement learning (RL), they began to work out how RL could also optimize and accelerate electronic design automation (EDA) techniques for silicon chips. Their idea was to use artificial intelligence (AI) and advanced machine learning (ML) models to take current design practices to a whole new level. And that’s exactly what they did.
Seven years later, Sassine Ghazi, president and chief operating officer at Synopsys, spoke at the Synopsys Users Group (SNUG) Silicon Valley 2023 conference, championing what Walston, Diamantidis, and Andersen did as groundbreaking and noting how the now-historic AlphaGo-Sedol match had become a “watershed” moment in semiconductor history.
Ghazi was speaking as Synopsys launched the industry’s first full suite of AI-enhanced design technology: Synopsys.ai. The reinforcement learning-enhanced technology stack has already given rise to 160 chip designs, and in doing so is stretching the limits of both silicon performance and design team productivity.
Stepping back in time, AlphaGo’s momentous 2016 win was followed by a second dramatic victory at the 2017 Future of Go Summit. This time, the Master version of AlphaGo beat Ke Jie, previously ranked the world’s number one player, in a three-game match.
Impressed by AlphaGo’s rapid evolution and ever-more powerful capabilities, Walston, Diamantidis, and Andersen began to build a detailed proposal to transform EDA with deep reinforcement learning. Ghazi, then GM of the Synopsys Design Group, greenlit the plan, which set an accelerated six-month deadline for a minimum viable product (MVP) and stipulated unique recruiting guidelines.
“Although the team was led by senior Synopsys staff, we knew it was important to bring on AI engineers with non-EDA backgrounds who specialized in RL algorithms,” said Ghazi. “We didn’t want to overemphasize incremental thinking from domain experts who would quickly arrive at solutions based solely on traditional industry-specific processes.”
As Ghazi explained, RL algorithms that holistically analyze and rapidly solve complex, multi-layered challenges are particularly well suited for EDA. Unlike conventional ML models trained on static datasets, RL is inherently adaptive and responds rapidly to changes in its environment. For RL, time matters: experiences arrive sequentially and are not independently and identically distributed. Because time is woven into the very mechanics of RL, learning is both continuous and dynamic.
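The point about sequential, non-i.i.d. experience is easiest to see in a minimal RL loop. The sketch below is a toy epsilon-greedy bandit in Python that tunes a single hypothetical design parameter against a made-up PPA score; every name and number is illustrative, and none of it reflects how DSO.ai is actually implemented.

```python
import random

# Toy design-space environment: the agent picks one of N parameter
# settings each step and observes a hypothetical PPA score as reward.
# All names and numbers are illustrative, not Synopsys internals.
N_SETTINGS = 5

def ppa_reward(setting):
    # Reward peaks at setting 3 -- a stand-in for the unknown PPA landscape.
    return 1.0 - abs(setting - 3) * 0.2

def run_bandit(steps=2000, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * N_SETTINGS  # running value estimate per setting
    for _ in range(steps):
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if rng.random() < epsilon:
            a = rng.randrange(N_SETTINGS)
        else:
            a = max(range(N_SETTINGS), key=lambda i: q[i])
        r = ppa_reward(a)
        # Incremental update: the next choice depends on everything tried
        # so far, so the data stream is sequential, not i.i.d.
        q[a] += alpha * (r - q[a])
    return q

estimates = run_bandit()
best_setting = max(range(N_SETTINGS), key=lambda i: estimates[i])
```

Each update folds the latest reward into a running estimate, so what the agent tries next depends on everything it has tried before; that feedback loop is exactly why RL experience cannot be treated as i.i.d. samples the way supervised training data can.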
Ghazi’s innovative development strategy and focus on RL enabled the small, eight-person team to rapidly ramp and deliver a fully functioning AI-powered EDA application in less than half a year. The earliest iteration of what ultimately became Synopsys DSO.ai™ (Design Space Optimization AI) was almost immediately trialed and used by customers to build better, faster, and cheaper semiconductors.
“We engaged chip companies right away and the feedback was amazing,” said Ghazi. “We knew we had succeeded when one of our first customers told us DSO.ai was ‘unbeatable.’ That’s because designs architected with conventional methodologies simply can’t match those automatically generated by DSO.ai, which rapidly—and intelligently—optimizes power, performance, and area (PPA).”
Launched in 2020 as the first AI-enhanced design technology, DSO.ai has now surpassed an impressive milestone of 100+ commercial tape-outs, with customers reporting 3x productivity increases, up to 25% lower power draw, and significant reductions in die size. In March 2023, it was joined by Synopsys VSO.ai™ (Verification Space Optimization) and Synopsys TSO.ai (Testing Space Optimization). Together, these three solutions comprise Synopsys.ai, a comprehensive EDA suite that applies the power of AI across all phases of chip development, from system architecture to manufacturing.
Synopsys continuously updates its AI-driven EDA solutions to further optimize processes for complex, evolving technologies such as chiplets, 2.5D advanced packages, and 3D stacked die. As Ghazi confirmed, Synopsys engineers are currently evaluating next-generation AI applications such as ChatGPT for EDA. Developed by OpenAI, ChatGPT harnesses reinforcement learning from human feedback (RLHF) to learn from mistakes, challenge incorrect premises, and accurately answer follow-up questions.
Although Synopsys.ai does not use the generative AI technology upon which ChatGPT is based, engineering teams are exploring how to best leverage a new generation of multimodal large language models (LLMs) to streamline internal processes and augment existing solutions.
“One potential use we see for these AI applications—including ChatGPT—is further simplifying ease of use and enabling customers to seamlessly navigate a full-stack design, testing, and verification process,” stated Ghazi.
To be sure, design requirements in the SysMoore era are increasingly demanding with multiple technologies converging in unified packages to address growing systemic and scale complexities. As Moore’s law blends with new innovations that address these complexities, independently analyzing individual components is no longer practical. Rather, engineers require hyper-convergent design flows to deliver comprehensive, simplified analysis of entire systems, including the multi-die architectures that are enabling them to go beyond Moore’s law.
Synopsys.ai minimizes design complexity by holistically optimizing PPA and fully automating repetitive tasks such as design space exploration, verification coverage, regression analytics, and test program generation. Centralizing and simplifying design flows empowers engineers to focus on differentiation and to migrate even the most advanced chiplet designs from foundry to foundry or from process node to process node.
“Synopsys is at the forefront of making multi-die design, verification, and testing efficient and cost-effective at every stage,” added Ghazi. “These next-generation chiplets will power a wide range of use cases, including AI and ML applications, advanced driver-assistance systems (ADAS), and high-performance computing (HPC) in tomorrow’s data centers.”
More than 1,000 engineers recently gathered at the Santa Clara Convention Center to attend SNUG Silicon Valley 2023. The world’s biggest annual chip design user event offers attendees many opportunities to learn about the latest semiconductor technology and trends, from CXL 3.0 and machine learning (ML) to multi-die systems and cloud-native EDA tools. Catch up on highlights from the conference by reading these blog posts: