From Silicon To Software


AI: The Next Chapter in the Evolution of Verification

By Rob van Blommestein, Sr. Staff Product Marketing Manager, Synopsys Silicon Realization Group

It’s well understood that verification and debug take up a significant amount of time and are arguably the most challenging parts of chip development. As designs become increasingly complex and move down the nanometer scale, the problem gets compounded. Adding to the dilemma is the shrinking of market windows. Verification tools and technology must keep pace with this demand while delivering higher quality-of-results (QoR), faster time-to-results (TTR), and lower cost-of-results (CoR). The question is how. Simulator performance has always been, and will continue to be, a critical component of the verification process, but how can we go beyond simulator speed and capacity to achieve maximum quality and efficiency?

The use of artificial intelligence (AI) in electronic design automation (EDA) tools seems like the most natural step in the evolution of verification. AI already powers much of the electronics in our world today, so it only makes sense that it can and should be applied to the technologies that help build these devices. In this blog post, I’ll explore why AI should play a key role in the evolution of verification.

How AI Facilitates Chip Verification

Let’s walk through a typical chip verification flow to get a better understanding of how AI can help. The architecture team starts by building a virtual model of the chip to analyze system performance. From there, the RTL model is developed and linting is done to catch any coding errors. Static verification then kicks off the verification process, detecting structural errors in the design. Formal verification can then be done to provide a deeper analysis and prove key properties of the design. At the same time, a testbench is developed and tests are run in simulation (and even in emulation) to meet the goals of the verification plan. The simulation results are then debugged, and regressions are run again until verification coverage goals have been met.

Static verification, although effective, can be noisy with a single design flaw creating hundreds or even thousands of violations. This is where AI can step in with automated violation clustering and root-cause analysis (RCA). For example, the Synopsys VC SpyGlass® platform for static verification and Synopsys VC LP for specific low-power static verification include this AI technology. During static verification, the violations are automatically grouped together based on similar characteristics using machine learning. From there, the engineer can utilize RCA to focus on identifying and fixing a particular violation within each cluster that in turn resolves the remaining violations within the corresponding cluster. This automation can improve debug efficiency by up to 10x.
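The clustering idea can be illustrated with a minimal sketch. This is not the Synopsys implementation: the field names and the signature used to group violations are hypothetical, and real tools derive clusters from much richer ML features. The point is simply that grouping violations by shared characteristics lets an engineer fix one root-cause candidate per cluster instead of every individual violation:

```python
from collections import defaultdict

def cluster_violations(violations):
    """Group violations that share a rule and driving signal, so fixing
    one representative likely resolves the whole cluster.
    Illustrative only; real tools cluster on richer, learned features."""
    clusters = defaultdict(list)
    for v in violations:
        # Simple shared-characteristic signature (hypothetical fields)
        key = (v["rule"], v["driver"])
        clusters[key].append(v)
    # One representative per cluster becomes the root-cause candidate
    return [{"root_cause": vs[0], "resolves": len(vs)}
            for vs in clusters.values()]

violations = [
    {"rule": "CLK_GLITCH", "driver": "u_pll/clk_out", "line": 120},
    {"rule": "CLK_GLITCH", "driver": "u_pll/clk_out", "line": 348},
    {"rule": "RESET_SYNC", "driver": "rst_n",         "line": 77},
]
report = cluster_violations(violations)
# Two clusters: fixing the clk_out glitch resolves two violations at once
```

Even this naive grouping shows why debug effort shrinks: the engineer triages two clusters rather than three separate violations, and the ratio improves dramatically when one design flaw fans out into thousands of reports.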

Formal verification is the most effective method for detecting deep bugs in the design that simulation will most likely miss. To do this, formal uses a multitude of powerful engines to prove the often thousands of properties required during verification. Maximizing engine performance is critical to ensure formal verification is efficient. The Synopsys VC Formal™ product is the first formal solution to include AI and ML technology to maximize formal engine use. It uses on-the-fly ML learning as it processes each property and applies that knowledge to subsequent actions. The decisions made for each property are then stored so that future regression runs can leverage that information to achieve faster and stronger results.
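One way to picture "storing decisions for future regressions" is a per-property record of which engine succeeded last time, tried first on the next run. This is a toy sketch under assumed engine names (bmc, induction, interpolation); actual formal orchestration is far more sophisticated than a lookup table:

```python
class EngineScheduler:
    """Toy sketch: remember which formal engine proved each property,
    and try that engine first in the next regression run.
    Engine and property names here are hypothetical."""

    def __init__(self, engines):
        self.engines = list(engines)
        self.best = {}  # property name -> engine that solved it last time

    def order(self, prop):
        """Return the engine ordering for a property: the previously
        successful engine first (if any), then the remaining engines."""
        first = self.best.get(prop)
        rest = [e for e in self.engines if e != first]
        return ([first] if first else []) + rest

    def record(self, prop, engine):
        """Store which engine proved the property for future runs."""
        self.best[prop] = engine

sched = EngineScheduler(["bmc", "induction", "interpolation"])
first_try = sched.order("p_no_overflow")[0]   # no history: default order
sched.record("p_no_overflow", "induction")    # induction proved it
next_try = sched.order("p_no_overflow")[0]    # learned preference
```

The payoff is that a regression with thousands of properties spends its engine budget where it historically worked, instead of rediscovering the same engine-to-property fit every run.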

Machine Learning Enhances Simulation Performance and Efficiency

Simulation accounts for roughly 65% of all bugs found in a design. The need to run frequent regressions quickly any time the RTL changes means that simulator performance must be optimal or delays will ensue. AI lends itself well to a couple of areas here. Setting up simulation and regression run options can be time-consuming and requires significant expertise. As the code evolves and more regressions are run, the settings may need to be adjusted to maintain peak performance. Using ML to learn simulator options and automatically adjust them as needed can improve regression performance and efficiency. The Dynamic Performance Optimization (DPO) technology inside the Synopsys VCS® simulator uses ML to learn from prior regressions and tunes the simulator settings accordingly without user input.
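The learn-from-prior-regressions idea reduces, in its simplest form, to choosing the settings that historically ran fastest. The sketch below is an assumption-laden stand-in, not DPO itself: the option names are invented, and real tuning uses ML models rather than a lookup over past runs:

```python
def pick_settings(history, default):
    """Choose simulator settings based on prior regression timings.
    Returns the fastest-known configuration, or the default when no
    history exists. Option names here are hypothetical."""
    if not history:
        return default
    return min(history, key=lambda run: run["runtime_s"])["settings"]

history = [
    {"settings": {"opt_level": 2, "partition": "auto"}, "runtime_s": 540.0},
    {"settings": {"opt_level": 3, "partition": "fine"}, "runtime_s": 410.0},
]
best = pick_settings(history, default={"opt_level": 1, "partition": "auto"})
# best -> the opt_level 3 / "fine" configuration, which ran fastest
```

The essential property, shared with the real technology, is that no user input is needed: each regression's timing feeds the next regression's configuration.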

The time to reach coverage closure has by far the biggest impact on regression performance. There have been strides in making this process less manual, from constrained-random testbenches to automatic stimulus generation for missing coverage points, but this isn’t enough, and much of the time is still spent duplicating coverage already achieved. The Intelligent Coverage Optimization (ICO) feature of the Synopsys VCS simulator uses ML to optimize the quality of constrained-random stimulus, thereby providing deeper and more accurate insight into testing issues impacting coverage. This solution has been shown to speed up coverage convergence by 2-3x.
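A minimal way to see why coverage feedback avoids duplicated effort is to bias stimulus away from bins that are already well hit. This toy loop is my own illustration, not the ICO algorithm; the bin names are made up:

```python
import random

def reweight(bins, hits):
    """Bias random stimulus toward coverage bins hit least often.
    A toy coverage-feedback loop, not the actual ICO algorithm."""
    # Weight each bin inversely to how often it has already been hit
    weights = [1.0 / (1 + hits.get(b, 0)) for b in bins]
    total = sum(weights)
    return [w / total for w in weights]

bins = ["burst_read", "burst_write", "error_resp"]
hits = {"burst_read": 90, "burst_write": 10}  # error_resp never hit
probs = reweight(bins, hits)
# error_resp gets the largest share of future stimulus
next_target = random.choices(bins, weights=probs, k=1)[0]
```

Plain constrained-random stimulus would keep hammering burst_read with the same likelihood as the never-hit error_resp bin; feedback of any kind, learned or otherwise, redirects simulation cycles toward the coverage that is still missing.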

The last step in the verification flow is debug, which goes hand-in-hand with simulation. Whenever a simulation regression run completes, the reported failures need to be debugged. Since the code is constantly evolving to address bugs or add features, new bugs are continually being introduced. The process of analyzing and resolving these failures is manual and tedious. Here again, AI can help. The same automated RCA technology used for static verification can be applied to debug. The Regression Debug Automation (RDA) capability inside the Synopsys Verdi® Automated Debug System automatically classifies failures into bins based on failure characteristics. Those failures are then automatically triaged to determine whether each failure lies in the design or the testbench. RCA is then applied to root out the causes of the failures within the specific bins. The RDA technology can improve the overall debug effort by 2x.
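The binning step can be sketched with a normalized error signature: strip the volatile details (timestamps, addresses) from each failure message so that failures with the same underlying cause collapse into one bucket. This is illustrative only; the log format is invented and real RDA classification is far richer:

```python
import re
from collections import defaultdict

def bin_failures(logs):
    """Bucket regression failures by a normalized error signature so
    each bin is triaged once. Sketch only; message format is made up."""
    bins = defaultdict(list)
    for test, msg in logs:
        # Strip volatile details (times, addresses, counts) so failures
        # with the same root cause share one signature
        sig = re.sub(r"0x[0-9a-fA-F]+|\d+", "<N>", msg)
        bins[sig].append(test)
    return dict(bins)

logs = [
    ("test_dma_1", "UVM_ERROR @ 1200ns: bad resp at 0xDEAD"),
    ("test_dma_7", "UVM_ERROR @ 3400ns: bad resp at 0xBEEF"),
    ("test_irq_2", "assertion failed: irq_pending"),
]
bins = bin_failures(logs)
# Two bins: both DMA failures share one signature
```

Instead of debugging three failing tests independently, the engineer triages two bins, and RCA then works within each bin, which is where the claimed 2x debug improvement comes from.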

Pushing the Boundaries of What AI Can Do in Chip Verification

As with everything else, AI has found its way into verification. Its reach extends into nearly every aspect of the flow, from static to formal to simulation to debug. Synopsys is at the forefront of the effort to push the boundaries of what AI can do in verification. To learn more about Synopsys’s AI-driven verification, you can download our white paper, “Better, Faster, and More Efficient Verification with the Power of AI.”

In Case You Missed It

Catch up on some other verification-related posts shared on the “From Silicon to Software” blog: