Functional Chip Design Verification: When Is It Truly Finished? 

Anika Malhotra, Will Chen

Nov 21, 2021 / 5 min read

With 60% of functional verification time spent on testbench development and debug, and up to 40% devoted to testbench bring-up and coverage closure, anything you can do to shorten these phases without missing bugs is welcome. But as chip designs grow larger and methodologies get more complex, it has become common for chip verification to consume thousands of CPU hours and for closure to take longer than ever. Given the relentless time-to-market pressure on new products, so much time spent on one activity, critical as it is, may be more than you can afford.

How do you shift left on chip verification while accelerating coverage closure?

In this blog post, we’ll discuss how machine learning techniques help you find bugs earlier (even corner cases!), achieve faster coverage closure, and improve functional verification turnaround time.


Why Coverage Closure Keeps You Up at Night

If you're a design verification engineer, coverage closure is probably what keeps you awake at night. It's the often-laborious process of applying a seemingly unending series of stimuli to verify your design under test (DUT) against the coverage goals you've established in your verification plan. You need to exercise your design exhaustively to find and fix bugs, but how can you be certain you've caught the bugs that might be hidden deep inside it?
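
To make "coverage goals" concrete, here is a minimal SystemVerilog sketch of the kind of functional coverage model a verification plan item often maps to; the transaction fields, bins, and class names are hypothetical, not taken from any particular testbench.

```systemverilog
// Minimal sketch of a functional coverage goal; fields and bins are made up.
class bus_txn;
  rand bit [31:0] addr;
  rand bit [3:0]  burst_len;
  rand bit        is_write;
endclass

class bus_coverage;
  bus_txn txn;

  covergroup cg;
    cp_addr  : coverpoint txn.addr {
      bins low_region  = {[32'h0000_0000 : 32'h0000_FFFF]};
      bins high_region = {[32'hFFFF_0000 : 32'hFFFF_FFFF]};
    }
    cp_burst : coverpoint txn.burst_len;   // one bin per burst length
    cp_rw    : coverpoint txn.is_write;    // reads and writes
    x_rw_burst : cross cp_rw, cp_burst;    // every burst length in both directions
  endgroup

  function new();
    cg = new();
  endfunction

  // Call from a monitor or scoreboard for each observed transaction.
  function void sample_txn(bus_txn t);
    txn = t;
    cg.sample();
  endfunction
endclass
```

Closing coverage means generating stimuli that eventually hit every one of those bins and crosses, which is exactly where the effort piles up.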

You may dream of having a tool that delivers 100% coverage: you'd feed in your design's coverage properties, specify your constraints, and push a button for results. In reality, coverage closure is a compute- and labor-intensive process that involves a lot of manual effort to reach your target goals. As shown in Figure 1, each of the three phases in the process has different objectives and places different requirements on your tools:

  • In the early phase, you want to ramp up coverage as quickly as possible
  • In the intermediate phase, you’re running regressions and want to reduce coverage regression turnaround time
  • In the stable phase, you’re fixing bugs, regressions are holding steady, and you want to attain higher ultimate coverage within your schedule and cost parameters
Figure 1: Four Phases of SoC Verification

One of the biggest challenges to coverage closure is the lack of testbench visibility. Tuning a testbench is expensive and risk prone, with limited or no visibility into the stimulus distribution and no runtime option to change it. How do you tweak your constraints for better output? How do you get to 100% coverage? Typically, you end up wasting time and resources on over-constraints (which can cause scenarios to be missed), under-constraints (which can generate illegal stimuli), or distribution bias (where some scenarios are generated far more often than others). Adding to the pressure are high compute costs, driven by repeated stimuli (even with different random seeds) and regressions that grow ever longer as you chase the last few coverage goals.
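
As a hedged illustration of how these problems creep in, the sketch below shows a hypothetical transaction class in which one constraint quietly over-constrains the address space and a dist weight biases the burst lengths, so some coverage bins are hit rarely or never; all names and values are made up.

```systemverilog
// Hypothetical example of over-constraint and distribution bias in
// constrained-random stimulus; field names and values are illustrative only.
class pkt_item;
  rand bit [31:0] addr;
  rand bit [3:0]  burst_len;

  // Over-constraint: meant to keep addresses legal, but it also shuts out
  // the high address region the coverage model expects to see, so those
  // coverage bins can never be hit.
  constraint c_addr { addr inside {[32'h0000_0000 : 32'h0000_FFFF]}; }

  // Distribution bias: each short burst length is weighted 10x more than
  // each long one, so long-burst scenarios rarely appear in regressions.
  constraint c_burst { burst_len dist { [0:3] := 10, [4:15] := 1 }; }
endclass

module tb;
  initial begin
    pkt_item item = new();
    repeat (20) begin
      if (!item.randomize()) $error("randomization failed");
      $display("addr=%h burst_len=%0d", item.addr, item.burst_len);
    end
  end
endmodule
```

Spotting either problem from coverage reports alone is hard, which is exactly the visibility gap described above.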

While verification may never be truly complete, you certainly want to reach the point where you can decide, with confidence, that a chip is ready to be released for software development on an emulation or FPGA prototyping platform, or for physical tape-out. How can you debug quickly and converge your key metrics on your target goals earlier, so you can shift everything left and sign off with confidence?

How AI/ML Can Increase Chip Verification Efficiency

Having transformed so many different application areas, artificial intelligence (AI) and machine learning (ML) are also making an impact on electronic design automation (EDA) tools. Today, it is possible to design chips (even chips for AI!) using AI/ML technologies. In the area of chip verification, tools enriched with AI/ML can enhance the coverage process through fast delivery of analytical insights. Bringing intelligence into coverage can increase verification efficiency by:

  • Reducing repeat stimuli generation
  • Increasing the hit rate for hard-to-hit and rarely or never hit coverage bins
  • Providing stimuli distribution diagnostics and root-cause analysis

As AI/ML technologies accumulate knowledge from test to test, the optimization benefits they deliver across regression runs keep growing. One type of ML, reinforcement learning, in which the learner independently discovers the sequence of actions that yields the maximum reward, enables faster and better coverage. It exposes more bugs, including latent issues in the testbench and sometimes in the DUT as well, and it reduces regression turnaround time. Instead of requiring you to spend your time writing thousands of tests, a tool infused with AI/ML technologies supports a holistic approach to coverage closure, as shown in Figure 2, allowing you to:

  • Gain testbench insights
  • Accelerate coverage closure
  • Find more bugs early
  • Triage testbench issues quickly
Figure 2: Constrained Random Verification with ICO

Since coverage, in particular, produces so much data as simulations run, it is a prime area in which to incorporate AI/ML technologies. For example, AI/ML can automate the analysis feedback loop that a design verification engineer would otherwise perform manually and build correlations between stimuli and the different inputs. Even if the results aren't fully accurate, the effort still yields a better correlation. Better insight into that correlation can help determine the subset of tests to run to generate results and, ultimately, accelerate closure compared with the traditional, manual process of running test after test.

In addition to saving time, a more efficient verification process can also save money. More tests mean more machine time. With more EDA functions moving to the cloud, reducing the volume of tests required means spending less on cloud spot instances or on-premises compute resources.

Leading Functional Verification Solution Strengthened with AI and ML

Used by most of the top semiconductor companies, the Synopsys VCS® functional verification solution, the industry's highest-performance simulation solution, features Intelligent Coverage Optimization (ICO) technology that brings AI/ML into its arsenal. The solution can be deployed at every stage of testbench development to provide testbench visibility and analytics. Its reinforcement learning technology accelerates and improves coverage, exposing more bugs and reducing regression turnaround time. The tool has successfully uncovered many issues in stable testbenches, including constraint inconsistency failures, SystemVerilog assertion (SVA) failures, incorrect constraint specifications (both under- and over-constrained), UVM driver/monitor/checker/scoreboard problems, out-of-design-specification bugs, and RTL deadlock issues.
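
To ground a couple of those issue classes, here is a generic, hedged sketch (not ICO's actual mechanism or output) of the kind of contradictory constraint pair and SVA check that such testbench analysis tends to flush out; all names are hypothetical.

```systemverilog
// Hypothetical sketches of two of the issue classes named above.

// 1) Constraint inconsistency: these two constraints contradict each other,
//    so randomize() always fails for this configuration object.
class lane_cfg;
  rand int unsigned num_lanes;
  constraint c_min { num_lanes >= 8; }
  constraint c_max { num_lanes <= 4; }  // contradicts c_min
endclass

// 2) An SVA check that every request is granted within 4 cycles; a latent
//    RTL deadlock or a missing testbench response causes it to fire.
module req_gnt_checker(input logic clk, rst_n, req, gnt);
  property p_gnt_within_4;
    @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] gnt;
  endproperty
  a_gnt: assert property (p_gnt_within_4)
    else $error("req not granted within 4 cycles");
endmodule
```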

For comprehensive planning, coverage, and execution management, the VCS solution is natively integrated with Synopsys Verdi® automated debug system, Synopsys VC Formal™ next-generation formal verification solution, Synopsys VC Execution Manager, and Synopsys VC VIP. All are part of the Synopsys Verification Continuum® platform.

In Closing

Ultimately, when you embark on functional verification, you want to meet your coverage goals and find more bugs in less time and with fewer tests. That's the value AI/ML brings to the verification cycle. Rather than leaving you stuck in a cycle of seemingly endless tests, wondering when you can consider your chip fully verified, functional verification technology with AI/ML capabilities helps you move on to the next step with confidence.
