Formal Chip Design Verification in the Cloud: EDA Tools 

Ahmed Elzeftawi, Pratik Mahajan

Nov 15, 2021 / 6 min read

Formal chip design verification has been gaining a lot of traction in recent years due to the ever-increasing challenge of verifying all possible corner-case behaviors, along with greater industry adoption and acknowledgement of its power.

With formal verification, the more compute resources, the better. After all, the goal is to find bugs faster, uncover deep corner-case issues, and functionally verify and sign off the block. The stakes are particularly high in safety-sensitive areas such as automotive, medical, and security applications. As you continue to add horsepower to your compute infrastructure, you’re bound to see real gains in both turnaround time and convergence. However, how will your budget fare?

This is where the cloud can provide a valuable boost, enabling a shift left in chip verification for faster turnaround times. Cloud computing has improved a variety of areas in terms of quality of results (QoR), time to results, and cost of results, and formal verification is very much part of this mix. Indeed, EDA in the cloud is quickly proving to be a way to drive innovation forward for a semiconductor industry challenged by exploding computational needs and continued pressure to reduce cycle times for chip design and verification. As chips grow larger and more complex, especially for in-demand applications like high-performance computing and AI, engineering workloads grow with them, and the elasticity of the cloud provides the compute flexibility to achieve faster time to results.

In this blog post, we’ll explain how migrating formal chip verification to the cloud yields massive benefits, reducing turnaround time by up to 45x and achieving up to 3x better convergence. We used AWS Graviton2-based Amazon Elastic Compute Cloud (Amazon EC2) instances to achieve these runtime and convergence results with the Synopsys VC Formal™ next-generation formal verification solution. Read on to learn more.


What is Formal Verification?

For years, formal methods of verification were relegated to the world of academia and considered less than accessible to the typical verification engineer. The steps were rather daunting, and many had to think hard about whether they wanted to commit to the methodology. Today, all of the top semiconductor companies use formal in various design groups; it’s now considered mainstream. In concert with simulation, a formal chip verification methodology can help find more bugs faster, even before the simulation testbench is ready, while making more efficient use of overall verification resources. And you can now sign off on your blocks with formal verification.

Simulation is a form of testing, and it’s not practical to simulate every input sequence to exhaustively verify an SoC design. Formal verification methods, by contrast, provide exhaustive functional verification and design verification coverage closure, using mathematically based languages, techniques, and tools. With a formal specification (the high-level behavior, or properties) and a formal description of the implementation (the design RTL), you can complete signoff on your block while gaining a better understanding of your system as inconsistencies, ambiguities, and incompleteness are revealed.
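
To make the contrast concrete, here is a minimal sketch using the open-source Z3 SMT solver’s Python bindings (pip install z3-solver). It is not part of the VC Formal flow, and the saturating-adder "design" and its property are purely illustrative; the point is that the solver either proves the property for all 65,536 input pairs or hands back a concrete counterexample, rather than sampling inputs the way a simulation testbench would.

```python
# Minimal illustration of exhaustive formal checking with the open-source Z3
# SMT solver (pip install z3-solver). Not the VC Formal flow; the "design"
# (an 8-bit saturating adder) and the property are purely illustrative.
from z3 import (BitVec, BitVecVal, Extract, If, Not, Solver, UGE, ULT,
                ZeroExt, sat)

a = BitVec("a", 8)
b = BitVec("b", 8)

# Toy "design RTL" modeled as a bit-vector expression: add with saturation.
wide_sum = ZeroExt(1, a) + ZeroExt(1, b)          # 9-bit sum, no wraparound
result = If(ULT(wide_sum, BitVecVal(256, 9)),
            Extract(7, 0, wide_sum),              # fits in 8 bits
            BitVecVal(255, 8))                    # otherwise saturate

# Toy "property" (the kind of thing an assertion would state): the saturated
# result is never smaller than either operand.
prop = UGE(result, a)

# Ask the solver for ANY counterexample across all 65,536 input pairs.
# "unsat" means the property holds exhaustively; no input sampling involved.
solver = Solver()
solver.add(Not(prop))
if solver.check() == sat:
    print("counterexample:", solver.model())
else:
    print("property proven for all inputs")
```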

Today’s tools have simplified the process for formal chip design verification, delivering the speed, capacity, and flexibility to work on some of the most complex SoC designs. What’s more, formal verification tools include comprehensive debug and analysis techniques to quickly identify root causes. Formal apps integrated into these tools automate the checking of specific areas, such as user-defined properties, datapath validation, security flow data leak/integrity issues, and automotive functional safety verification.

A Cost/Performance Trade-off You’ll Want

While the advantages of formal verification are clear, executing on a formal strategy does require considerable compute resources. On-premises compute infrastructure calls for a substantial investment, one that many startups are unwilling or unable to make. Or, what if you do have resources, but, as you near tapeout, your infrastructure cannot scale to the level needed to close things out? Or there is time pressure to validate the fix for a post-silicon bug. These common scenarios are well suited for a cloud implementation.

Cloud-based compute resources provide the flexibility to scale out quickly to hundreds or thousands of cores when and as needed. In contrast to simulation, where the events of a given test must be processed together, formal verification breaks down into bite-sized problems that can be solved independently, so they can be spread across multiple compute resources in parallel. Scaling up the number of available cores reduces overall turnaround time and yields better convergence.
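
As a rough sketch of that fan-out pattern in Python (the real work distribution is handled by the formal tool and the cluster scheduler; check_property below is a hypothetical stand-in for handing one proof target to an engine):

```python
# Illustrative fan-out only: each proof target is an independent job, so it can
# be dispatched to whatever cores are available. In practice the formal tool
# and the cluster scheduler handle this; check_property() is a stand-in.
import concurrent.futures
import time

PROPERTIES = [f"assert_prop_{i}" for i in range(48)]   # hypothetical proof targets

def check_property(name):
    """Stand-in for sending one property to a formal engine."""
    time.sleep(0.1)                                    # pretend to do solver work
    return name, "proven"

if __name__ == "__main__":
    # More workers (cores) means more properties in flight and less wall-clock time.
    with concurrent.futures.ProcessPoolExecutor(max_workers=8) as pool:
        for name, status in pool.map(check_property, PROPERTIES):
            print(f"{name}: {status}")
```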

Formal verification also poses problems of exponential complexity that could, in principle, take unbounded time to solve; tools rely on heuristics, engines, and abstractions to make them tractable. Given this, verification engineers typically deploy multiple strategies in parallel to determine which are successful; in these cases, having flexible access to potentially thousands of cores can accelerate turnaround time substantially. Additionally, there are scenarios where time is of the essence and ready access to a huge volume of compute resources can reduce turnaround time from days to hours. Post-silicon debugging during the tapeout phase and validation of engineering change orders (ECOs) fall into this category.
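
Within a single hard property, the same idea shows up as a "portfolio" of engine strategies racing each other. The sketch below uses hypothetical strategy names and a toy runner; the takeaway is that wall-clock time is set by the fastest strategy to converge rather than the sum of all attempts.

```python
# Illustrative "portfolio" run: several engine/strategy combinations (names are
# hypothetical) attack the same hard property, and the first one to converge
# wins. A real orchestrator would also terminate the losing engines.
import concurrent.futures
import random
import time

STRATEGIES = ["bmc_deep", "k_induction", "abstraction_refinement", "pdr"]

def run_strategy(strategy):
    """Stand-in for one engine/heuristic attempt on a hard property."""
    time.sleep(random.uniform(0.1, 2.0))               # convergence time varies
    return strategy

with concurrent.futures.ThreadPoolExecutor(max_workers=len(STRATEGIES)) as pool:
    futures = [pool.submit(run_strategy, s) for s in STRATEGIES]
    winner = next(concurrent.futures.as_completed(futures))
    print("converged first:", winner.result())
    for f in futures:
        f.cancel()                                      # best effort; real tools kill losers
```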

Scaling EDA Workloads on the Cloud

AWS and Synopsys have collaborated closely on scaling exercises of EDA workloads in a cloud environment using the Synopsys VC Formal next-generation formal verification solution. The solution is designed with a hybrid workflow: users can start the process on-premises and, once they’ve reached their on-premises compute limitations, save their work and resume it on the cloud without having to restart the process. The solution can autonomously take advantage of available compute cores, including Amazon EC2 Spot Instances, without human intervention, saving effort on resource management. It also has adaptive machine-learning orchestration capabilities that assign the most appropriate formal engines to specific verification problems, a feature that runs even more efficiently on the cloud.

AWS provides a suite of cloud solutions; in this blog post, we’ll focus on two that are well suited for EDA flows:

  • AWS Graviton2 processors based on the Arm® architecture, which power Amazon EC2 general-purpose (M6g, M6gd, T4g), compute-optimized (C6g, C6gd, C6gn), and memory-optimized (R6g, R6gd, X2gd) instances
  • Scale-Out Computing on AWS solution (SOCA), which provides deployment and operation of a multiuser environment for computationally intensive workflows
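
For a sense of the raw building blocks involved (SOCA automates and schedules this kind of provisioning, so treat this purely as an illustrative sketch), the snippet below uses the AWS SDK for Python to request a single Graviton2-based, compute-optimized Spot Instance; the AMI ID, key pair, and region are placeholders.

```python
# Rough sketch only: requesting one Graviton2-based (Arm) Spot Instance with
# boto3, the AWS SDK for Python. SOCA automates provisioning like this behind
# a scheduler; the AMI ID, key pair, and region below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")         # placeholder region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",                       # placeholder arm64 AMI
    InstanceType="c6g.16xlarge",                           # Graviton2, compute-optimized
    KeyName="my-key-pair",                                 # placeholder key pair
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={"MarketType": "spot"},          # Spot pricing for cost savings
)
print("launched:", response["Instances"][0]["InstanceId"])
```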

In three different sets of runs, scaling from 12 cores up to 1,200 cores, the two companies tested the VC Formal solution in a SOCA environment on three different configurations:

  • AWS Graviton2 processors
  • x86 CPU with hyperthreading
  • x86 CPU without hyperthreading

Hyperthreading expands the apparent capacity of the system. Without hyperthreading, 16 physical cores map to 16 CPUs on the system; with hyperthreading, the same 16 physical cores present 32 CPUs. The performance delta between the two configurations is typically only 10% to 20%, yet reaching the same CPU count without hyperthreading requires more machines, so the hardware cost can be much higher. As a custom processor, AWS Graviton2 does not use hyperthreading; instead, every CPU is a physical core.
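
The bookkeeping behind that trade-off is simple arithmetic; the sketch below restates it for a hypothetical target of 1,200 schedulable CPUs, with the 10% to 20% uplift figure taken from the text.

```python
# Back-of-the-envelope bookkeeping for the hyperthreading comparison above.
# The 10-20% delta is the figure quoted in the text; the 1,200-CPU target is
# just a hypothetical sizing, not a benchmark.
physical_cores_per_machine = 16
target_cpus = 1_200                                        # schedulable CPUs wanted

cpus_with_ht = physical_cores_per_machine * 2              # 16 cores appear as 32 CPUs
cpus_without_ht = physical_cores_per_machine               # 16 cores stay 16 CPUs

machines_with_ht = -(-target_cpus // cpus_with_ht)         # ceiling division
machines_without_ht = -(-target_cpus // cpus_without_ht)

print(f"machines needed, HT on : {machines_with_ht}")      # fewer machines
print(f"machines needed, HT off: {machines_without_ht}")   # ~2x the machines
print("performance delta between the two configurations: ~10-20%")
```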

In the exercises, disabling hyperthreading on the x86 CPU generated faster results but at higher cost. With hyperthreading enabled, performance was in line with that of the AWS Graviton2 processor. The big takeaway, however, was that the AWS Graviton2 runs demonstrated substantially lower cost for comparable runtimes, along with better scaling.

Scaling the VC Formal solution from 12 to 1,200 cores on the AWS Graviton2 processor resulted in a 45x speedup with a 2x increase in compute cost, along with roughly 90% convergence (Figure 1). That’s a shift left in formal verification, from two days to less than an hour.

[Figure 1 image: VC Formal solution time taken vs. cores]

Figure 1. Scaling the VC Formal solution from 12 to 1,200 cores on the AWS Graviton2 processor resulted in a 45x speedup.
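
A quick back-of-the-envelope check shows why the cost only roughly doubles: compute cost tracks core-hours, and scaling cores by 100x while cutting runtime by 45x works out to about 2.2x the core-hours, in line with the reported ~2x (instance pricing differences are ignored in this simple sketch).

```python
# Sanity check of the reported scaling numbers: compute cost tracks core-hours
# (cores x runtime), so 100x the cores at 1/45th the runtime should land near
# the ~2x cost increase observed. Per-instance pricing differences are ignored.
baseline_cores, scaled_cores = 12, 1_200
speedup = 45

core_ratio = scaled_cores / baseline_cores                 # 100x more cores
core_hour_ratio = core_ratio / speedup                     # relative core-hours

print(f"cores scaled by         : {core_ratio:.0f}x")
print(f"runtime reduced by      : {speedup}x")
print(f"core-hours (cost proxy) : {core_hour_ratio:.1f}x") # ~2.2x, in line with ~2x
```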

As for SOCA, this isn’t the first time that the two companies have utilized the solution for a Synopsys EDA workload. In a series of previous exercises, the two companies showed that Synopsys Proteus full-chip mask synthesis solution can successfully scale to 24,000 cores for a single design with 98% scalability, while also taking advantage of cost savings on Amazon EC2 Spot Instances.

Advancing Semiconductor Innovation

The virtually limitless resources of the cloud provide the horsepower to perform massive simulation, formal chip design verification, timing signoff, and physical verification tasks to enhance quality of results. From a cost standpoint, the cloud is attractive as it minimizes, or in some cases eliminates, an organization’s own compute infrastructure investment.

Formal verification, with its exhaustive nature, benefits greatly from a cloud environment. The exercises between AWS and Synopsys have demonstrated that a tool such as the VC Formal solution can deliver the big performance and convergence boost needed, at a substantial cost advantage, when run on cloud compute resources. The collaboration between the two companies exemplifies how cloud-based EDA solutions provide another tool in the toolbox for the semiconductor industry’s relentless drive to achieve better quality, time to results, and cost of results. For formal verification, running workloads on the cloud provides the shift left that’s needed to reduce runtimes and achieve better convergence.
