Analog Simulation Insights


AMS Verification at DVCon – Part III

To complete my review of the Analog/Mixed-Signal Verification session at DVCon-08, let's turn to the last paper presented, by Walter Hartong and John Pierce of Cadence Design Systems.

7.3 Analog Mixed-Signal Verification: Can Modern Approach Replace the Traditional Way?

This presentation explored AMS verification from both the Analog and the Digital sides, as well as how the two can meet (reminiscent of one of my most popular posts here: Design Verification. Analog… meet digital. Digital… meet analog). As in the first two papers in this session (see Part I and Part II), the authors discussed how event-driven techniques are preferred at the SoC level for integration of analog block behavior. Also, as in the preceding paper from Chris Jones and Jeff McNeal of Synopsys, a hierarchical verification methodology was described.

When one looks at the relative performance figures in the authors' proposed hierarchical AMS methodology, one can readily see why big-D/little-a verification engineers might prefer the event-driven approach:

  1. Extracted transistor netlist including parasitic elements (0.1-0.5 X)
  2. Transistor level ideal circuit (1.0 X)
  3. FastSPICE simulation (5-20 X)
  4. Analog behavioral modeling (5-100 X)
  5. Real number modeling (50-500 X)
  6. Pure digital model (500-10,000 X)

There is a trade-off between performance and accuracy that must be made in applying any hierarchical verification (and modeling) methodology. In this list it is assumed that the most accurate modeling approach, the extracted post-layout netlist, is also the slowest. The authors were probably referring to a flat post-layout netlist simulated with standard SPICE. However, in my experience the combination of a hierarchical FastSPICE engine with hierarchical back-annotation of parasitics can actually deliver a large speedup over SPICE while simultaneously increasing the accuracy of results. What's that, you say? Increased performance and accuracy at the same time? That's impossible!
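As an aside, since "real number modeling" (item 5 in the list above) is often the least familiar abstraction to analog designers, here is a minimal sketch of what such a model can look like. It is purely illustrative and not from the paper: a single-pole low-pass filter whose voltage is carried as a discrete-time real value, so an ordinary event-driven simulator can evaluate it without any SPICE engine. The module name, port names, and fixed time step are my own assumptions.

```systemverilog
`timescale 1ns/1ps
// Hypothetical real number model (RNM) of a single-pole RC low-pass filter.
// The analog voltage is carried as a discrete-time 'real' value, so a plain
// event-driven digital simulator can evaluate it without any SPICE engine.
module rc_lpf_rnm #(
  parameter real TAU_S   = 1.0e-6,   // filter time constant, in seconds
  parameter real TSTEP_S = 1.0e-8    // fixed update interval, in seconds
) (
  input  real vin,                   // driving voltage from an upstream model
  output real vout                   // filtered output voltage
);
  real alpha;                        // discrete-time filter coefficient

  initial begin
    alpha = 1.0 - $exp(-TSTEP_S / TAU_S);
    vout  = 0.0;
  end

  // Self-scheduled update loop: the simulator only wakes up once per TSTEP_S,
  // which is where the large speedup over transistor-level SPICE comes from.
  always #(TSTEP_S * 1.0e9) begin    // delay expressed in the 1 ns timescale
    vout <= vout + alpha * (vin - vout);
  end
endmodule
```

The fixed time step trades accuracy for speed in exactly the way the list above suggests: make TSTEP_S coarser and the model runs faster but tracks the analog waveform less faithfully.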

Let's look at this more closely. It may make more sense when you consider that pre-layout simulation, which designers tend to do more of than anything else, is itself a rather unrealistic abstraction. It is a mistake to treat the pre-layout simulations as somehow "golden". It's the silicon that matters, and to get closer to the silicon you must do post-layout simulation. So, no argument so far on which model is most accurate.

Looking next at simulation performance, of course a FastSPICE simulator will provide higher speed than standard SPICE. But a total solution for post-layout verification, such as Synopsys' HSIMplus, also provides a great increase in capacity. Capacity is a critical third dimension in the hierarchical methodology trade-off that is sometimes overlooked when focusing only on performance versus accuracy. By having the increased capacity to accurately simulate post-layout effects that no standard SPICE simulator could handle, you simultaneously increase accuracy as well. More capacity = more accuracy. I sometimes see designers getting hung up on matching FastSPICE results to pre-layout SPICE, when they should really be looking at overall accuracy in terms of how closely the simulation models the eventual silicon. The effects of modeling the parasitics in the power and signal nets, the coupling of sensitive nodes, and the layout proximity effects on transistors more than make up for any loss of accuracy in FastSPICE from, for example, exchanging analytic device models for table models.

So my hierarchical flow would look like this, in order of highest to lowest accuracy:

  1. FastSPICE post-layout simulation (5-20 X)
  2. Transistor level pre-layout SPICE simulation (1.0 X)
  3. FastSPICE pre-layout simulation (10-100 X)
  4. Analog behavioral modeling in SPICE or FastSPICE (1-100 X)
  5. Real number modeling with event-driven simulation (50-500 X)
  6. Pure digital model (500-10,000 X)

These are approximate rules-of-thumb, as you can’t compare a FastSPICE simulation to SPICE when SPICE never even gets started because it chokes on the size of the post-layout netlist. Another point to be aware of, as the first two authors in this session at DVCon pointed out, is that there is no guarantee that an analog behavioral model will actually speed up simulation. Again, it’s a speed versus accuracy tradeoff in the model – but that’s a subject for another time.

With all these options and trade-offs to make in order to complete verification of an AMS SoC, it is important to start out with a documented verification plan, as the authors point out in their paper. Tools for developing verification plans are more prevalent in digital verification, but it is becoming ever more critical that analog meet digital early in the design project. One master verification plan should be created that describes the objectives at each step and the level of abstraction to be used.

Looking at other methods that are used in digital verification, the authors discussed how the concepts of random stimulus, automated self-checking, and coverage can be applied to AMS verification. My friends at Designer's Guide Consulting have also proposed a method for applying some of these digital techniques to analog using Verilog-AMS and scripting, in their paper on Verification of Complex Analog and RF IC Designs. Do these digital verification concepts apply to analog and mixed-signal?

1. Random Stimulus

Random stimulus, or constrained-random testing, is (excuse the expression) a much more logical concept when applied to digital designs. The idea is that you can catch unexpected errors that a directed test (i.e., one that a designer has specified) may not catch. For AMS verification, random stimulus of a digitally controlled mixed-signal circuit would also seem to be straightforward: you can randomly generate digital vectors that are defined within a Boolean space. But how do you define the space of all possible analog stimuli, including "illegal" operating conditions? How far do you extend into the "illegal" input range? It is up to the designer or verification engineer to define the constraints, because there is nothing as simple as a Boolean space to search. Additional work is required if one wants to automatically flag illegal output conditions as well. It seems to me that for AMS this concept essentially reverts, for practical reasons, to a set of directed tests, possibly extended manually with "what-if?" cases where something "bad" happens.
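To make the constraint problem concrete, here is a small sketch of how such stimulus might be written in SystemVerilog once the analog range is quantized. The class name, signal names, and voltage limits are my own illustrative assumptions, not from the paper: the digital control word is randomized over its full Boolean space, while the analog input is randomized as an integer code and scaled to a voltage, with a weighted constraint that occasionally pushes just beyond the legal range to cover the "what-if" cases.

```systemverilog
// Hypothetical constrained-random stimulus for a digitally controlled analog
// block: the digital control word is free to roam its full Boolean space,
// while the analog input must be bounded by designer-supplied limits.
class ams_stimulus;
  rand bit [7:0] ctrl_word;    // digital control: all 2^8 values are legal
  rand int       vin_mv;       // analog input level, quantized to millivolts

  // Legal input range (assumed here to be 0 V to 1.2 V) plus a small
  // deliberate overdrive band so "illegal" conditions are occasionally hit.
  constraint c_vin {
    vin_mv dist {
      [0:1200]    :/ 95,       // ~95% of stimuli stay within the legal range
      [1201:1400] :/ 5         // ~5% push into the overdrive/"what-if" band
    };
  }

  // Convert the quantized code to the real voltage driven into the model.
  function real vin_volts();
    return vin_mv / 1000.0;
  endfunction
endclass

module tb_stimulus_demo;
  initial begin
    ams_stimulus stim = new();
    repeat (5) begin
      void'(stim.randomize());
      $display("ctrl=%0h vin=%.3f V", stim.ctrl_word, stim.vin_volts());
    end
  end
endmodule
```

Even in this sketch, notice how much of the "randomness" is really designer knowledge encoded by hand, which is exactly why the approach tends to drift back toward directed testing for analog.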

2. Automated Self-Checking

Automated self-checking in digital verification refers to the use of techniques such as assertions. Assertions are generally used to capture temporal conditions that must hold, written by a designer as part of a functional specification, such as "this signal must stay high for 3 clock cycles after reset". Again, this is not so directly applicable to analog, because our circuits are not necessarily event-driven or temporal. However, in AMS designs there are states or conditions that we want to monitor to make sure they do or do not occur. Keeping a transistor within a safe operating area of terminal voltage is an example. Functions are sometimes built into simulators to allow such conditions to be monitored. Synopsys' HSIMplus CircuitCheck provides a sophisticated set of assertion-like checks that can be run either as part of the transient simulation or as a pre-simulation test after the netlist is read in and the circuit can be examined quasi-statically. The pre-simulation tests have the benefit of being vector-less, so they don't require a directed test to be exercised.
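As a rough illustration of the kind of check involved (this is not HSIMplus CircuitCheck syntax, which is tool-specific), a safe-operating-area style monitor can also be written directly in a SystemVerilog testbench. The module, signal names, sampling clock, and voltage limit below are all hypothetical.

```systemverilog
// Hypothetical safe-operating-area (SOA) monitor: flag any sampled point at
// which the drain-source voltage of a watched device exceeds its rated limit.
module soa_monitor #(
  parameter real VDS_MAX = 1.32      // rated terminal voltage limit, in volts
) (
  input logic clk,                   // sampling clock from the digital side
  input real  vds                    // probed or modeled device voltage
);
  // Concurrent assertion, evaluated at every sampling edge of the clock.
  property p_vds_within_soa;
    @(posedge clk) vds <= VDS_MAX;
  endproperty

  assert property (p_vds_within_soa)
    else $error("SOA violation: vds = %.3f V exceeds %.3f V", vds, VDS_MAX);
endmodule
```

Note that this style still depends on a stimulus driving the circuit into trouble, which is precisely the limitation that the vector-less pre-simulation checks avoid.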

3. Functional Coverage

For digital verification, coverage can refer to "code coverage", since one may want to know how much of the RTL code has actually been exercised. That concept doesn't apply to AMS, but it can be useful to have a figure of merit for how many of the designer-defined tests have been completed or how much of the operating range has been tested. For AMS verification, some coverage items might be: a measure of simulations completed versus plan, completeness of the dynamic range covered, and the percentage of environment and operating conditions and process corners verified.
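Here is a sketch of how such coverage items could be tracked with a SystemVerilog covergroup, if one quantizes the analog operating range so the standard digital coverage machinery can be reused. The bin boundaries, corner names, and ranges are invented purely for illustration.

```systemverilog
// Hypothetical functional coverage for an AMS block: the operating range and
// corners are quantized into bins so standard digital coverage reporting can
// answer "how much of the verification plan has actually been exercised?".
typedef enum {TT, FF, SS, FS, SF} corner_e;   // process corners (illustrative)

class ams_coverage;
  int      vin_mv;     // stimulus amplitude, quantized to millivolts
  int      temp_c;     // ambient temperature, in degrees Celsius
  corner_e corner;     // process corner of the current simulation run

  covergroup cg;
    cp_vin : coverpoint vin_mv {
      bins low  = {[0:399]};
      bins mid  = {[400:799]};
      bins high = {[800:1200]};
    }
    cp_temp : coverpoint temp_c {
      bins cold = {[-40:0]};
      bins room = {[1:59]};
      bins hot  = {[60:125]};
    }
    cp_corner : coverpoint corner;             // one automatic bin per corner
    cx_op     : cross cp_vin, cp_temp, cp_corner;
  endgroup

  function new();
    cg = new();
  endfunction

  // Call once per completed simulation or measurement to record coverage.
  function void record(int vin, int temp, corner_e c);
    vin_mv = vin;
    temp_c = temp;
    corner = c;
    cg.sample();
  endfunction
endclass
```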

In the end, a conclusion by the authors of "Analog Mixed-Signal Verification: Can Modern Approach Replace the Traditional Way?" was that "pure" analog and digital methods remain essential for designs within their respective domains, but that a common, integrated mixed-signal methodology does not yet exist.

What do you think? How does your project team integrate analog and digital verification strategies? How should "Analog Meet Digital"? I heard from a few people at DVCon who said they were looking for a place to discuss how to bring these two worlds together, and that while an AMS session at DVCon was good, it tends to get lost amongst the main focus on digital methodology. Until AMS Verification has its own conference, I invite you all to use this space to participate, share, reply, and comment on issues such as those discussed at DVCon.

-Mike
