Posted by Alex Seibulescu on September 20, 2010
You may remember the suggestive metaphor of a butterfly flapping its wings in California and causing a tornado on the other side of the globe. No, I am not attempting to apply chaos theory to coverage (although… ;-)); I am merely picking up where I left off last time and making the case that the various parts that collectively define the coverage problem are tightly connected, and that the decisions we make for each part will inevitably have a profound effect on the others. Let’s take a look at some of these connections and the potential pitfalls of ignoring them.
The connection between the coverage model and the checkers is pretty obvious: what good is perfect coverage if you haven’t checked anything? And thoroughly checking expected outcomes without generating sufficient scenarios is like getting hooked up to the most sophisticated EKG machine while sleeping and concluding that your heart works perfectly. Either way, it just doesn’t work; my buddy Coverage and his pal Checker always work as a team.
Next, there’s the connection between the coverage model and the interpretation of coverage results. Sure, you can easily write crosses on any signals you want, but are you prepared to handle the deluge of data that’s going to come your way? When you’re looking for a needle in a haystack, adding more hay is generally not the shrewdest move.
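To make the extra hay concrete, here is a minimal SystemVerilog sketch (the signal names and bin ranges are hypothetical, not from any particular design): three innocuous-looking coverpoints turn into a few hundred cross bins the moment you cross them.

```systemverilog
// Hypothetical bus-transaction covergroup: each coverpoint looks harmless,
// but the cross multiplies their bin counts.
covergroup bus_cg @(posedge clk);
  cp_cmd  : coverpoint cmd  { bins c[] = {[0:7]};  }  //  8 bins
  cp_len  : coverpoint len  { bins l[] = {[1:16]}; }  // 16 bins
  cp_resp : coverpoint resp { bins r[] = {[0:3]};  }  //  4 bins
  // 8 * 16 * 4 = 512 cross bins, each a potential hole you must explain.
  cmd_x_len_x_resp : cross cp_cmd, cp_len, cp_resp;
endgroup
```

Every cross you add multiplies, not adds, the number of bins whose misses you will later have to interpret.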
Now let’s look at my favorite, the more obscure connection between coverage model and coverage convergence. Can you develop your coverage model so that it is easier or faster to converge on, without, of course, compromising the quality of the model? You sure can, and you sure should. Let me give you an example. Say you’re in Phoenix, Arizona and you plan to visit the Grand Canyon. You can do that by visiting either the North or the South Rim. Both will achieve the same goal of seeing the Grand Canyon, but if you pick the North Rim as your coverage target, your journey will take longer and be more difficult. Oh, and if you think it’s a stretch to use the Grand Canyon as a stand-in for a bug, there are some semiconductor companies out there whose functional bugs ended up in shipped products, and I’m sure for them even the Grand Canyon seems tiny.
What I’m getting at is that, everything else being equal, the fewer levels of logic between the signals you sample in your coverage model and the signals you randomize, the better your chance of convergence. As a corollary, the sooner you sample the signals, the sooner you will know whether you hit your coverage targets, and therefore the faster you will converge. Sure, some coverage targets lie deep in your design and can only be sampled towards the end of simulation, but let’s KISS whenever we can!
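As an illustration (the class, hierarchy path, and covergroup names below are hypothetical), sampling the randomized fields of a transaction at generation time gives near-immediate coverage feedback, whereas a covergroup watching a deep internal signal only fires after the stimulus has propagated through many levels of logic:

```systemverilog
class packet;
  rand bit [3:0] kind;
  rand bit [7:0] length;

  // Sampled directly on the randomized fields: call stim_cg.sample()
  // right after randomize() and the feedback is immediate.
  covergroup stim_cg;
    coverpoint kind;
    coverpoint length { bins short_pkt = {[0:63]}; bins long_pkt = {[64:255]}; }
  endgroup

  function new();
    stim_cg = new();
  endfunction
endclass

// By contrast, this covergroup on a signal buried deep in the design
// only triggers once the stimulus has rippled all the way down to it.
covergroup deep_cg @(posedge clk);
  coverpoint dut.core.fifo.almost_full;
endgroup
```

The first covergroup converges at the speed of the randomizer; the second converges at the speed of the design.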
The connection between interpreting coverage results and coverage convergence seems pretty straightforward. You look at which targets have not been hit yet, and if you can properly interpret what that actually means, you can decide whether they need to be excluded or whether more effort needs to go into covering them. The trick, though, is the interpretation part. Think of the map from Phoenix to the Grand Canyon. You may know exactly where you are on the map, but if you’re zoomed in too much, you won’t be able to tell how long you still have to drive or even which roads you will have to take next. Similarly, simply looking at the coverage results may provide you with precise data but only limited information. Knowing exactly how many times a coverage target was hit rarely translates into understanding exactly which feature has been verified. You will need to bring the coverage results to a higher level of abstraction if you want to use them as a map towards coverage convergence.
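When interpretation concludes that a target is genuinely unreachable, SystemVerilog lets you record that decision in the model itself rather than in a side spreadsheet. A small sketch, assuming a hypothetical response signal and enum values:

```systemverilog
covergroup resp_cg @(posedge clk);
  cp_resp : coverpoint resp {
    bins ok     = {OKAY};
    bins slverr = {SLVERR};
    // Interpretation decided this response cannot occur in this
    // configuration, so exclude it instead of chasing it forever.
    ignore_bins no_decerr = {DECERR};
  }
endgroup
```

Encoding the exclusion next to the bin keeps the reasoning visible the next time someone reads the coverage report.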
I deliberately left extracting the project risk factor from coverage results for the end. Although it is arguably the most challenging piece, more art than science as a wise friend of mine keeps reminding me, it should be crystal clear that it is strongly connected to all the other pieces of the coverage puzzle. The peace of mind that will hopefully engulf you once you push the tape-out button will depend on what kind of coverage model you designed, whether you properly matched coverage and checking, how well you interpreted the coverage results, and how far you pushed for coverage convergence. If you don’t believe that, there is a good chance that a butterfly flapping its wings in California will cause a chip failure somewhere in the world.
"Coverage is by now pervasive in most verification flows but has in the modest opinion of this blogger, yet to reach its full potential. Although I have spent most of my 18 years in EDA (ouch!) on the R&D side, I have always been a good listener to our customers' concerns. My hope is that this blog will be an informal venue for all of us to explore how to push the benefits of Coverage and related methodologies to new levels" —Alex Seibulescu