Coverage is My Friend
  • About

    "Coverage is by now pervasive in most verification flows but has in the modest opinion of this blogger, yet to reach its full potential. Although I have spent most of my 18 years in EDA (ouch!) on the R&D side, I have always been a good listener to our customers' concerns. My hope is that this blog will be an informal venue for all of us to explore how to push the benefits of Coverage and related methodologies to new levels" —Alex Seibulescu

Convergence Driven Coverage Modeling

Posted by Alex Seibulescu on January 10th, 2012

“Say what??” my friend shot back when I asked him if he had heard about the latest trend in coverage modeling called CDCM, or Convergence Driven Coverage Modeling. I thoroughly enjoyed watching him scour the most obscure corners of his memory in search of something resembling my question. After all, there weren’t too many coverage-related topics he wasn’t aware of, let alone “the latest trend” in coverage models. Of course, he had no chance of finding anything, as I had just made up the term to throw him off his usual cool. However, once I started explaining what I had in mind, he got really enthusiastic.

Most verification methodology gurus will tell you that your coverage model needs to capture the higher level functional features of your chip, so that once you hit your coverage you can be fairly certain that you have verified the corresponding features. They are of course right; the missing piece, however, is how to map those features into a matching cover target. One can easily imagine that there are multiple equivalent ways to capture a desired functionality with a cover goal, so the fundamental premise of Convergence Driven Coverage Modeling is simply this: among all the equivalent cover models for a given feature, choose the one that is easiest for a Verification Engineer to target. As we all know, constrained random stimulus generation will get us only so far in terms of hitting all the coverage goals. After that, it is up to the DV ladies and gentlemen to figure out how the stimulus can be manipulated to hit those oh so frustrating remaining coverage holes.

As an example, take an arbiter whose job is to arbitrate among competing requests for a set of resources, making sure that nobody starves, that interrupts and errors are adequately addressed, that performance doesn’t take a hit, and so on. Ultimately there will be one or more state machines doing the heavy lifting, and one could develop a cover model that captures all reasonable state combinations. Hitting the entire cover model will likely ensure that the arbiter performs as required. However, if some of the state combinations are not hit, as will likely be the case, figuring out how to sequence the various requests, error injectors and interrupts from the verification agents in order to generate that particular scenario will be a daunting task. Instead, a serious attempt should be made to “push out” the coverage model as close to the interface of the block as possible and create an equivalent one that is easier to comprehend for somebody who doesn’t have intimate knowledge of the implementation. The designer’s help may be required to do this, but instead of asking for it in the heat of the verification closure battle, when every minute to tape-out counts, it can be requested upfront as part of the verification planning process. Coverage closure is back in the hands of the Verification Engineer, and everybody is happy.
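
To make the idea concrete, here is a minimal SystemVerilog sketch contrasting the two equivalent cover models for a hypothetical four-requester arbiter. All names (arb_if, curr_state, err_inject and so on) are illustrative, not taken from any particular design: the first covergroup is implementation-centric and hard to steer from the testbench, while the second captures the same scenarios at the block interface, where the verification agents have direct control.

  // A minimal sketch contrasting two equivalent cover models for a
  // hypothetical four-requester arbiter; all names are illustrative.

  covergroup arb_fsm_cg @(posedge arb_if.clk);       // implementation-centric
    cp_state : coverpoint dut.u_arb.curr_state;      // internal FSM state
    cp_pend  : coverpoint dut.u_arb.pend_cnt { bins pend[] = {[0:4]}; }
    x_impl   : cross cp_state, cp_pend;              // hard to steer from the agents
  endgroup

  covergroup arb_if_cg @(posedge arb_if.clk);        // interface-centric equivalent
    cp_req : coverpoint arb_if.req {                 // which requesters compete
      bins none     = {4'b0000};
      bins single[] = {4'b0001, 4'b0010, 4'b0100, 4'b1000};
      bins all      = {4'b1111};
      bins some     = default;
    }
    cp_err : coverpoint arb_if.err_inject;
    cp_irq : coverpoint arb_if.irq;
    x_scen : cross cp_req, cp_err, cp_irq;           // scenarios the agents can drive directly
  endgroup

  arb_fsm_cg fsm_cg = new();
  arb_if_cg  if_cg  = new();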

After finishing the discussion, I invited my friend for a round of mini-golf. The deciding hole was one where you could either get the ball into a long tunnel, after which it always ended up in the cup, or take many more strokes to go around the obstacle and eventually get there. I chose the tunnel approach and won. Q.E.D.

Posted in Coverage convergence, coverage driven verification, Coverage model, efficient verification, Functional coverage, metric driven verification, verification planning | Comments Off

High Level Coverage

Posted by Alex Seibulescu on October 31st, 2011

Unless you’ve completely isolated yourself from the wisdom of Verification and EDA pundits, you must have heard at some point that High Level Verification and High Level Synthesis are the way of the future. This has been true for at least the past 10 years and will most likely still be true for the next 10. The value is obviously there, but there are a myriad of t’s to cross and i’s to dot before some form of high level flow becomes a viable alternative to RTL verification. In the meantime we still need to find a way to make the current tried and true verification flows more efficient. So, as I always do when I grapple with existential verification topics, I paid a visit to my friend Coverage and, as always, he had an answer for me: High Level Coverage.

The idea is pretty straightforward. Modern verification tools (hint, hint) allow you to mix a SystemVerilog testbench with a DUT assembled from high level SystemC or C++ models, many of which can be easily obtained off-the-shelf. Gradually, the high level models can be swapped out for their more timing- and power-accurate RTL equivalents until we’re completely back in the RTL world. This appears to be an increasingly popular approach to dealing with the ever growing verification task we’re all aware of. What is not yet widely recognized, however, is the ability to shift part of the RTL coverage closure task to the higher design abstraction level. Let me explain. Some verification tools (more hint, hint) allow you to tap into the internal signals of the high level models and sample them in SystemVerilog functional covergroups or properties. This opens up the opportunity to develop and debug both the stimulus and the coverage side of the testbench in a much more efficient setting. Tests can be developed and graded based on the collected coverage, constraints can be relaxed or tightened, and the functional coverage model can be refined before any RTL is available. The setup can then be re-used when the high level models are replaced with RTL, bringing a significant boost in overall productivity. Invariably, there will be some changes required to extend the transaction level stimulus to a more finely controlled one, and new coverage targets may be added to cover details not visible in the virtual models, but a significant part of the testbench and coverage model development and debugging time has been shifted to an earlier stage where productivity can be orders of magnitude higher.
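
How a given tool exposes the internals of a SystemC/C++ model to SystemVerilog varies, so treat the following as a minimal sketch under one common assumption: the high level model provides a simple DPI-C accessor (the function name tlm_get_fifo_level is hypothetical). The covergroup itself is plain SystemVerilog and carries over unchanged once the RTL replaces the model.

  // Minimal sketch, assuming the C++/SystemC model exposes a DPI-C accessor
  // (tlm_get_fifo_level is a hypothetical name); how a particular tool lets
  // you tap model internals directly is tool-specific.
  import "DPI-C" function int tlm_get_fifo_level();

  class tlm_cov;
    int fifo_level;

    covergroup fifo_cg;
      coverpoint fifo_level {
        bins empty = {0};
        bins low   = {[1:3]};
        bins high  = {[4:6]};
        bins full  = {7};
      }
    endgroup

    function new();
      fifo_cg = new();
    endfunction

    // Called from the testbench whenever a transaction completes
    function void sample_model();
      fifo_level = tlm_get_fifo_level();
      fifo_cg.sample();
    endfunction
  endclass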

Some people never grow old; they just keep adapting to the ever changing times. My friend Coverage is definitely one of them.

Posted in Coverage convergence, coverage driven verification, Coverage model, efficient verification, Functional coverage, metric driven verification | Comments Off

Second Class (Coverage) Citizens

Posted by Alex Seibulescu on September 30th, 2011

My friend Coverage has turned into a revolutionary. We live in tumultuous times, so this may not come as a surprise, but seeing my friend pacing up and down the room, threatening imaginary adversaries, was unnerving, so I had to get to the bottom of it. What could possibly turn my mild mannered friend into a raging firebrand? Once he settled down a bit, things started to clear up. It turns out that for a while now, people have been turning a blind eye to those last few percent of coverage holes as long as they did not belong to the elite of cover targets, the ones that always need to be properly taken care of. “Even if there’s a bug in that area, we can always come up with a workaround or fix it in software,” people would say, and who can blame them? Deadline pressures are not for the faint of heart, and 2%-3% of obscure coverage targets are not going to stand in the way of tape-out bliss, right? Well, it turns out that with the relentless increase in design size and complexity and the worrisome shortening of the verification cycle, the number of second class coverage targets has swollen, their voice has become louder and they now threaten the verification establishment. It is ever more likely that continuing to ignore an increasing number of coverage holes will eventually lead to a silicon bug for which no quick ECO or software fix is available, and disaster will strike.

Since we’re all pragmatists, we know that some un-hit coverage targets will always end up being waived off, but wouldn’t it be better to make an upfront decision about which coverage goals we’re ready to grudgingly sweep under the carpet if need be, and let sound engineering rather than last minute time pressure be the judge of our compromise? One could mark such cover targets with special attributes as part of the verification plan, so that the decision is properly documented and tracked rather than left to a hasty call in the heat of the tape-out battle.
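
One lightweight way to record such an upfront decision, sketched below in plain SystemVerilog, is to attach the waiver status to the cover model itself through the standard covergroup options (weight, comment); tool-specific plan attributes or exclusion files can serve the same purpose. The signal names and the waiver text are illustrative.

  // Minimal sketch: documenting an upfront "may be waived" decision in the
  // cover model itself using standard covergroup options.
  covergroup err_retry_cg @(posedge clk);
    option.comment = "vplan 4.3.2 - reviewed 2011-09-30";

    cp_retry : coverpoint retry_cnt {
      bins normal    = {[0:3]};
      bins excessive = {[4:15]};
    }

    // Rare double-error corner: keep it visible and tracked, but give it zero
    // weight so it does not gate the headline coverage number.
    cp_double_err : coverpoint double_err {
      option.weight  = 0;
      option.comment = "second-class target - tape-out waiver candidate";
      bins seen = {1};
    }
  endgroup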

Treat your cover targets well, even the less important ones, and my friend will once again be by your side when the tape-out bell rings.

Posted in Coverage convergence, coverage driven verification, Coverage model, Tape-out criteria, verification planning | 1 Comment »

Silicon Billionaires

Posted by Alex Seibulescu on August 24th, 2011

There are many of them these days. They used to be multi-millionaires, but you know how these things go. Now, before you get too excited to find out the latest scoop on the yacht sailing, private jet flying, Davos skiing crowd, remember that I don’t typically hang out with Larry, Steve or Mark; I hang out with my friend Coverage. So this is not about the folks on the cover of Forbes Magazine, it’s about our favorite chips and the billion transistors they pack these days. More precisely, it’s about verifying that a billion switches turning on and off somehow harmoniously join forces to perform billions of instructions per second, transfer the proverbial Library of Congress across the country in minutes, or make your picture on the iPad look better than reality. How do we make sure that these ever growing billionaires do what they’re supposed to do without collapsing under the enormous amount of verification data they generate?

Armed with traditional wisdom, I went and told my friend Coverage that the only way forward is to raise the level of abstraction at which we describe the functionality of the billionaires. “Think about what happened when we moved from gate level to RTL,” I said. As often happens lately, his answer took me by surprise. “People have been talking about transaction level modeling, SystemC, C++ and so on for a long time,” he said, “but the bulk of verification is still done at RTL.” We went back and forth trying to figure out whether this was because of timing and power worries or something else, but we eventually agreed that whatever the reason, things are what they are. Maybe they’ll change one day, but what do we do in the meantime? And then it clicked. Rather than fighting with the abstraction level of the design description, how about raising the abstraction level of the verification data generated at RTL? Synthesize coverage data at the level of the verification plan, build infrastructure to track higher level transactions via smart log files, build tools to analyze protocols, and so on. Verification can continue to be done at RTL, but the analysis of the generated data should be raised to a level where it can be dissected and comprehended much more easily. But wait! Aren’t we simply shifting a highly complex hardware modeling problem to an equally challenging tool development problem? Maybe, but challenging tool development is what we like to do ;-) My friend Coverage and I suddenly felt like a billion is not such a large number anymore.

Posted in coverage driven verification, Coverage report, efficient verification, Tape-out criteria | Comments Off

Plan the Coverage, Cover the Plan

Posted by Alex Seibulescu on July 12th, 2011

These days it seems that you need to have a plan for everything. You need short, medium and long-term plans, you need backup and alternative plans, you need business and execution plans, you need a plan to eat (“I already have a dinner plan”), you need a plan to relax (“I’m working on my vacation plan”), and you even need a plan if you don’t want to do anything (“I plan to do nothing this afternoon”). Clearly, without plans the world as we know it would cease to exist, so I had to ask my friend Coverage what his thoughts were on the whole planning business. To my surprise, he got very agitated. “Planning and Coverage go hand in hand,” he said, “but they are not synonymous, and yet people often use them interchangeably. Verification needs a plan; to do proper verification, you need coverage; to do proper coverage, you need to plan for it; and once you have the plan, you need to make sure you cover it. But coverage and the plan are not one and the same.” With that, he grumbled away, leaving me dazed and confused. I made a plan to revisit the topic once the dust of excessive wisdom crashing on my head settled.

Come to think of it, it makes perfect sense. Coverage is an important (if not the most important) part of the Verification Plan, but the plan can, and most of the time does, contain other metrics (bug rate, for instance) or tasks (a set of directed tests, for example) that are tracked for particular items of the plan. On the other hand, mapping coverage results back to their corresponding items in the plan raises the level of abstraction of the generated coverage data, thereby providing better insight into the quality of verification achieved by hitting (or not hitting) the respective coverage targets.
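
Planning tools automate this mapping, but even without one the link can be kept explicit in the cover model itself. Below is a minimal, hedged sketch: each covergroup records the plan item it serves in its comment option, and the end-of-test report is grouped by plan item rather than being a flat dump. The plan item numbers and covergroup names are illustrative.

  // Minimal sketch: tying covergroups to verification plan items so results
  // can be reported per plan feature; plan item IDs and names are illustrative.
  covergroup dma_burst_cg @(posedge clk);
    option.comment = "vplan 2.1: DMA burst lengths";
    coverpoint burst_len { bins len[] = {1, 4, 8, 16}; }
  endgroup

  covergroup dma_abort_cg @(posedge clk);
    option.comment = "vplan 2.4: DMA abort handling";
    coverpoint abort_reason;
  endgroup

  dma_burst_cg dma_burst = new();
  dma_abort_cg dma_abort = new();

  // End-of-test summary: one number per plan item rather than a flat dump
  final begin
    $display("vplan 2.1 DMA bursts : %0.1f%%", dma_burst.get_inst_coverage());
    $display("vplan 2.4 DMA aborts : %0.1f%%", dma_abort.get_inst_coverage());
  end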

Separately, coming up with a good plan for the coverage model and a good plan on how to close coverage on it are equally important. Combining the right coverage methodology with the right tools can make a significant difference in verification efficiency without any sacrifice in verification quality.

Finally, even the best Verification Plan is meaningless if one does not make sure that all its items are properly covered.

I plan to go back to my friend and tell him I eventually figured out what he was trying to tell me. Just to make sure I got everything covered.

Posted in coverage driven verification, Coverage model, Coverage report, efficient verification, metric driven verification, verification planning | 3 Comments »

D(esign) A(wesome) C(hips) is coming up!

Posted by Alex Seibulescu on June 2nd, 2011

My friend Coverage and I will be in the temporary center of the Universe, a.k.a. Conversation Central, on Wednesday between 3 and 4 pm. Stop by and chat with us!

Posted in Uncategorized | Comments Off

Too Much of a Good Thing…

Posted by Alex Seibulescu on May 25th, 2011

The few of you who regularly read this blog may have wondered what happened to my friend Coverage over the past 3 months. It turns out he has been on a worldwide quest to collect wisdom from verification experts near and far. The other day I bumped into him and he was clearly troubled. After some prodding, he grudgingly admitted to his concerns. “People are amassing coverage data as if it were an inflation hedge,” he blurted. “Soon it will overwhelm them to the point where it will be difficult to extract useful information from it, and they will start ignoring it.”

I left my friend thinking he was clearly exaggerating the extent of the problem. People get pessimistic streaks at times; after all, the other day the world was supposed to end according to some. But then I realized that the temptation to trade quality for quantity is indeed great when it comes to writing and collecting coverage. Developing quality coverage targets, the kind that gives you the warm, cozy feeling of verification confidence, is by no means trivial, whereas crossing a few signals and generating massive amounts of coverage targets that may or may not be relevant is a piece of cake. Besides, storage is cheap, so why not collect it all and figure out later what to do with it? To some degree this reminds me of how taking pictures has changed since digital cameras became ubiquitous. In the old days, your film could hold only so many pictures and they needed to be printed, so a lot more thought went into releasing the shutter. These days I find myself clicking away voraciously, always thinking I will select the good pictures later, which of course never happens. Disk is cheap, right? Well yes, but when a friend comes along and I want to boast about our latest vacation, I can quickly sense the boredom setting in by the 10th version of the same picture. The essence of the vacation gets lost.

Back to our coverage problem. First, we need to make sure we create a quality coverage model, one that covers the test plan, no pun intended. Planning tools help keep track of that to some extent, but developing the coverage targets is still left to skilled DV engineers. Second, we need to stop collecting coverage on targets that have been hit over and over and over again. Tools can help again by stopping the count after a specified limit. But then there’s a third component, one that has the potential to affect our coverage collection in a more subtle way. We all want to steer the stimulus towards hitting our coverage holes, for obvious reasons. But there is a flip side to this: how about steering the stimulus away from coverage that’s been hit over and over and over again? Kind of like a camera that makes you move and take different pictures all the time. I bet my friend will want one of those on his next worldwide tour.
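
There is no standard SystemVerilog feature for a camera that forces you to move, but a crude approximation can be built from the coverage query methods the language already has. The sketch below, with illustrative names and thresholds, de-emphasizes short packets in the randomization once their coverpoint is saturated; a production flow would more likely adjust weights between regressions based on the merged coverage database.

  // Minimal sketch: a crude in-test feedback loop that stops favoring short
  // packets once their coverpoint is saturated; names and thresholds are
  // illustrative.
  class pkt;
    rand int unsigned len;
    int short_weight = 5;                  // adjusted from coverage feedback

    constraint c_len {
      len inside {[1:1500]};
      len dist { [1:64]    := short_weight,
                 [65:1500] := 1 };
    }
  endclass

  covergroup len_cg with function sample(int unsigned l);
    cp_len : coverpoint l {
      bins tiny   = {[1:64]};
      bins medium = {[65:512]};
      bins large  = {[513:1500]};
    }
  endgroup

  len_cg cov = new();
  pkt    p   = new();

  task run_some_stimulus(int n);
    repeat (n) begin
      // Once the short-packet corner is well covered, stop over-weighting it
      if (cov.cp_len.get_inst_coverage() > 90.0)
        p.short_weight = 1;
      void'(p.randomize());
      cov.sample(p.len);
      // ... drive p to the DUT ...
    end
  endtask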

Posted in Coverage convergence, coverage driven verification, Coverage model, Coverage report, Functional coverage, Scenario coverage, Stimulus generation | Comments Off

Cloud Cover

Posted by Alex Seibulescu on February 15th, 2011

After a surprisingly long period of sunny skies, the clouds have returned to the Bay Area. Now, although some say that predicting when a chip will be ready for tape-out is akin to forecasting the next storm, I have not decided to subtly shift the topic of this blog to the exciting world of meteorology. Instead, I wanted to explore the nexus of coverage, the kind my friend is so enthused with, and cloud computing. If you haven’t yet heard about cloud computing, you’re probably reading this blog by accident, but that’s OK, my friend and I are highly socially predisposed and we’re always happy to meet new people. There are many significantly better descriptions of what cloud computing really is, but I will offer my own to keep things simple. In my irrelevant opinion, cloud computing is a combination of hardware and software resources provided as a service to some end consumer. The cloud part comes from the fact that you don’t really have to know or care about where these services reside or come from, kind of like the clouds: you don’t know where they come from, where they’ll go next, or how high up they are, only whether they provide shade, rain, that sort of thing. In any case, cloud computing is part of our new reality, so I asked my friend whether he thinks there is any potential symbiosis between the lofty clouds and our daily challenges. The answer came with lightning speed and in a thunderous voice.

Designs are notoriously getting larger at a rapid pace, and that becomes uniquely alarming if you wear verification engineering shoes. One way to cope with the formidable task of verifying them is to collect all sorts of coverage data and use that as a guide for assessing verification completeness, deciding which areas to focus verification on, and so forth. Naturally, this translates into overhead in both simulation performance and disk usage, as all this voluminous data needs to be generated and stored. Although a strong case could be made for using the cloud to address the performance flavor of overhead, my friend vigorously homed in on the second aspect, storage. If your situation is like that of many other organizations in the semiconductor business, you may also have discovered that disk storage has increasingly become an important slice of your overall IT infrastructure cost. What fraction of the total pie this represents of course depends on many factors and will vary from site to site, but the bottom line is that it is a problem today, it will become a bigger problem tomorrow, and cloud computing may be one way to tackle it. Picking Amazon as an example, my friend pointed out that even the middle of the pack “m1.large” standard cloud instance comes with temporary local storage of 850GB. That’s a lot of disk space that gets included at no extra cost as part of doing business in the clouds. One could easily imagine running a regression in the cloud, collecting coverage from each test, merging it all out there and only permanently storing or downloading the merged database for further processing. The temporary data for each regression test, the biggest component of the peak disk usage, can be stored on the local “free” cloud storage and then discarded after merging.

Identifying an opportunity and taking advantage of it are of course not synonymous. Simulation vendors need to provide integrated cloud solutions and verification teams need to unleash some innovative thinking to fully take advantage of them, but the possibilities are tantalizing.  Just ask my friend. He’s seen a bright light behind the cloud coverage and is convinced that getting to the clouds is, for once, not rocket science!

Posted in cloud computing, coverage driven verification, coverage merging, Coverage report, efficient verification, Functional coverage | 2 Comments »

Test the Testbench?

Posted by Alex Seibulescu on January 13th, 2011

If you’re nostalgically inclined like me, you probably fondly remember the times when testbenches were nothing but initial blocks with assignments and forces. Alas, those days are long gone. The serenity of the static, pure Verilog testbenches has been replaced by the turbulence of the dynamic SystemVerilog ones, where objects come and go as they please. Granted, they brought revolutionary progress in usability, re-usability, scalability, portability and flexibility, but let’s not forget that we paid for all these "bilities" with a significant increase in testbench complexity. Let’s face it, verification engineers of yore had to "just" understand the design in order to do a magnificent job of verifying it, whereas their modern day counterparts have to acquire significant object oriented design skills as well as ramp up on VMM, UVM, OVM, etc. before they even attempt to write a single line of verification code. Add to that the fact that design sizes grow faster than the US National Debt and that testbenches can’t afford to be left behind, and you’ll see why the other day I popped this question to my friend: how do we know that the testbench works as we intended, or in other words, how do we test the testbench?

As you’ve probably already guessed, he had an answer handy, and yes, it did involve collecting coverage. Just like in any other verification exercise, figuring out what has not been tested is of crucial importance, and the parts of the testbench that have not been properly exercised can be identified by the right coverage measurements. Line coverage would be one way to go, but because of the many possible but unused configurations in both internal and external VIPs, it is most likely not the most practical. Conversely, capturing the interesting scenarios that the testbench is supposed to generate with some simple covergroups, and making sure that they are appropriately covered, will go a long way towards ensuring that the testbench is doing its primary job of generating comprehensive stimulus for the DUT. Low coverage in the DUT may also expose the same testbench problems, but it would take a lot more time to analyze and get to the root cause, so why not catch the problems where they’re easy to corner and identify? Besides, this exercise can be started even before the DUT is ready for showtime, with all the benefits associated with that.
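
As a minimal sketch of what such scenario coverage might look like, here is a small class wrapping a covergroup that is sampled for every generated transaction; the transaction fields and names are illustrative and not tied to any particular methodology library.

  // Minimal sketch: a scenario covergroup sampled for every generated
  // transaction, checking that the testbench really produces the intended
  // stimulus mix; the transaction fields are illustrative.
  typedef enum {READ, WRITE, RMW} op_e;

  class scenario_cov;
    covergroup scen_cg with function sample(op_e op, bit back2back, bit err);
      cp_op  : coverpoint op;
      cp_b2b : coverpoint back2back;
      cp_err : coverpoint err;
      x_scen : cross cp_op, cp_b2b, cp_err;   // the scenarios the tests claim to create
    endgroup

    function new();
      scen_cg = new();
    endfunction

    // Call from a driver or monitor callback for every generated item
    function void sample_item(op_e op, bit back2back, bit err);
      scen_cg.sample(op, back2back, err);
    endfunction
  endclass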

To wrap up, in addition to stimulus coverage, my friend recommends making it part of your New Year’s resolutions to add scenario coverage targets to your testbench. He predicts that just by doing this, 2011 will be a much better verification year than you anticipated!

Posted in coverage driven verification, Coverage model, efficient verification, Functional coverage, metric driven verification, Scenario coverage | 2 Comments »

‘Tis the Season of Sharing (in Parallel)

Posted by Alex Seibulescu on December 14th, 2010

It is once again that time of the year when I panic. What gifts, for whom, do they even need them, what if I don’t, and so on. I am naturally always on the lookout for good advice, so I asked my friend to share some wisdom. “I give everybody the gift of coverage, of course,” he replied, slightly raising his eyebrow as if to remind me to refrain from asking the obvious. “Everybody needs it even if they don’t know it yet, and if they don’t get it, there will always be a hole in their lives.” I nodded in thoughtful agreement, while at the same time wondering whether the gifts of coverage end up piling up in the equivalent of my garage somewhere, or whether people have the patience to go through the many levels of wrapping to extract their true value.

I was still thinking about this when I entered the UPS office, where I had to pick up an undelivered package, and noticed, somewhat irritated, that there was a line at the counter. Of course, it is December and shipping companies are in high gear, kind of like compute farms in the month before tape-out. They need to collect all the packages, sort them and then distribute them to the appropriate destinations. Similarly, coverage data needs to be collected, merged, processed and reports sent to the appropriate stakeholders. Just like processing the incoming packages needs to happen simultaneously at multiple locations, coverage data from the many regression runs needs to be merged in parallel to prevent the entire verification process from slowing down to a snail’s pace (no pun on snail mail maliciously intended). So while the speed with which coverage data is produced retains its critical importance, the speed with which the coverage data is collected and merged should not be overlooked, lest it become the frightening bottleneck. The key, of course, is to efficiently parallelize the process, so this season, when you share my friend’s gift, don’t forget to ask your shipping company, oops, I mean your Verification vendor, how to do it in parallel!

With that, my friend and I would like to thank you for reading this blog, wish you the happiest of holidays and hope to see you back in the New Year!

Posted in coverage driven verification, coverage merging, Coverage report, efficient verification, metric driven verification | Comments Off