Coverage is My Friend
  • About

    "Coverage is by now pervasive in most verification flows but has in the modest opinion of this blogger, yet to reach its full potential. Although I have spent most of my 18 years in EDA (ouch!) on the R&D side, I have always been a good listener to our customers' concerns. My hope is that this blog will be an informal venue for all of us to explore how to push the benefits of Coverage and related methodologies to new levels" —Alex Seibulescu


Convergence Driven Coverage Modeling

Posted by Alex Seibulescu on January 10th, 2012

"Say what??" my friend shot back when I asked him if he had heard about the latest trend in coverage modeling, called CDCM or Convergence Driven Coverage Modeling. I thoroughly enjoyed myself watching him scour the most obscure corners of his memory in search of something resembling my question. After all, there weren't many coverage-related topics he wasn't aware of, let alone "the latest trend" in coverage models. Of course he had no chance of finding anything, as I had just made up the term to throw him off his usual cool. However, once I started explaining what I had in mind, he got really enthusiastic.

Most verification methodology gurus will tell you that your coverage model needs to capture the higher-level functional features of your chip, so that once you hit your coverage you can be fairly certain you have verified the corresponding features. They are, of course, right; the missing piece, however, is how to map those features into matching cover targets. One can easily imagine that there are multiple equivalent ways to capture a desired piece of functionality with a cover goal, so the fundamental premise of Convergence Driven Coverage Modeling is simply this: among all the equivalent cover models for a given feature, choose the one that is easiest for a Verification Engineer to target.

As we all know, constrained random stimulus generation will get us only so far in terms of hitting all the coverage goals. After that, it is up to the DV ladies and gentlemen to figure out how the stimulus can be manipulated to hit those oh-so-frustrating remaining coverage holes. As an example, take an arbiter whose job is to arbitrate among competing requests for a set of resources, making sure that nobody starves, that interrupts and errors are adequately addressed, that performance doesn't take a hit, and so on. Ultimately, one or more state machines will do the heavy lifting, and one could develop a cover model that captures all reasonable combinations of their states. Hitting the entire cover model will likely ensure that the arbiter performs as required. However, if some of the state combinations are not hit, as will likely be the case, figuring out how to sequence the various requests, error injectors and interrupts from the verification agents in order to generate a particular missing scenario will be a daunting task.

Instead, a serious attempt should be made to "push out" the coverage model as close to the interface of the block as possible and create an equivalent one that is easier to comprehend for somebody who doesn't have intimate knowledge of the implementation. To do this, the designer's help may be required, but instead of asking for it in the heat of the verification closure battle, when every minute to tape-out counts, it can be enlisted upfront as part of the verification planning process. Coverage closure is back in the hands of the Verification Engineer, and everybody is happy.
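
To make the contrast concrete, here is a minimal sketch of two equivalent cover models for a hypothetical four-requester arbiter. All of the names (arb_cov_sketch, req, err_inject, irq, curr_state, pending_cnt) are invented for illustration and are not taken from the post or any particular design; the point is only that the second covergroup is phrased in terms of knobs the verification agents can drive directly, while the first is phrased in terms of implementation internals.

```systemverilog
// Sketch only: a hypothetical 4-requester arbiter with made-up signal names.
module arb_cov_sketch (
  input  logic       clk,
  input  logic [3:0] req,          // one request line per agent (testbench-driven)
  input  logic       err_inject,   // error stimulus from a verification agent
  input  logic       irq,          // interrupt stimulus from a verification agent
  input  logic [2:0] curr_state,   // internal arbiter FSM state (implementation detail)
  input  logic [2:0] pending_cnt   // internal count of queued requests
);

  // Implementation-centric model: crosses internal FSM state with queue depth.
  // Complete, but its holes are hard to target without knowing the design.
  covergroup arb_state_cg @(posedge clk);
    cp_state : coverpoint curr_state;
    cp_pend  : coverpoint pending_cnt {
      bins none = {0};
      bins some = {[1:3]};
      bins full = {[4:7]};
    }
    x_state_pend : cross cp_state, cp_pend;
  endgroup

  // Convergence-driven equivalent: pushed out to the block interface and expressed
  // as request/error/interrupt scenarios the testbench can sequence directly.
  covergroup arb_if_cg @(posedge clk);
    cp_nreq : coverpoint $countones(req) {
      bins idle    = {0};
      bins single  = {1};
      bins contend = {[2:3]};
      bins all_req = {4};
    }
    cp_err : coverpoint err_inject;
    cp_irq : coverpoint irq;
    x_scenario : cross cp_nreq, cp_err, cp_irq;
  endgroup

  arb_state_cg state_cg = new();
  arb_if_cg    if_cg    = new();

endmodule
```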

After finishing the discussion, I invited my friend to a round of mini-golf. The deciding hole was one where you either had to get the ball into a long tunnel, after which it always ended up in the cup, or you could take many more strokes to go around the obstacle and eventually reach the hole. I chose the tunnel approach and won. Q.E.D.
