“Say what??” my friend shot back when I asked him if he had heard about the latest trend in coverage modeling called CDCM, or Convergence Driven Coverage Modeling. I thoroughly enjoyed myself watching him scour the most obscure corners of his memory in search of something resembling my question. After all, there weren’t too many coverage-related topics he wasn’t aware of, let alone “the latest trend” in coverage models. Of course he had no chance of finding anything, as I had just made up the term to throw him off his usual cool. However, once I started explaining what I had in mind, he got really enthusiastic.
Unless you’ve completely isolated yourself from the wisdom of Verification and EDA pundits, you must have heard at some point that High Level Verification and High Level Synthesis are the way of the future. This has been true for at least the past 10 years and most likely will still be true for the next 10. The value is obviously there, but there are a myriad of t’s to cross and i’s to dot before some form of high-level flow becomes a viable alternative to RTL verification. In the meantime we still need to find a way to make the current tried-and-true verification flows more efficient. So, as I always do when I grapple with existential verification topics, I paid a visit to my friend Coverage, and as always, he had an answer for me: High Level Coverage.
These days it seems that you need to have a plan for everything. You need short-, medium- and long-term plans, you need backup and alternative plans, you need business and execution plans, you need a plan to eat (“I already have a dinner plan”), you need a plan to relax (“I’m working on my vacation plan”), and you even need a plan if you don’t want to do anything (“I plan to do nothing this afternoon”). Clearly, without plans, the world as we know it would cease to exist, so I had to ask my friend Coverage what his thoughts were on the whole planning business. To my surprise he got very agitated. “Planning and Coverage go hand in hand,” he said, “but they are not synonymous, and yet people often use them interchangeably. Verification needs a plan; to do proper verification, you need coverage; to do proper coverage, you need to plan for it; and once you have the plan, you need to make sure you cover it. But coverage and the plan are not one and the same.” With that, he grumbled away, leaving me dazed and confused. I made a plan to revisit the topic once the dust of excessive wisdom crashing on my head settled.
If you’re nostalgically inclined like me, you probably fondly remember the times when testbenches were nothing but initial blocks with assignments and forces. Alas, those days are long gone. The serenity of the static, pure Verilog testbenches has been replaced by the turbulence of the dynamic SystemVerilog ones, where objects come and go as they please. Granted, they brought revolutionary progress in usability, reusability, scalability, portability and flexibility, but let’s not forget that we paid for all these “bilities” with a significant increase in testbench complexity. Let’s face it, verification engineers of yore had to “just” understand the design in order to do a magnificent job of verifying it, whereas their modern-day counterparts have to acquire significant object-oriented design skills as well as ramp up on VMM, UVM, OVM, etc. before they even attempt to write a single line of verification code. Add to that the fact that design sizes grow faster than the US National Debt and that testbenches can’t afford to be left behind, and you’ll see why the other day I popped this question to my friend: how do we know that the testbench works as we intended, or in other words, how do we test the testbench?
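To make the contrast concrete, here is a minimal sketch (all module, class and signal names are illustrative, not taken from any real project): the first module is the kind of static, initial-block testbench the engineers of yore wrote, while the second is a small class-based SystemVerilog one in which an embedded covergroup samples the stimulus the generator actually produced. Coverage on the stimulus itself is one simple way to start “testing the testbench”: a hole there means the randomization never did what we intended, no matter how healthy the design looks.

```systemverilog
// Yesterday: a pure Verilog testbench, nothing but an initial block
// with hand-picked assignments (and the occasional force).
module tb_old;
  reg clk = 0;
  reg rst_n;
  reg [7:0] data;

  always #5 clk = ~clk;

  initial begin
    rst_n = 0; data = 8'h00;
    #20 rst_n = 1;
    #10 data  = 8'hA5;   // every vector chosen by hand
    #10 data  = 8'h3C;
    #50 $finish;
  end
endmodule

// Today: a class-based SystemVerilog testbench. The embedded covergroup
// measures what the constrained-random generator actually produced,
// i.e. it covers the testbench rather than the design.
class packet;
  rand bit [7:0] data;
  rand bit       write;
  constraint c_bias { write dist {1 := 3, 0 := 1}; }

  covergroup stim_cg;
    coverpoint data  { bins zero = {8'h00}; bins ones = {8'hFF}; bins other = default; }
    coverpoint write;
  endgroup

  function new();
    stim_cg = new();
  endfunction
endclass

module tb_new;
  initial begin
    packet pkt = new();
    repeat (100) begin
      void'(pkt.randomize());
      pkt.stim_cg.sample();   // record the stimulus we really drove
      // ... drive pkt onto the DUT interface here ...
    end
    $display("stimulus coverage = %0.1f%%", pkt.stim_cg.get_coverage());
  end
endmodule
```

In a real flow this role is usually played by the functional coverage built into the UVM/VMM/OVM components themselves, but the idea is the same: the testbench needs metrics of its own, not just metrics on the design.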
It is once again that time of the year when I panic. What gifts, for whom, do they even need them, what if I don’t, etc. I am naturally always on the lookout for some good advice, and so I asked my friend to share some wisdom. “I give everybody the gift of coverage, of course,” he replied, slightly raising his eyebrow as if to remind me to refrain from asking the obvious. “Everybody needs it even if they don’t know it yet, and if they don’t get it, there will always be a hole in their lives.” I nodded in thoughtful agreement while at the same time wondering whether the gifts of coverage end up piling up in the equivalent of my garage somewhere, or whether people have the patience to go through the many layers of wrapping to extract their true value. I was still thinking about this when I entered the UPS office where I had to pick up an undelivered package, and I noticed, somewhat irritated, that there was a line at the counter. Of course, it is December and shipping companies are in high gear, kind of like computers in the month before tape-out. They need to collect all the packages, sort them and then distribute them to the appropriate destinations. Similarly, coverage data needs to be collected, merged, processed, and reports sent to the appropriate stakeholders. Just as processing the incoming packages needs to happen simultaneously at multiple locations, coverage data from the many regression runs needs to be merged in parallel to prevent the entire verification process from slowing down to a snail’s pace (no pun on snail mail maliciously intended). So while the speed with which coverage data is produced retains its critical importance, the speed with which the coverage data is collected and merged should not be overlooked, lest it become the frightening bottleneck. The key, of course, is to parallelize the process efficiently, so this season, when you share my friend’s gift, don’t forget to ask your shipping company, oops, I mean your Verification vendor, how to do it in parallel!
Metric Driven Verification, Coverage Driven Verification: in one form or another, everybody is talking about the same thing, so I thought about asking my friend how he likes being in the driver’s seat. With a sheepish grin he replied that it depends on whether he gets to drive a Trabant or a BMW. Both German cars, mind you, but… So that got me thinking. Lately, we have all been talking up methodologies, verification plans, intelligent testbenches, guiding metrics, assertion densities, etc., etc. Now these are all great topics and we definitely need to keep nurturing them; however, one rather important detail seems to have slipped out of the conversation, and that is the good ol’ workhorse of every verification flow, the simulator. For no matter how carefully we design our testing strategies, how cleverly we place our coverage targets and how many assertions we sprinkle around the design, we still rely on the simulator to get us to our destination. Whether we define that destination as squashing every bug in sight or meeting ever-shrinking delivery timelines, it will be the reliability and performance of the simulator that save the day. From his great driver’s vantage point, my friend confessed that while having a GPS to tell him where he is, an intelligent radio to alert him to traffic jams, and an on-board computer to report instant gas mileage are all great assets, once he gets on the Autobahn and the pedal hits the metal, there is nothing like driving the fastest car on the road. So much for car advice.