“Say what??” my friend shot back when I asked him if he’d heard about the latest trend in coverage modeling called CDCM, or Convergence Driven Coverage Modeling. I thoroughly enjoyed watching him scour the most obscure corners of his memory in search of something resembling my question. After all, there weren’t many coverage-related topics he wasn’t aware of, let alone “the latest trend” in coverage models. Of course, he had no chance of finding anything, as I had just made up the term to throw him off his usual cool. However, once I started explaining what I had in mind, he got really enthusiastic.
Unless you’ve completely isolated yourself from the wisdom of Verification and EDA pundits, you must have heard at some point that High Level Verification and High Level Synthesis are the way of the future. This has been true for at least the past 10 years and will most likely still be true for the next 10. The value is obviously there, but there are a myriad of t’s to cross and i’s to dot before some form of high-level flow becomes a viable alternative to RTL verification. In the meantime, we still need to find a way to make the current tried-and-true verification flows more efficient. So, as I always do when I grapple with existential verification topics, I paid a visit to my friend Coverage and, as always, he had an answer for me: High Level Coverage.
My friend Coverage turned into a revolutionary. We live in tumultuous times so this may not come as a surprise, but seeing my friend pacing up and down the room, threatening imaginary adversaries, was unnerving, so I had to get to the bottom of it. What could possibly turn my mild-mannered friend into a raging firebrand? Once he settled down a bit, things started to clear up. It turns out that for a while now, people have been turning a blind eye to those last few percent of coverage holes, as long as they did not belong to the elite of cover targets, those that always need to be properly taken care of. “Even if there’s a bug in that area, we can always come up with a workaround or fix it in software,” people would say, and who can blame them? Deadline pressures are not for the faint of heart, and 2%-3% of obscure coverage targets are not going to stand in the way of tape-out bliss, right? Well, it turns out that with the relentless increase in design size and complexity and the worrisome shortening of the verification cycle, the number of second-class coverage targets has swollen, their voice has become louder and they now threaten the verification establishment. It is ever more likely that continuing to ignore an increasing number of coverage holes will eventually lead to a silicon bug for which no quick ECO or software fix will be available, and disaster will strike.
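To get a feel for how quickly that second-class population grows, consider this hedged SystemVerilog sketch (every module and signal name below is invented for illustration): crossing just three modest coverpoints already produces 4096 bins, and only a handful of them will ever make anyone’s elite list.

```systemverilog
module cov_sketch (
  input logic       clk,
  input logic [2:0] mode,       // 8 modes
  input logic [3:0] burst_len,  // 16 burst lengths
  input logic [4:0] err_code    // 32 error codes
);
  // One covergroup, three small coverpoints, one cross.
  covergroup holes_cg @(posedge clk);
    cp_mode  : coverpoint mode;
    cp_burst : coverpoint burst_len;
    cp_err   : coverpoint err_code;
    x_all    : cross cp_mode, cp_burst, cp_err;  // 8 * 16 * 32 = 4096 bins
  endgroup

  holes_cg cg = new();  // only a handful of those 4096 will ever be "elite"
endmodule
```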
These days it seems that you need to have a plan for everything. You need short-, medium- and long-term plans, you need backup and alternative plans, you need business and execution plans, you need a plan to eat (“I already have a dinner plan”), you need a plan to relax (“I’m working on my vacation plan”), and you even need a plan if you don’t want to do anything (“I plan to do nothing this afternoon”). Clearly, without plans, the world as we know it would cease to exist, so I had to ask my friend Coverage what his thoughts were on the whole planning business. To my surprise, he got very agitated. “Planning and Coverage go hand in hand,” he said, “but they are not synonymous, and yet people often use them interchangeably. Verification needs a plan; to do proper verification, you need coverage; to do proper coverage, you need to plan for it; and once you have the plan, you need to make sure you cover it. But coverage and the plan are not one and the same.” With that, he grumbled away, leaving me dazed and confused. I made a plan to revisit the topic once the dust of excessive wisdom crashing on my head settled.
The few of you who regularly read this blog may have wondered what happened to my friend Coverage over the past 3 months. It turns out he has been on a worldwide quest to collect wisdom from verification experts near and far. The other day I bumped into him and he was clearly troubled. After some prodding, he grudgingly admitted to his concerns. “People are amassing coverage data as if it were an inflation hedge,” he blurted. “Soon it will overwhelm them to the point where it will be difficult to extract useful information from it, and they will start ignoring it.”
If you’re nostalgically inclined like me, you probably fondly remember the times when testbenches were nothing but initial blocks with assignments and forces. Alas, those days are long gone. The serenity of the static, pure Verilog testbenches has been replaced by the turbulence of the dynamic SystemVerilog ones, where objects come and go as they please. Granted, they brought revolutionary progress in usability, reusability, scalability, portability and flexibility, but let’s not forget that we paid for all these “bilities” with a significant increase in testbench complexity. Let’s face it, verification engineers of yore had to “just” understand the design in order to do a magnificent job of verifying it, whereas their modern-day counterparts have to acquire significant object-oriented design skills, as well as ramp up on VMM, UVM, OVM, etc., before they even attempt to write a single line of verification code. Add to that the fact that design sizes grow faster than the US National Debt, and that testbenches can’t afford to be left behind, and you’ll see why the other day I popped this question to my friend: how do we know that the testbench works as we intended? In other words, how do we test the testbench?
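For the younger readers, here is a minimal sketch of such a testbench of yore, using nothing beyond plain-Verilog constructs; the dut module, its ports and the internal_state register it pokes are all hypothetical, not taken from any real design.

```systemverilog
// A testbench "of yore": one module, one initial block, plain
// assignments and a force. No classes, no methodology library in sight.
module tb;
  reg        clk;
  reg        rst_n;
  reg  [7:0] data_in;
  wire [7:0] data_out;

  dut u_dut (.clk(clk), .rst_n(rst_n),
             .data_in(data_in), .data_out(data_out));

  always #5 clk = ~clk;                  // free-running clock

  initial begin
    clk = 0; rst_n = 0; data_in = 8'h00;
    #20 rst_n = 1;                       // release reset
    #10 data_in = 8'hA5;                 // one directed stimulus
    force u_dut.internal_state = 8'hFF;  // poke an internal node directly
    #10 release u_dut.internal_state;
    #100 $finish;
  end
endmodule
```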
The other day I listened in while my friend Coverage told the neighborhood kids the Thanksgiving story. It went something like this. Once upon a time, the verification engineers in the Old World design houses were not free to practice verification the way they wanted. Instead, they had to follow a strict directed-test methodology imposed on them by long-standing traditions. One day a group of them gathered a few of their testbenches, boarded a chartered bus they called “Randflower” and left to join a new company where they could practice their own verification methodology, rooted in the principles of freedom. From then on, verification engineers would be free to write coverage targets based on verification plans written in a language that everybody could understand, and even testbench variables would be free to take whatever values they wanted, as long as they followed some constraints. However, the road to success was not to be an easy one. They had to work long hours and come up with new methodologies, and with tools to take advantage of them. The long nights and weekends of hard work soon started taking their toll. Many of them quit and moved to the software industry, and those who stayed grew weak with frustration and exhaustion. But then one day a miracle happened. The indigenous designers in the company, who had been working there for many years, gathered together and shared their knowledge about the design with the verification engineers! This gave them the strength required to close on the remaining coverage targets, and finally the fruits of their hard work were ready to be harvested. The verification engineers had a huge Tape-Out Feast, where all the designers were invited to jointly celebrate the Turnkey Verification Environment they had developed and to thank their design counterparts for their invaluable help in their hour of need. They vowed that from then on they would write testbenches that could be easily shared among many verification groups, and that they would share methodologies and best practices for the benefit of the entire verification community.
Halloween and Elections just zoomed by, so naturally they were the focal point of the conversation I had the other day with my friend Coverage. First, I asked him whether he applies his expertise to teach kids how to strategize the candy collection process. I figured trick-or-treating has enough similarities with hunting for bugs: the candies are the bugs, the kids are the verification engineers (pardon the analogy) and the kids want to collect as many candies as possible. Since not everybody in the neighborhood may be inclined to treat, the kids need to come up with some good coverage points to make sure they hit the houses that are most generous with their candy. Pretty straightforward, I thought; however, the frown on my friend’s face was a clear indication that I didn’t exactly hit the nail on the head. “Verification,” he said, “is a marathon, not a sprint. It’s not about going out one night and hitting the bug jackpot.” He went on to explain that it is more as if Halloween lasted for many months, and although the goal would still be to collect as many candies as you can before the time is up, scaling the process down to a couple of hours won’t paint the correct picture, nor will it teach us which paint brushes to use. Ideally, he reasoned, one would design and apply coverage-based candy collection in such a way that it not only maximizes the total quantity of candies but also provides a relatively constant stream of candies over the life of the collection process. Bug overload and sugar overload are nasty beasts to tame, so there’s no need to rush the coverage-driven attack plan. When to start looking at coverage is something that is sadly often overlooked, but important nevertheless in the quest for an efficient verification strategy. After all, knocking on a few neighbors’ doors will provide enough candies to get your kids started and, similarly, finding the first load of bugs does not require collecting coverage numbers.
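For the analogously minded, those candy coverage points might look something like the playful SystemVerilog sketch below; every name in it (candy_pkg, candy_txn, house_id, candy_yield) is invented purely for the analogy.

```systemverilog
package candy_pkg;
  // A trick-or-treat "transaction": one visit to one house.
  class candy_txn;
    rand bit [5:0] house_id;     // which door we knocked on
    rand bit [3:0] candy_yield;  // candies received (0 = trick, no treat)
  endclass

  covergroup candy_cg with function sample(candy_txn t);
    cp_house : coverpoint t.house_id;   // have we visited every house?
    cp_yield : coverpoint t.candy_yield {
      bins nothing  = {0};
      bins modest   = {[1:4]};
      bins generous = {[5:15]};         // the houses worth revisiting
    }
    x_map : cross cp_house, cp_yield;   // who gives what, all season long
  endgroup
endpackage
```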
When my friend first muttered these words, I thought he had decided to leave the rewarding world of verification and become a political pundit. I was dismayed, because I don’t like the term pundit; it rhymes with bandit. Ever wondered why we have verification gurus while they have political pundits? In any case, it turned out it was about a far less controversial kind of stimulus package, the one you feed to your design to tickle its inputs and make its transistors go crazy. Phew!
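If you want to see such a stimulus package in the flesh, here is a minimal constrained-random sketch; the class, field and module names (stim_pkt, addr, data, is_write, demo) are all made up for illustration.

```systemverilog
class stim_pkt;
  rand bit        is_write;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  // Free to go crazy -- within the constraints.
  constraint legal_addr { addr inside {[8'h10 : 8'hEF]}; }
endclass

module demo;
  initial begin
    stim_pkt p = new();
    repeat (4) begin
      if (!p.randomize()) $error("randomization failed");
      $display("write=%0b addr=%02h data=%08h", p.is_write, p.addr, p.data);
    end
  end
endmodule
```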
You may remember the suggestive metaphor of a butterfly flapping its wings in California and causing a tornado on the other side of the globe. No, I am not attempting to apply chaos theory to coverage (although… ;-)); I am merely picking up where I left off last time and making the case that the various parts that collectively define the coverage problem are tightly connected, and that the decisions we make for each part will inevitably have a profound effect on the others. Let’s take a look at some of these connections and the potential pitfalls if we ignore them.