Unless you’ve completely isolated yourself from the wisdom of Verification and EDA pundits, you must have heard at some point that High Level Verification and High Level Synthesis are the way of the future. This has been true for at least the past 10 years and most likely will still be true for the next 10. The value is obviously there, but there are a myriad of t’s to cross and i’s to dot before some form of high-level flow becomes a viable alternative to RTL verification. In the meantime we still need to find a way to make the current tried-and-true verification flows more efficient. So, as I always do when I grapple with existential verification topics, I paid a visit to my friend Coverage and, as always, he had an answer for me: High Level Coverage.
My friend Coverage turned into a revolutionary. We live in tumultuous times so this may not come as a surprise, but seeing my friend pacing up and down the room, threatening imaginary adversaries, was unnerving, so I had to get to the bottom of it. What could possibly turn my mild-mannered friend into a raging firebrand? Once he settled down a bit, things started to clear up. It turns out that for a while now, people have been turning a blind eye to those last few percent of coverage holes, as long as they did not belong to the elite of cover targets, the ones that always need to be properly taken care of. “Even if there’s a bug in that area, we can always come up with a workaround or fix it in software,” people would say, and who can blame them? Deadline pressures are not for the faint of heart, and 2%-3% of obscure coverage targets are not going to stand in the way of tape-out bliss, right? Well, it turns out that with the relentless increase in design size and complexity and the worrisome shortening of the verification cycle, the number of second-class coverage targets has swollen, their voice has become louder and they now threaten the verification establishment. It is ever more likely that continuing to ignore an increasing number of coverage holes will eventually lead to a silicon bug for which no quick ECO or software fix will be available, and disaster will strike.
There are many of them these days. They used to be multi-millionaires, but you know how these things go. Now, before you get too excited about the latest scoop on the yacht-sailing, private-jet-flying, Davos-skiing crowd, remember that I don’t typically hang out with Larry, Steve or Mark; I hang out with my friend Coverage. So this is not about the folks on the cover of Forbes Magazine, it’s about our favorite chips and the billion transistors they pack these days. More precisely, it’s about verifying that a billion switches turning on and off somehow harmoniously join forces to perform billions of instructions per second, transfer the proverbial Library of Congress across the country in minutes, or make your picture on the iPad look better than reality. How do we make sure that these ever-growing billionaires do what they’re supposed to without collapsing under the enormous amount of verification data they generate?
These days it seems that you need to have a plan for everything. You need short-, medium- and long-term plans, you need backup and alternative plans, you need business and execution plans, you need a plan to eat (“I already have a dinner plan”), you need a plan to relax (“I’m working on my vacation plan”), and you even need a plan if you don’t want to do anything (“I plan to do nothing this afternoon”). Clearly, without plans, the world as we know it would cease to exist, so I had to ask my friend Coverage what his thoughts were on the whole planning business. To my surprise, he got very agitated. “Planning and Coverage go hand in hand,” he said, “but they are not synonymous, and yet people often use them interchangeably. Verification needs a plan; to do proper verification, you need coverage; to do proper coverage, you need to plan for it; and once you have the plan, you need to make sure you cover it. But coverage and the plan are not one and the same.” With that, he grumbled away, leaving me dazed and confused. I made a plan to revisit the topic once the dust of excessive wisdom crashing on my head settled.
My friend Coverage and I will be in the temporary center of the Universe, a.k.a. Conversation Central, on Wednesday between 3 and 4pm; stop by and chat with us!
The few of you who regularly read this blog may have wondered what happened to my friend Coverage over the past 3 months. It turns out he has been on a worldwide quest to collect wisdom from verification experts near and far. The other day I bumped into him and he was clearly troubled. After some prodding, he grudgingly admitted to his concerns. “People are amassing coverage data as if it were an inflation hedge,” he blurted. “Soon it will overwhelm them to the point where it will be difficult to extract useful information from it, and they will start ignoring it.”
After a surprisingly long period of sunny skies, the clouds have returned to the Bay Area. Now, although some say that predicting when a chip will be ready for tape-out is akin to forecasting the next storm, I have not decided to subtly shift the topic of this blog to the exciting world of meteorology. Instead, I wanted to explore the nexus of coverage, the kind my friend is so enthused with, and cloud computing. If you haven’t yet heard about cloud computing, you’re probably reading this blog by accident, but that’s ok, my friend and I are highly socially predisposed and we’re always happy to meet new people. There are many significantly better descriptions of what cloud computing really is, but I will offer my own to keep things simple. In my irrelevant opinion, cloud computing is a combination of hardware and software resources provided as a service to some end consumer. The cloud part comes from the fact that you don’t really have to know or care about where these services reside or come from, kind of like clouds: you don’t know where they come from, where they’ll go next, or how high up they are, only whether they provide shade, rain, that sort of thing. In any case, cloud computing is part of our new reality, so I asked my friend whether he thinks there is any potential symbiosis between the lofty clouds and our daily challenges. The answer came with lightning speed and in a thunderous voice.
If you’re nostalgically inclined like me, you probably fondly remember the times when testbenches were nothing but initial blocks with assignments and forces. Alas, those days are long gone. The serenity of the static, pure-Verilog testbenches has been replaced by the turbulence of the dynamic SystemVerilog ones, where objects come and go as they please. Granted, they brought revolutionary progress in usability, re-usability, scalability, portability and flexibility, but let’s not forget that we paid for all these “bilities” with a significant increase in testbench complexity. Let’s face it, verification engineers of yore had to “just” understand the design in order to do a magnificent job of verifying it, whereas their modern-day counterparts have to acquire significant object-oriented design skills as well as ramp up on VMM, UVM, OVM, etc. before they even attempt to write a single line of verification code. Add to that the fact that design sizes grow faster than the US National Debt, and that testbenches can’t afford to be left behind, and you’ll see why the other day I popped this question to my friend: how do we know that the testbench works as we intended or, in other words, how do we test the testbench?