“Say what??” my friend shot back when I asked him if he had heard about the latest trend in coverage modeling called CDCM, or Convergence Driven Coverage Modeling. I thoroughly enjoyed watching him scour the most obscure corners of his memory in search of something resembling my question. After all, there weren’t too many coverage-related topics he wasn’t aware of, let alone “the latest trend” in coverage models. Of course, he had no chance of finding anything, as I had just made up the term to throw him off his usual cool. However, once I started explaining what I had in mind, he got really enthusiastic.
Unless you’ve completely isolated yourself from the wisdom of Verification and EDA pundits, you must have heard at some point that High Level Verification and High Level Synthesis are the way of the future. This has been true for at least the past 10 years and will most likely still be true for the next 10. The value is obviously there, but there is a myriad of t’s to cross and i’s to dot before some form of high-level flow becomes a viable alternative to RTL verification. In the meantime, we still need to find a way to make the current tried-and-true verification flows more efficient. So, as I always do when I grapple with existential verification topics, I paid a visit to my friend Coverage and, as always, he had an answer for me: High Level Coverage.
There are many of them these days. They used to be multi-millionaires but you know how these things go. Now, before you get too excited to find out the latest scoop on the yacht-sailing, private-jet-flying, Davos-skiing crowd, remember that I don’t typically hang out with Larry, Steve or Mark; I hang out with my friend Coverage. So this is not about the folks on the cover of Forbes Magazine, it’s about our favorite chips and the billion transistors they pack these days. More precisely, it’s about verifying that a billion switches, turning on and off, somehow harmoniously join forces to perform billions of instructions per second, transfer the proverbial Library of Congress across the country in minutes, or make your picture on the iPad look better than reality. How do we make sure that these ever-growing billionaires do what they’re supposed to without collapsing under the enormous amount of verification data they generate?
These days it seems that you need to have a plan for everything. You need short-, medium- and long-term plans, you need backup and alternative plans, you need business and execution plans, you need a plan to eat (“I already have a dinner plan”), you need a plan to relax (“I’m working on my vacation plan”), and you even need a plan if you don’t want to do anything (“I plan to do nothing this afternoon”). Clearly, without plans, the world as we know it would cease to exist, so I had to ask my friend Coverage what his thoughts were on the whole planning business. To my surprise, he got very agitated. “Planning and Coverage go hand in hand,” he said, “but they are not synonymous, and yet people often use them interchangeably. Verification needs a plan; to do proper verification, you need coverage; to do proper coverage, you need to plan for it; and once you have the plan, you need to make sure you cover it. But coverage and the plan are not one and the same.” With that, he grumbled away, leaving me dazed and confused. I made a plan to revisit the topic once the dust of excessive wisdom crashing on my head settled.
After a surprisingly long period of sunny skies, the clouds have returned to the Bay Area. Now, although some say that predicting when a chip will be ready for tape-out is akin to forecasting the next storm, I have not decided to subtly shift the topic of this blog to the exciting world of meteorology. Instead, I wanted to explore the nexus of coverage, the kind my friend is so enthused with, and cloud computing. If you haven’t yet heard about cloud computing, you’re probably reading this blog by accident, but that’s OK; my friend and I are highly socially predisposed and we’re always happy to meet new people. There are many significantly better descriptions of what cloud computing really is, but I will offer my own to keep things simple. According to my irrelevant opinion, cloud computing is a combination of hardware and software resources provided as a service to some end consumer. The cloud part comes from the fact that you don’t really have to know or care about where these services reside or come from, kind of like clouds: you don’t know where they come from, where they’ll go next, or how high up they are, only whether they provide shade, rain, that sort of thing. In any case, cloud computing is part of our new reality, so I asked my friend whether he thinks there is any potential symbiosis between the lofty clouds and our daily challenges. The answer came with lightning speed and in a thunderous voice.
If you’re nostalgically inclined like me, you probably fondly remember the times when testbenches were nothing but initial blocks with assignments and forces. Alas, those days are long gone. The serenity of the static, pure-Verilog testbenches has been replaced by the turbulence of the dynamic SystemVerilog ones, where objects come and go as they please. Granted, they brought revolutionary progress in usability, re-usability, scalability, portability and flexibility, but let’s not forget that we paid for all these “bilities” with a significant increase in testbench complexity. Let’s face it: verification engineers of yore had to “just” understand the design in order to do a magnificent job at verifying it, whereas their modern-day counterparts have to acquire significant object-oriented design skills as well as ramp up on VMM, UVM, OVM, etc., before they even attempt to write a single line of verification code. Add to that the fact that design sizes grow faster than the US National Debt, and that testbenches can’t afford to be left behind, and you’ll see why the other day I popped this question to my friend: how do we know that the testbench works as we intended? In other words, how do we test the testbench?
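My friend never spelled out his answer, so treat this as my own sketch of one well-known approach: deliberate bug injection. If you plant a known defect in the design under test and your checker still reports a clean run, the testbench itself is broken. A toy Python model (all names here are hypothetical stand-ins, not any real framework’s API):

```python
# Toy model of "testing the testbench" by bug injection:
# a stand-in DUT, a stand-in checker, and a knob to plant a known defect.

def adder(a, b, inject_bug=False):
    """Stand-in for the DUT; inject_bug models a deliberately planted defect."""
    return a + b + (1 if inject_bug else 0)

def scoreboard(a, b, result):
    """Stand-in for the testbench checker: compares DUT output to a reference model."""
    return result == a + b

stimuli = [(1, 2), (5, 7), (0, 0)]

# Normal run: the checker should pass on every transaction.
clean = all(scoreboard(a, b, adder(a, b)) for a, b in stimuli)

# Bug-injected run: a healthy checker must flag at least one mismatch.
# If this still comes back all-passing, the testbench is asleep at the wheel.
buggy = all(scoreboard(a, b, adder(a, b, inject_bug=True)) for a, b in stimuli)

print(clean, buggy)  # True False
```

The same idea scales up: mutate the RTL (or the stimulus) in a controlled way, rerun, and confirm the checkers and coverage actually react.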
It is once again that time of the year when I panic. What gifts, for whom, do they even need them, what if I don’t, etc. I am naturally always on the lookout for some good advice, and so I asked my friend to share some wisdom. “I give everybody the gift of coverage, of course,” he replied, slightly raising his eyebrow as if to remind me to refrain from asking the obvious. “Everybody needs it even if they don’t know it yet, and if they don’t get it, there will always be a hole in their lives.” I nodded in thoughtful agreement while at the same time wondering whether the gifts of coverage end up piling up in the equivalent of my garage somewhere, or whether people have the patience to go through the many levels of wrapping to extract their true value. I was still thinking about this when I entered the UPS office, where I had to pick up an undelivered package, and noticed, somewhat irritated, that there was a line at the counter. Of course, it is December and shipping companies are in high gear, kind of like computers in the month before tape-out. They need to collect all the packages, sort them, and then distribute them to the appropriate destinations. Similarly, coverage data needs to be collected, merged and processed, and reports sent to the appropriate stakeholders. Just as processing the incoming packages needs to be done simultaneously at multiple locations, coverage data from the many regression runs needs to be merged in parallel to prevent the entire verification process from slowing down to a snail’s pace (no pun on snail mail maliciously intended). So while the speed with which coverage data is produced retains its critical importance, the speed with which the coverage data is collected and merged should not be overlooked, lest it become the frightening bottleneck. The key, of course, is to efficiently parallelize the process, so this season, when you share my friend’s gift, don’t forget to ask your shipping company, whoops, I mean Verification vendor, how to do it in parallel!
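The parallel merge I’m hinting at can be pictured as a tree reduction over per-run coverage databases: merge pairs of runs concurrently, then pairs of the results, and so on, for log2(N) rounds instead of N-1 serial merges. A minimal sketch, assuming a per-run “database” is nothing more than a map from coverage bin to hit count (a deliberate toy; real coverage databases are far richer):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def merge_two(a, b):
    """Combine two per-run coverage databases by summing hit counts per bin."""
    merged = Counter(a)
    merged.update(b)
    return dict(merged)

def parallel_merge(runs, pool):
    """Tree-reduce the list of runs: each round merges pairs concurrently."""
    while len(runs) > 1:
        pairs = [(runs[i], runs[i + 1]) for i in range(0, len(runs) - 1, 2)]
        merged = list(pool.map(lambda p: merge_two(*p), pairs))
        if len(runs) % 2:          # odd run out carries over to the next round
            merged.append(runs[-1])
        runs = merged
    return runs[0]

# Toy per-run coverage data: bin name -> hit count.
runs = [
    {"fifo_full": 1, "fifo_empty": 3},
    {"fifo_full": 2, "retry": 1},
    {"fifo_empty": 1, "retry": 4},
]
with ThreadPoolExecutor() as pool:
    total = parallel_merge(runs, pool)
print(total)  # fifo_full hit 3 times, fifo_empty 4, retry 5
```

In a real flow the workers would be separate jobs on a compute farm and each merge would combine on-disk coverage databases, but the shape of the problem is the same: the merge operation is associative, so the reduction tree can be as wide as your farm allows.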
The other day I listened in while my friend Coverage told the neighborhood kids the Thanksgiving story. It went something like this. Once upon a time, the verification engineers in the Old World design houses were not free to practice verification the way they wanted. Instead, they had to follow a strict directed-test methodology imposed on them by long-standing traditions. One day, a group of them gathered a few of their testbenches, boarded a chartered bus they called “Randflower” and left to join a new company where they could practice their own verification methodology rooted in the principles of freedom. From then on, verification engineers would be free to write coverage targets based on verification plans written in a language that everybody could understand, and even testbench variables would be free to take whatever values they wanted as long as they followed some constraints. However, the road to success was not to be an easy one. They had to work long hours and come up with new methodologies, and tools to take advantage of them. The long nights and weekends of hard work soon started taking their toll. Many of them quit and moved to the software industry, and those who stayed turned weak with frustration and exhaustion. But then one day a miracle happened. The indigenous designers in the company, who had been working there for many years, gathered together and shared their knowledge about the design with the verification engineers! This gave them the strength required to close on the remaining coverage targets, and finally the fruits of their hard work were ready to be harvested. The verification engineers had a huge Tape-Out Feast where all the designers were invited to jointly celebrate the Turnkey Verification Environment they had developed and thank their design counterparts for their invaluable help in their hour of need.
They vowed that from then on they would write testbenches that could be easily shared among many verification groups, and that they would share methodologies and best practices for the benefit of the entire verification community.
Halloween and Elections just zoomed by, so naturally they were the focal point of the conversation I had the other day with my friend Coverage. First, I asked him whether he applies his expertise to teach kids how to strategize the candy collection process. I figured trick-or-treating has enough similarities with hunting for bugs: the candies are the bugs, the kids are the verification engineers (pardon the analogy), and the kids want to collect as many candies as possible. Since not everybody in the neighborhood may be inclined to treat, the kids need to come up with some good coverage points to make sure they hit the houses that are most generous with their candy. Pretty straightforward, I thought; however, the frown on my friend’s face was a clear indication that I hadn’t exactly hit the nail on the head. “Verification,” he said, “is a marathon, not a sprint; it’s not about going out one night and hitting the bug jackpot.” He went on to explain that it is more as if Halloween lasted for many months: although the goal would still be to collect as many candies as you can before time is up, scaling the process down to a couple of hours won’t paint the correct picture, nor will it teach us which paintbrushes to use. Ideally, he reasoned, one would design and apply coverage-based candy collection in such a way that it not only maximizes the total quantity of candies but also provides a relatively constant stream of candies over the life of the collection process. Bug-overload and sugar-overload are nasty beasts to tame, so there’s no need to rush the coverage-driven attack plan. When to start looking at coverage is sadly often overlooked, but important nevertheless in the quest for an efficient verification strategy. After all, knocking on a few neighbors’ doors will provide enough candies to get your kids started, and similarly, finding the first load of bugs does not require collecting coverage numbers.