It is once again that time of the year when I panic. What gifts, for whom, do they even need them, what if I don’t, etc. I am naturally always on the lookout for good advice, so I asked my friend to share some wisdom. “I give everybody the gift of coverage, of course,” he replied, slightly raising his eyebrow as if to remind me to refrain from asking the obvious. “Everybody needs it even if they don’t know it yet, and if they don’t get it, there will always be a hole in their lives.” I nodded in thoughtful agreement while at the same time wondering whether the gifts of coverage end up piling up in the equivalent of my garage somewhere, or whether people have the patience to go through the many levels of wrapping to extract their true value.

I was still thinking about this when I entered the UPS office where I had to pick up an undelivered package and noticed, somewhat irritated, that there was a line at the counter. Of course, it is December and shipping companies are in high gear, kind of like computers in the month before tape-out. They need to collect all the packages, sort them and then distribute them to the appropriate destinations. Similarly, coverage data needs to be collected, merged, processed and reports sent to the appropriate stakeholders. Just as processing the incoming packages needs to happen simultaneously at multiple locations, coverage data from the many regression runs needs to be merged in parallel to keep the entire verification process from slowing to a snail’s pace (no pun on snail mail maliciously intended). So while the speed with which coverage data is produced retains its critical importance, the speed with which the coverage data is collected and merged should not be overlooked, lest it become the frightening bottleneck. The key, of course, is to efficiently parallelize the process, so this season when you share my friend’s gift, don’t forget to ask your shipping company, whoops, I mean verification vendor, how to do it in parallel!
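The parallel merge my friend hints at can be pictured as a tree reduction: merge regression runs pairwise, and since each level’s merges are independent, farm them out concurrently. Here is a minimal sketch in Python; the bin names and the merge rule (summing hit counts per coverage bin) are illustrative assumptions, since real coverage databases are vendor-specific:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor


def merge_two(a, b):
    """Merge two coverage maps (bin name -> hit count) by summing counts."""
    merged = Counter(a)
    merged.update(b)  # Counter.update adds counts rather than replacing them
    return dict(merged)


def tree_merge(runs):
    """Pairwise tree merge of per-run coverage maps.

    Every merge within a level is independent, so each level can run in
    parallel (here with threads; processes or separate machines also work).
    """
    while len(runs) > 1:
        pairs = [(runs[i], runs[i + 1]) for i in range(0, len(runs) - 1, 2)]
        with ThreadPoolExecutor() as pool:
            merged = list(pool.map(lambda p: merge_two(*p), pairs))
        if len(runs) % 2:          # odd run out carries over to the next level
            merged.append(runs[-1])
        runs = merged
    return runs[0] if runs else {}


# Three regression runs, merged log2(N) levels deep instead of serially.
runs = [{"fifo_full": 1}, {"fifo_full": 2, "reset": 1}, {"reset": 3}]
total = tree_merge(runs)
```

The point of the tree shape is that merging N runs takes roughly log2(N) sequential steps instead of N, which is exactly what keeps the merge from becoming the bottleneck as regressions scale.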
The other day I listened in while my friend Coverage told the neighborhood kids the Thanksgiving story. It went something like this. Once upon a time the verification engineers in the Old World design houses were not free to practice verification the way they wanted. Instead, they had to follow a strict directed test methodology imposed on them by long-standing traditions. One day a group of them gathered a few of their test benches, boarded a chartered bus they called “Randflower” and left to join a new company where they could practice their own verification methodology rooted in the principles of freedom. From then on verification engineers would be free to write coverage targets based on verification plans written in a language that everybody could understand, and even testbench variables would be free to take whatever values they wanted as long as they satisfied some constraints. However, the road to success was not to be an easy one. They had to work long hours and come up with new methodologies and tools to take advantage of them. The long nights and weekends of hard work soon started taking their toll. Many of them quit and moved to the software industry, and those who stayed turned weak with frustration and exhaustion. But then one day a miracle happened. The indigenous designers in the company, who had been working there for many years, gathered together and shared their knowledge about the design with the verification engineers! This gave them the strength required to close the remaining coverage targets, and finally the fruits of their hard work were ready to be harvested. The verification engineers had a huge Tape-Out Feast where all the designers were invited to jointly celebrate the Turnkey Verification Environment they had developed and to thank their design counterparts for their invaluable help in their hour of need.
They vowed that from then on they would write testbenches that could be easily shared among many verification groups, and that they would share methodologies and best practices for the benefit of the entire verification community.
Halloween and Elections just zoomed by, so naturally they were the focal point of the conversation I had the other day with my friend Coverage. First, I asked him whether he applies his expertise to teach kids how to strategize the candy collection process. I figured trick-or-treating has enough similarities with hunting for bugs: the candies are the bugs, the kids are the verification engineers (pardon the analogy) and the kids want to collect as many candies as possible. Since not everybody in the neighborhood may be inclined to treat, the kids need to come up with some good coverage points to make sure they hit the houses that are most generous with their candy. Pretty straightforward, I thought; however, the frown on my friend’s face was a clear indication that I hadn’t exactly hit the nail on the head. “Verification,” he said, “is a marathon, not a sprint; it’s not about going out one night and hitting the bug jackpot.” He went on to explain that it is more as if Halloween lasted for many months, and although the goal was still to collect as many candies as you can before the time is up, scaling the process down to a couple of hours won’t paint the correct picture, nor will it teach us which paintbrushes to use. Ideally, he reasoned, one would design and apply coverage-based candy collection in such a way that it not only maximizes the total quantity of candies but also provides a relatively constant stream of candies over the life of the collection process. Bug-overload and sugar-overload are nasty beasts to tame, so there’s no need to rush the coverage-driven attack plan. When to start looking at coverage is something that is sadly often overlooked, but important nevertheless in the quest for an efficient verification strategy. After all, knocking on a few neighbors’ doors will provide enough candies to get your kids started, and similarly, finding the first bug load does not require collecting coverage numbers.
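The candy analogy maps neatly onto a plain functional coverage model: a set of named bins you want hit, a running score, and a report of the holes still left to chase. A toy sketch in Python, with made-up bin names purely for illustration (real coverage models live in the testbench language, e.g. SystemVerilog covergroups):

```python
class CoverageModel:
    """Toy functional coverage model: named bins, hit counts, hole report."""

    def __init__(self, bins):
        # Every bin starts unhit; a bin counts as covered once hit at least once.
        self.hits = {name: 0 for name in bins}

    def sample(self, name):
        """Record one observation of a bin (unknown names are ignored)."""
        if name in self.hits:
            self.hits[name] += 1

    def percent(self):
        """Fraction of bins hit at least once, as a percentage."""
        hit = sum(1 for count in self.hits.values() if count > 0)
        return 100.0 * hit / len(self.hits)

    def holes(self):
        """Bins never hit: these are the houses nobody has knocked on yet."""
        return [name for name, count in self.hits.items() if count == 0]


# Hypothetical packet-type bins; two get hit, one remains a hole.
cov = CoverageModel(["pkt_small", "pkt_large", "pkt_error"])
cov.sample("pkt_small")
cov.sample("pkt_small")
cov.sample("pkt_error")
```

The hole report, not the percentage, is what steers the “marathon”: it tells you which houses to knock on next, and watching it shrink over months is the steady candy stream my friend describes.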
Metric Driven Verification, Coverage Driven Verification: in one form or another everybody is talking about the same thing, so I thought about asking my friend how he likes being in the driver’s seat. With a sheepish grin he replied that it depends on whether he gets to drive a Trabant or a BMW. Both German cars, mind you, but… So that got me thinking. Lately, we have all been talking up methodologies, verification plans, intelligent testbenches, guiding metrics, assertion densities, etc. Now these are all great topics and we definitely need to keep nurturing them; however, one rather important detail seems to have slipped out of the conversation, and that is the good ole’ workhorse of every verification flow, the simulator. For no matter how carefully we design our testing strategies, how cleverly we place our coverage targets and how many assertions we sprinkle around the design, we still rely on the simulator to get us to our destination. Whether we define our destination as squashing every bug in sight or meeting ever-shrinking delivery timelines, it will be the reliability and performance of the simulator that saves the day. From his great driver’s vantage point, my friend confessed that while having a GPS to tell him where he is, an intelligent radio to alert him to traffic jams, and an on-board computer to report instant gas mileage are all great assets, once he gets on the Autobahn and the pedal hits the metal, there is nothing like driving the fastest car on the road. So much for car advice.
When my friend first muttered these words, I thought he had decided to leave the rewarding world of verification and become a political pundit. I was dismayed because I don’t like the term pundit; it rhymes with bandit. Ever wondered why we have verification gurus while they have political pundits? In any case, it turned out it was about a far less controversial kind of stimulus package, the one you feed to your design to tickle its inputs and make its transistors go crazy. Phew!
You may remember the suggestive metaphor of a butterfly flapping its wings in California and causing a tornado on the other side of the globe. No, I am not attempting to apply chaos theory to coverage (although… ;-)); I am merely picking up where I left off last time and making a case that the various parts that collectively define the coverage problem are tightly connected, and that the decisions we make for each part will inevitably have a profound effect on the others. Let’s take a look at some of these connections and the potential pitfalls if we ignore them.
I once had lunch with the CEO of a start-up and asked him what skills he thought I needed to acquire if I wanted to run my own company. He quickly replied that I needed to understand Marketing. Now, if you come from the Engineering side of our world as I do, your reaction is probably similar to mine at the time: “What?” I thought, “is getting ready for DAC really that important??” In case this is indeed what you’re thinking, here are two must-reads: “Marketing High Technology” by William Davidow and “Crossing the Chasm” by Geoffrey Moore. It turns out that there is a lot more to Marketing than meets the ignorant eye. Among the many interesting concepts, the one that stuck with me is called, in one way or another, “The Whole Product”. Its beauty lies, as it often does, in its simplicity, and it goes something like this: real success comes from delivering to the market a Whole Product and not just a piece of technology. It is pretty common sense: you don’t just put out a chip or a piece of software and hope somebody will figure out what to do with them. You either build complete systems around the chip (or make sure somebody else does) or integrate your software within existing flows; you strike alliances with other software and/or hardware providers; you build a support infrastructure and delivery channels; you prepare documents, trainings, and so on. You get the idea, and if you don’t, try typing iPhone in your search bar.
Guess I should start with a disclaimer: I don’t know much about risk management. If I did, I would start an investment outfit and call it “Golden Socks”. “Golden Socks” would capitalize on the universal dream of software ownership, help fuel a software bubble and get you to buy software you cannot afford. At the same time, “Golden Socks” would bet that you won’t be able to pay maintenance for your software and will have to give it back. Or something like that.
Coverage and I got acquainted fairly late in my life. This has nothing to do with me getting old and my dear wife outfitting me with Abercrombie gear to cover my age; it has to do with me joining a startup called Nusym back in 2008. That’s when Coverage and I became friends. Nusym pioneered the technology that lies at the root of what the good marketing folks call “Coverage Convergence Technology”, “Intelligent Testbench” and some other tempting names that make even the most cold-blooded verification engineer purr with anticipation. Nusym’s exciting technology has in the meantime become part of the Synopsys family, and I have followed it partly out of loyalty to my friend Coverage.