Posted by Alex Seibulescu on January 13th, 2011
If you’re nostalgically inclined like me, you probably remember fondly the times when testbenches were nothing but initial blocks with assignments and forces. Alas, those days are long gone. The serenity of static, pure-Verilog testbenches has been replaced by the turbulence of dynamic SystemVerilog ones, where objects come and go as they please. Granted, they brought revolutionary progress in usability, re-usability, scalability, portability, and flexibility, but let’s not forget that we paid for all these “bilities” with a significant increase in testbench complexity. Let’s face it: verification engineers of yore had to “just” understand the design in order to do a magnificent job of verifying it, whereas their modern-day counterparts have to acquire significant object-oriented design skills and ramp up on VMM, UVM, OVM, etc. before they even attempt to write a single line of verification code. Add to that the fact that design sizes grow faster than the US National Debt, and that testbenches can’t afford to be left behind, and you’ll see why the other day I popped this question to my friend: how do we know that the testbench works as we intended? In other words, how do we test the testbench?
As you’ve probably already guessed, he had an answer handy, and yes, it did involve collecting coverage. Just as in any other verification exercise, figuring out what has not been tested is of crucial importance, and the parts of the testbench that have not been properly exercised can be identified by the right coverage measurements. Line coverage would be one way to go, but because of the many possible yet unused configurations in both internal and external VIPs, it is most likely not the most practical. A better approach is to capture the interesting scenarios that the testbench is supposed to generate with some simple covergroups; making sure those scenarios are appropriately covered will go a long way toward ensuring that the testbench is doing its primary job of generating comprehensive stimuli for the DUT. Low coverage in the DUT may also expose the same testbench problems, but it would take a lot more time to analyze and get to the root cause, so why not catch the problems where they’re easy to corner and identify? Besides, this exercise can be started even before the DUT is ready for showtime, with all the benefits associated with that.
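To make this concrete, here is a minimal sketch of what such scenario coverage might look like. The transaction class and its fields (kind, len) are hypothetical examples of my own invention, not anything prescribed by a particular methodology; the point is simply that the covergroup samples what the testbench *generates*, independently of the DUT:

```systemverilog
// Hypothetical transaction for a generic bus testbench.
class bus_xact;
  rand enum {READ, WRITE, RMW} kind;
  rand bit [7:0] len;
endclass

// Scenario coverage: did the stimulus generator actually produce
// the interesting cases we intended it to?
class scenario_cov;
  bus_xact tr;

  covergroup scenario_cg;
    cp_kind : coverpoint tr.kind;
    cp_len  : coverpoint tr.len {
      bins single  = {1};
      bins short_b = {[2:16]};
      bins long_b  = {[17:255]};
    }
    // The cross tells us whether, say, long WRITEs were ever generated.
    kind_x_len : cross cp_kind, cp_len;
  endgroup

  function new();
    scenario_cg = new();
  endfunction

  // Call this from the monitor (or driver) for every generated transaction.
  function void sample(bus_xact t);
    tr = t;
    scenario_cg.sample();
  endfunction
endclass
```

Holes in `kind_x_len` point straight at constraints or sequences that never fire, which is exactly the kind of testbench bug that is cheap to find here and expensive to find via DUT coverage.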
To wrap up: in addition to stimulus coverage, my friend recommends making it part of your New Year’s resolution to add scenario coverage targets to your testbench. He predicts that just by doing this, 2011 will be a much better verification year than you anticipated!