On Verification: A Software to Silicon Verification Blog
  • About

    Who knows what the future will bring! After implementing neural networks in analog CMOS for my MSEE at Ohio State, I moved to Japan to do digital ASIC design using the new VHDL language and fancy logic synthesis technology from a startup called Synopsys. This introduced me to the wonderful world of EDA, where I was able to explore lots of other cool new technologies from test automation at CrossCheck to FPGA synthesis at Exemplar to code coverage at TransEDA to testbench automation and methodology at Synopsys. Twenty years flew by in the blink of an eye!

    I am starting a new exploration around the bigger picture of what it takes to verify and validate increasingly complex designs on increasingly compressed schedules and budgets. This broad topic ranges from technology to economics; from embedded software development and architecture analysis to RTL and circuit design; from personal productivity to distributed team efficiency; from novel ideas to fundamental paradigm shifts; and from historical perspectives to predictions of future requirements. Please join me and share your thoughts on verification!

    - Tom Borgstrom

Getting the last 20%

Posted by Tom Borgstrom on July 6th, 2010

I am happy to write that Nusym’s pioneering coverage convergence technology is now part of Synopsys.

Over the years, I’ve seen the “long pole” in verification schedules shift based on the evolution of verification technologies and chip architectures.  A few years ago one of the long poles was writing tests – it was nearly impossible to think of and write the tests required to verify all of the likely operational scenarios for a complex design.  Fortunately constrained-random verification with SystemVerilog emerged and made it much easier to automatically generate the thousands of tests needed.  Today it is not uncommon to go from 0% to 80% coverage in just a few days after the SystemVerilog testbench is up & running.
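
To make this concrete, here is a minimal, hypothetical sketch of the constrained-random style described above, written in SystemVerilog. The packet fields, constraint values and loop count are my own illustrative assumptions, not taken from any particular design or testbench.

    // Hypothetical constrained-random stimulus sketch; names and ranges are illustrative.
    class bus_packet;
      rand bit [7:0]  addr;
      rand bit [31:0] data;
      rand bit [1:0]  burst_len;

      // Describe what is legal; the constraint solver explores the rest of the space.
      constraint legal_addr  { addr inside {[8'h10:8'hEF]}; }
      constraint short_burst { burst_len dist { 0 := 6, [1:3] :/ 4 }; }
    endclass

    module tb;
      initial begin
        bus_packet pkt = new();
        // Each randomize() call yields a new legal packet, so thousands of distinct
        // tests are generated without anyone having to write them by hand.
        repeat (1000) begin
          if (!pkt.randomize())
            $error("randomization failed");
          // ...drive pkt onto the DUT interface here
        end
      end
    endmodule

The engineer specifies only what is legal; the solver enumerates the scenarios, which is why a testbench like this can climb to 80% coverage so quickly.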

What about the remaining 20%?

Today, one of the long poles in verification is coverage convergence – the process where verification engineers analyze the coverage generated by constrained-random tests, identify gaps or “coverage holes”, and adjust the verification environment to try to fill the gaps.  If you think this sounds laborious, repetitive and time-consuming you’d be correct.  I’ve spoken to chip designers who say a third of their overall chip development schedule is spent in this iterative, largely manual, coverage convergence phase of verification.
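
To illustrate what those iterations look like, here is a hypothetical SystemVerilog sketch of a coverage hole and the kind of manual fix a verification engineer might apply; all names, bins and constraints are invented for the example.

    // Hypothetical coverage-convergence sketch; the design, bins and constraints are illustrative.
    class fifo_txn;
      rand bit [3:0] level;   // FIFO fill level at the time of the operation
      rand bit       flush;
    endclass

    // The manual convergence step: derive a biased variant aimed squarely at the hole.
    class fifo_txn_full extends fifo_txn;
      constraint target_hole { level == 15; flush == 1; }
    endclass

    module tb;
      // Coverage model: the "flush while full" cross is the kind of deeply buried
      // corner case that unbiased random stimulus may never reach.
      covergroup fifo_cg with function sample(bit [3:0] level, bit flush);
        cp_level : coverpoint level {
          bins empty = {0};
          bins mid   = {[1:14]};
          bins full  = {15};
        }
        cp_flush   : coverpoint flush;
        full_flush : cross cp_level, cp_flush;
      endgroup

      fifo_cg cg = new();

      initial begin
        fifo_txn_full t = new();
        repeat (100) begin
          void'(t.randomize());
          cg.sample(t.level, t.flush);   // the full/flush cross bin finally gets hit
        end
      end
    endmodule

Multiply this analyze-and-bias loop by hundreds of coverpoints and crosses and it is easy to see where a third of a development schedule can go.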

Automating the coverage convergence process is one of the grand challenges in functional verification.  Closing coverage holes requires precise control over logic buried deep in a design using only external design signals.  This is tough enough for verification engineers – hence all the time spent on coverage convergence.  It can be even harder for a tool to automate completely.  VCS currently offers automated coverage convergence for inputs through its Echo technology.  But what about coverage that is deeply buried in a design, many sequential levels deep from the design inputs?

Nusym’s automated coverage convergence technology is targeted at just this challenge. Nusym arguably invented the concept of automated coverage convergence back in 2004, and received glowing customer testimonials over the years (here, here, here and here). Their technology shows great promise both in providing the focused feedback needed to debug coverage issues and in intelligently generating stimulus to target coverage holes anywhere in the design or testbench. I look forward to this technology bringing down the “long pole” of coverage convergence!

Posted in Coverage | 2 Comments »

Verification Peace, Love and Interoperability

Posted by Tom Borgstrom on November 2nd, 2009

This is going to be a pretty boring post, without much drama. But, I think that’s OK (once in a while). Let me explain.

In the world of national and international politics, sensational news of conflict often gets more media attention than stories of cooperation, collaboration and progress. Much the same happens in the world of electronic design automation, albeit at a much smaller scale. Editors and bloggers alike are drawn to controversy, like moths to a light, in an effort to get more readers or pump up circulation. Verilog vs. VHDL! Vera vs. Specman! SystemVerilog vs. SystemC! VMM vs. OVM! Some readers are also drawn to this for the vicarious thrill of seeing their favorite company or technology face off against an opponent. It’s hard not to get caught up in it! Sometimes these debates actually help drive progress and consensus, but very often they rest on a false dichotomy and end up annoying chip developers who just want to get their designs out.

Stories of cooperation and interoperability tend to get less airtime amongst the media, perhaps because it is expected that companies will just make things work.  In the developed world, nobody writes stories about how the lights turn on or the phone works or the water runs.  However, I’d say that the EDA industry is not quite as developed as the public infrastructure in advanced countries.  Complex chip development technologies created by independent, competing companies don’t “just work” together without consistent focused effort and significant involvement from end users.

Peace

This Thursday (November 5) in Santa Clara, Synopsys will be celebrating the progress made over the past year in EDA interoperability and standards at its 22nd EDA Interoperability Forum with the theme “Peace, Love and Interoperability”. This all-day event, held at the Sun Conference Center at Agnews Historic Park, is open to both EDA tool developers and IP/chip developers.

The agenda includes quite a bit for verification-minded folks: learn the latest developments around SystemC TLM 2.0 for interoperable system-level models, the latest VMM methodology updates for interoperable verification environments, and the HapsTrak interface for open connectivity to FPGA-based rapid prototypes. As an added bonus, the first 100 attendees will receive free copies of the VMM for Low Power book and Doulos’ VMM Golden Reference Guide.

Registration is free, and breakfast and lunch are provided. I hope to see you there!

Posted in Uncategorized | Comments Off

Engineering an Elevator Pitch

Posted by Tom Borgstrom on October 26th, 2009

Do you have an elevator pitch for your IP, chip or system? Don’t say “I’m an engineer – pitching is marketing’s job!” Everyone developing technology products should be able to deliver their product’s elevator pitch.

What's Your Elevator Pitch?

Engineers have a strong tendency to quickly dive down into the technical minutiae of their product and lose the big picture. However, this detail often overwhelms and obscures the importance of their great new technology. Whether you develop IP, chips, electronic systems or even EDA software, the ability to effectively communicate the value and relevance of your product (or the feature you are working on) will help you stay connected to what is important for the product, your company and the customer.

An elevator pitch shouldn’t be long – less than a minute. It is a concise explanation of your product that you could give to your CEO (or your customer’s CEO or a potential investor’s CEO) should you find yourself next to him or her on an elevator for a few floors.

I’ll admit that in my 20 years in the semiconductor industry I haven’t often found myself in a position to give a pitch to a CEO who happened to be standing next to me in an elevator (lots of single-story buildings in Silicon Valley, I guess). But the process of crafting an elevator pitch is valuable in itself. It is a process to help crystallize in your mind the value proposition, target user and competitive differentiation of your product or feature – all very good things for engineers and marketeers alike to know.

Here’s my favorite way to quickly put together an elevator pitch, from the technology marketing classic Crossing the Chasm (I must be a real fan of Geoffrey Moore!). Just fill in the blanks and you’ll have a concise, compelling elevator pitch for your product or feature:

  • For (target customers)
  • Who (statement of the need or opportunity)
  • The (product name) is a (product category)
  • That (statement of key benefit – that is, compelling reason to buy)
  • Unlike (primary competitive alternative)
  • Our product (statement of primary differentiation)

Let’s try an example. Suppose we are Amdahl, a maker of plug-compatible clones of IBM mainframes, and let us say that our primary competitive alternative is Hitachi Data Systems. Our elevator message might be:


For Fortune 500 companies who are looking to cut costs and who operate in data centers of IBM mainframe computers, Amdahl’s computers are plug-compatible mainframes that match or surpass the equivalent IBM computers in features and performance, at a far more attractive price. Unlike the Hitachi line of computers, our products have been backed by the same service and support organization for over 20 years.


– Geoffrey Moore, Crossing the Chasm, p. 161


What’s your elevator pitch?

Posted in Uncategorized | Comments Off

High Level Synthesis and Verification?

Posted by Tom Borgstrom on October 12th, 2009

After a long break from On Verification, I’m back at it again now with today’s rollout of Synopsys’ latest synthesis tool – Synphony HLS. “Synthesis? I thought this blog was On Verification” you may ask. Let me explain.

As I’ve said before, software development and verification are the two biggest and fastest growing parts of the total cost of chip design. With chip development cost approaching $100m on advanced process nodes, any technology that improves productivity of software and verification teams can have a huge payback. High level synthesis (HLS) is one such technology.

In general, HLS raises the level of abstraction for design, enabling verification to also be done at a higher level and reducing downstream verification effort. With “correct-by-construction” generation of RTL and other views, more verification effort can be applied at higher levels without having to be repeated for RTL. This means designers can quickly explore many architectures (i.e. build the right design), write fewer lines of synthesizable code (i.e. code fewer bugs) and rapidly simulate complex functionality (i.e. complete verification sooner). So, HLS can be an important technology for chip developers looking to manage growing verification costs.

So, how does Synphony HLS fit into the world of high level synthesis? First of all, Synphony HLS is targeted at communications and multimedia chips with high algorithmic content – a large and growing market. Designers of these chips typically do algorithm development in the M-language using untimed, floating-point code. To reach silicon these engineers have to manually re-code their algorithms into fixed-point architectures and then re-verify – a time-consuming and error-prone process. Synphony HLS changes all of that by automatically synthesizing floating-point M-code into fixed-point RTL that is optimized for power, performance, area, etc. based on user constraints. We all know that “one size” of RTL doesn’t fit all, so Synphony HLS generates RTL targeted at the intended use – ASIC synthesis, FPGA synthesis or FPGA-based rapid prototype. Synphony HLS also generates a C-level representation of the fixed-point algorithm that can be used in virtual platforms for early software development. There’s nothing else in the market today that does all of this.
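
For readers who haven’t lived through that manual flow, here is an illustrative SystemVerilog sketch (emphatically not Synphony HLS output) of the kind of hand-written fixed-point RTL that algorithm engineers traditionally had to re-code from their floating-point M-code; the word lengths, Q-formats and truncation choices are assumptions made up for the example.

    // Illustrative hand-coded fixed-point multiply-accumulate; formats are assumptions.
    module mac_q15 (
      input  logic               clk,
      input  logic               rst_n,
      input  logic signed [15:0] a,      // Q1.15 sample
      input  logic signed [15:0] b,      // Q1.15 coefficient
      output logic signed [15:0] acc_q   // Q1.15 accumulated result
    );
      logic signed [31:0] product;
      logic signed [31:0] acc;

      assign product = a * b;            // full-precision Q2.30 product

      always_ff @(posedge clk or negedge rst_n) begin
        if (!rst_n)
          acc <= '0;
        else
          acc <= acc + product;          // accumulate (overflow handling omitted in this sketch)
      end

      // Truncate back to Q1.15; deciding where to round, saturate or truncate is
      // exactly the manual, error-prone work that high level synthesis automates.
      assign acc_q = acc[30:15];
    endmodule

Every one of those word-length and rounding decisions has to be made, coded and re-verified by hand in the traditional flow; generating and optimizing them automatically from the floating-point source is where the productivity gain comes from.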

So, while today’s announcement is about high level synthesis, it is really exciting news for verification as well!

Posted in System Level Design & Verification | Comments Off

Verification Methodologies: Standards will come; in the meantime, get that chip verified!

Posted by Tom Borgstrom on May 10th, 2009

Ed Sperling’s post at the System-Level Design Community revives a methodology topic that I think many verification teams are growing tired of and wish would get resolved. As Ed points out, the discussion of xxM versus yyM often devolves into a “religious” debate. This debate reflects the strong desire of verification teams to achieve higher productivity, but does little to help realize the goal.

With verification environments for complex chips often over a million lines of code, the case for higher productivity through verification reuse and interoperable VIP is strong. A key enabler on this front is to have a single, standard verification methodology and library. The last thing verification teams want to do is rewrite big chunks of their verification environment to fit in a different methodology, or to add layers of additional complexity to bind multiple disparate methodologies in a “franken-vironment”.

The ultimate user goal – a single, standard verification methodology and library – must be driven by an independent organization with an open and transparent process. The good news is that the verification community has recognized this and is well on its way to achieving this goal. For the past year or so, a technical working group at Accellera composed of chip designers and EDA vendors has been working furiously to develop standards for verification interoperability and ultimately a single, standard methodology. The process isn’t easy or fast, but progress is being made. Synopsys, for its part, has made significant contributions both in terms of technology and people to help support this process.

In the meantime, verification teams have to continue getting their chips out. Many take advantage of the productivity benefits of a vendor-supported methodology like the VMM, including advanced methodology applications, extensions into new domains, focused R&D investment and a worldwide support infrastructure.

Standards will come. In the meantime, get that chip verified!

Posted in Methodology | Comments Off

System Prototypes: Virtual, Hardware or Hybrid?

Posted by Tom Borgstrom on May 5th, 2009

If you are going to DAC this year, you may be interested in attending the panel System Prototypes: Virtual, Hardware or Hybrid that I’ve been organizing with Eshel Haritan of CoWare. We’ve got a great lineup of panelists from Amicus Wireless, Synopsys, Qualcomm, ST-Ericsson, LSI and CoWare who will debate the pros and cons of SystemC TLM-2.0 based virtual platforms, FPGA-based rapid prototypes or hybrids of the two for system prototyping. I’ve had a chance to peek at the panelists’ positions – all I can say now is that it promises to be a very lively discussion! Ron Wilson of EDN will moderate.

Tuesday July 28, 10:30am – 12:00pm; Room 102, Moscone Convention Center, San Francisco.

Posted in System Prototyping | Comments Off

Differentiation: It’s all about software…unless it’s about hardware

Posted by Tom Borgstrom on April 29th, 2009

So, you are a chip or system developer and you want to differentiate your next product. Do you implement key features in hardware or software?

For an increasing number of companies the answer seems obvious – hardware is a commodity and differentiation comes from software. It’s hard to argue with the results in some industries – over 50% gross margins on the software-centric iPhone compared to the 10%-30% gross margins on more hardware-centric feature phones. Platform chips can be customized with software to serve multiple end-products, software patches can fix bugs without hardware recalls or silicon respins, and firmware updates can extend the life of hardware platforms.

Industry data also seem to support this trend towards a bigger focus on software. According to IBS, over half of chip development costs are software-related at 65nm, with the percentage nearing 70% at 32nm. IBS also reports that the semiconductor revenue/R&D multiplier, once over 6:1 in favor of hardware R&D over SW R&D (i.e. you would get 6x more revenue investing R&D budget into HW than into SW), is now less than 2:1, indicating an increasing ROI on SW R&D. The ITRS 2008 roadmap update further confirms “software as an integral part of semiconductor products, and software design productivity as a key driver of overall design productivity”, adding that embedded software design “has emerged as the most critical challenge to SOC productivity”, requiring a “concurrent software development and debug infrastructure”.

Fortunately, a concurrent software development and debug infrastructure is emerging with the rise of high-performance system prototyping. System prototypes – combining SystemC TLM-2.0 based virtual platforms, FPGA-based rapid prototypes and related hardware & software IP – provide the performance, availability, fidelity and affordability to enable large embedded software development teams to start and often finish their work long before first silicon is available. This means that chip developers incorporating more software in their products can transition to volume production sooner.

But what about hardware? Will the design of customized silicon fade into oblivion with the growing importance of software? Don’t count on it. SoCs continue to enable strong differentiation in terms of cost, size, power and performance. What we seem to be seeing is a more balanced approach to differentiating semiconductors and electronic systems, where hardware and software both play important roles in delivering value to the end-product in many markets. You can see this trend played out today at the maker of the iPhone, whose CEO once said that the “phone of the future will be differentiated by software” but continues to invest in building a very strong chip development team – presumably to differentiate future phones.

Differentiation – it’s all about software…unless it’s about hardware.

Posted in System Prototyping | Comments Off

Multicore Verification — Don’t Waste Your Cores!

Posted by Tom Borgstrom on April 6th, 2009

This weekend I found myself telling my 9-year old daughter to “Eat all of your beans — don’t waste your food!”  It has always bothered me to see perfectly good food get thrown away; these days it seems even more relevant. The same can be said about processor cores in verification.

About a year ago, Synopsys announced a corporate initiative to take advantage of multicore processor technology across its broad tool portfolio. On the verification front, the first tool out of the multicore gate was HSPICE, with impressive performance gains. What about Synopsys’ other verification tools?

Well, functional verification and FastSPICE tool users no longer have to suffer multicore-envy. Today, Synopsys announced its Discovery 2009 verification platform, including new VCS multicore technology and the new CustomSim unified circuit simulation solution with multicore capabilities.

This is a really big deal. Over the past few years, processor advances have come largely through multicore architectures rather than raw clock speeds, while the effort required for SoC verification has continued to grow exponentially. Improvements in single-core simulation algorithms, verification methodologies and use of compute farms have enabled large gains in verification productivity and will continue to do so. Multicore technology represents a fundamental and important new element of the verification toolkit, essential to taking advantage of the processing power available today and in the future.

So, with a nod to the new sense of thrift emerging in many households, you can do your part and take advantage of all of the cores in your workstation’s processor. “Use all of your cycles — don’t waste your cores!”

Posted in Multicore Verification | Comments Off

Rapid Prototyping & Core vs. Context

Posted by Tom Borgstrom on February 9th, 2009

In the rush of getting ready to head to the airport a couple of days ago, I grabbed a copy of the 2002 book Living on the Fault Line: Managing for Shareholder Value in Any Economy by Geoffrey Moore, one of my favorite business authors. These kinds of books tend to get dated pretty quickly, but I wanted to browse through it to see if it still had any relevant insights given today’s economic slowdown. And, in the worst case, it could help me fall asleep on the long flight to Beijing!

The part that really caught my attention was Chapter 2: Core versus Context. The author defines core activities as any behavior that can raise your stock price; everything else is context. For these core activities the goal is to “differentiate as much as possible on any variable that impacts customers’ purchase decisions and to assign one’s best resources to that challenge.” The author goes on to describe different scenarios for outsourcing, partnering, contracting or making based on whether the activity is core or context, mission-critical or supporting. But, the basic ideas are pretty simple: (a) to maximize shareholder value, companies should engage (i.e. focus time, talent and management attention) on their core activities and disengage on context, and (b) one company’s context can be another company’s core.

So, what does all of this b-school theory have to do with verification? Today, Synopsys announced an expanded Confirma rapid prototyping platform. Rapid prototyping is an increasingly important part of SoC development, enabling early embedded software development and system validation. With Confirma you can, for the first time, get everything you need to create a rapid prototype from a single company – design partitioning, FPGA synthesis, high-performance prototype boards and fully-enclosed systems, expansion boards, transaction-based co-verification interfaces, and prototype debug technology. Plus a single point of contact for prototype training and support around the world.

This opens up some interesting opportunities for chip and system vendors looking to focus their attention on core activities. While designing and building high-performance prototype boards is very difficult (1700+ pin FPGAs, 20+ layer boards, high-speed interconnect, etc.), does it really differentiate your company? Should you have some of your best engineers designing and debugging prototype boards, or should this talent be focused on designing and verifying your next product to prepare for the inevitable market recovery?

For many companies, the answer is that prototype design has become context and the smart move is to work with a trusted partner for prototype development, enabling key engineering and management talent to engage on core activities. This type of transition repeats periodically in the EDA industry. It wasn’t too long ago that some semiconductor vendors actually wrote their own HDL simulators for in-house use; today most if not all HDL simulator development has been “outsourced” to EDA companies. In the end, buying simulators from EDA companies actually saved semiconductor vendors money by allowing them to engage their attention and resources on core activities.

Having the best, most complete rapid prototyping solution is absolutely core for Synopsys. We’ve got top R&D and management talent focused every day on designing the highest-performance, most reliable hardware; writing and testing the best & most integrated implementation and debug software; and working with customers from many industries to conceive and plan even better prototyping systems in the future. Your context is our core.

If you are interested in learning more about rapid prototyping, please sign up for one of our worldwide prototyping management seminars or technical workshops, which we kicked off today in China. Or drop me a line – I’d love to hear from you on verification!

Posted in System Prototyping | 2 Comments »

Welcome to On Verification!

Posted by Tom Borgstrom on January 21st, 2009

I thought I would break the ice on my first blog post here and tell you a little bit about myself and my history in verification. (Well, it was actually not my thought but rather a suggestion from fellow blogger Karen Bartleson over at The Standards Game that helped me break my writer’s block and get started on this… Thanks, Karen!)

I verified my first design way back in 1990 when I was an ASIC designer at Matsushita in Osaka. My chip was a video image processor / system controller for a “TV Door Phone”, and we were the first team to use this new-fangled VHDL language, logic synthesis and HDL simulation (anyone remember the Very Slow Simulator? ;). Top-down design was very cool back then, and I was able to run the same simulations (I don’t think we called them testbenches yet) against high level models, RTL and netlists. Analysis was quite a chore; I remember having to tape multiple 3-meter long waveform printouts up on the wall to really understand what was happening throughout a complete video frame. I also recall very long nights as we approached our tapeout date, trying to complete timing verification of our scan chains and wondering if we had missed any errors in the functional waveforms. In the end we taped out not when verification was complete, but when my boss said “time’s up!”. Sound familiar? As it turned out, the gate array came back and worked, apart from about half of the scan chains having hold-time violations. We finished the entire design, from concept and tool selection to working first silicon, in a year. Top-down design was declared a success!

I then joined the EDA industry, working at a variety of startups focused on DFT, design services, FPGA synthesis, code coverage and functional verification, until joining Synopsys five years ago to focus on testbench languages, tools and methodologies.

One of the interesting things I’ve observed over the past few years is how complex verification has become. Million-line testbenches? Object-oriented programming? It seems like you have to be a software engineer to do verification these days. But without constrained-random testbenches, powerful debug environments and very fast simulators, I can’t imagine how anyone could verify a modern chip – certainly not by taping waveforms on the wall and hoping you didn’t miss anything! “Hope” may be a mantra for the new Obama administration, but I don’t think it’s the best way to verify your design!

And it’s not just about digital simulation anymore. Chips have really become Systems on Chips, with mixed-signal blocks, multiple embedded processors, 3rd-party IP, lots of embedded software and globally-distributed design teams becoming the norm for most designs rather than the exception. How to efficiently verify these super-complex chips and systems, from both a technology/methodology perspective and an organizational/economic perspective, is what I’m interested in now and what I look forward to writing about in the coming months.

Until then, keep your eye On Verification!

Posted in Uncategorized | 2 Comments »