Future of Design
  • About

    My goal is to discuss advances in design methodology, particularly in the areas of low power design and raising the level of abstraction in design above the RTL level.

    - Mike Keating

State Space – Key to the Art of Good Design

Posted by mike keating on June 23rd, 2009

There is a conventional argument that complete verification is impossible. It goes like this: even a simple design of 100 flops has a state space of 2^100, which, simulating at a GHz, would take longer than the life of the universe to completely test. This argument raises some important points.
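
To put some rough numbers on that claim: 2^100 is about 1.3 × 10^30 states, so even visiting one state per nanosecond (10^9 states per second) would take roughly 1.3 × 10^21 seconds – on the order of 4 × 10^13 years, or a few thousand times the estimated age of the universe.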

One key point is that verification is the hardest problem in chip design and in EDA. It is NP-complete, like many other problems in EDA, such as optimization. But it is the one problem for which we do not have heuristics that give us a “good enough” solution. Therefore we must keep the state space of a design as small as possible – the only practical way to manage NP-complete problems.

Another key point is that a state space of 2^100 is clearly too large for any human to understand. So we are developing designs no one understands. This can’t be good!

In my experience, most designs can be refactored to reduce the state space by orders of magnitude. By partitioning the design well, we can make the resulting state space much easier to understand. In fact, improving how we manage design state space is the key to improving how we do design and verification.
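
As a toy illustration (my own made-up example, not from the chapter): two 8-bit counters that never interact have a combined state space of 2^16 if we treat them as one lump, but only two independent spaces of 2^8 each if the partitioning makes their independence explicit. In RTL terms, that might look something like this sketch:

    // Hypothetical sketch: two independent counters partitioned into separate
    // modules, so each can be understood and verified on its own (2 x 2^8
    // states to reason about, rather than one coupled space of 2^16).
    module byte_counter (
      input  logic       clk,
      input  logic       rst_n,
      input  logic       en,
      output logic [7:0] count
    );
      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n)  count <= '0;
        else if (en) count <= count + 8'd1;
    endmodule

    module counter_pair (
      input  logic       clk,
      input  logic       rst_n,
      input  logic       en_a,
      input  logic       en_b,
      output logic [7:0] count_a,
      output logic [7:0] count_b
    );
      // The two instances share no state, so they can be verified separately.
      byte_counter u_a (.clk(clk), .rst_n(rst_n), .en(en_a), .count(count_a));
      byte_counter u_b (.clk(clk), .rst_n(rst_n), .en(en_b), .count(count_b));
    endmodule

The point is not the counters themselves but the discipline: when unrelated state lives in separate, clearly bounded blocks, the state space we actually have to reason about is the sum of the pieces, not their product.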

Chapter 4 of The Art of Good Design discusses this key issue of state space management.


Refactoring: Understanding Incomprehensible Code

Posted by mike keating on June 8th, 2009

Most of the code I read – and I read a lot of other people’s code – is utterly incomprehensible on a first reading. In the last post, I described the major cause of this: the unstructured format of most RTL.

Chapter 3 of The Art of Good Design describes a project where I re-wrote (refactored) a large (28 page) module to make it simpler and easier to understand. The initial code was well-written and completely clear – to the engineer who created it. I was determined to understand its behavior without consulting the original author. Limiting myself to just reading the code, it took me weeks to figure out what the code really did. Even then, I had to restructure the code extensively to analyze it. My claim is that once I rewrote the code in a more structured fashion, the code became clear and obvious. I also claim that it is quantitatively much simpler – the state space is much, much smaller.
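
To give a flavor of the kind of restructuring I mean – this is an invented fragment, not the module from the chapter – compare anonymous state bits with scattered next-state updates against an enumerated state type and a single, named next-state function:

    // Hypothetical before/after fragment (names and protocol are made up).

    // Before: unnamed encodings, next-state logic spread through the file.
    //   reg [1:0] st;
    //   always @(posedge clk)
    //     if (st == 2'b00 && req)      st <= 2'b01;
    //     else if (st == 2'b01 && ack) st <= 2'b10;
    //     else if (st == 2'b10)        st <= 2'b00;

    // After: named states, one place that defines the next-state behavior.
    module handshake_fsm (
      input  logic clk,
      input  logic rst_n,
      input  logic req,
      input  logic ack,
      output logic busy
    );
      typedef enum logic [1:0] {IDLE, WAIT_ACK, DONE} state_t;
      state_t state;

      function automatic state_t next_state(state_t s, logic req_i, logic ack_i);
        case (s)
          IDLE:     next_state = req_i ? WAIT_ACK : IDLE;
          WAIT_ACK: next_state = ack_i ? DONE     : WAIT_ACK;
          default:  next_state = IDLE;
        endcase
      endfunction

      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) state <= IDLE;
        else        state <= next_state(state, req, ack);

      assign busy = (state != IDLE);
    endmodule

The behavior is the same, but the reachable states now have names, the transitions live in one place, and the rest of the module can be read without simulating it in your head.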

Here is Chapter 3.


From Chaos to Structured RTL

Posted by mike keating on May 5th, 2009

Much of the RTL that I see – especially older legacy RTL – is completely unstructured. It appears (at least at first glance) to be randomly placed combinational and sequential processes – always @(*), assign, and always @(posedge clk) statements. Whatever structure is in the code is defined by the position of the statements and the surrounding comments. This is exactly what assembly language code looks like. And we have 30+ years of experience telling us that such unstructured code is a disaster. Modern software languages provide a rich set of constructs to facilitate structured code – functions, classes, structs, unions, etc.

With the introduction of SystemVerilog, many of these same tools are available for writing structured RTL. Now we need to start using them, and migrating away from the chaotic assembly-level coding of the past.
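
A small, hypothetical example of the difference (the signal names are invented): bundling related signals into a struct and giving a name to a computation that would otherwise be an anonymous cloud of assign statements.

    // Hypothetical sketch of structured SystemVerilog style.

    // Related signals grouped into one named type instead of three loose nets.
    typedef struct packed {
      logic       valid;
      logic [7:0] data;
      logic [3:0] dest;
    } pkt_hdr_t;

    module hdr_check (
      input  logic     clk,
      input  logic     rst_n,
      input  pkt_hdr_t hdr_in,
      output logic     accept
    );
      // A named predicate instead of an unnamed assign buried in the file.
      function automatic logic is_local_dest(pkt_hdr_t h);
        return h.valid && (h.dest == 4'd0);
      endfunction

      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) accept <= 1'b0;
        else        accept <= is_local_dest(hdr_in);
    endmodule

None of this changes what the hardware does; it changes how much of the designer's intent survives in the code.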

Here is Chapter 2 of The Art of Good Design. This chapter starts outlining the process by which we migrate to structured RTL. As always, comments are more than welcome!


Complexity of Design

Posted by mike keating on April 23rd, 2009

Designs today are complicated. Very complicated. From video codecs to PCI Express to set top boxes, we are dealing with extremely complex protocols and algorithms. The design and verification – especially the verification – of these systems are growing so complex that we have to question whether RTL is the right abstraction for this work.

I am currently working on an extensive write-up – someday to become a book – about this challenge. The first part focuses on how to measure complexity in design, and how to minimize it, within the constraints of the current synthesizable subset. The second part will focus on attempts to raise abstraction above the RTL level, and on their successes and failures.

Over the next few weeks, I will be posting parts of this write-up, one chapter at a time. The working title is: The Art of Good Design: Managing Complexity in Billion Gate Chips. I welcome your comments and criticisms!

Chapter 1: Introduction (contains the requisite nod to Moore’s Law and some basic concepts). The real action starts with Chapter 2, to be posted in about 10 days.


Welcome to the Future of Design

Posted by mike keating on March 31st, 2009

I recently gave a talk at SNUG that captures the key topics I propose to blog about: How the challenges of low power, productivity, software, and verification are going to change every aspect of design over the next few years. The video for the SNUG talk is available here.

My central thesis in the talk is this: For the last 30 years, semiconductor technology has given us a (nearly) free ride. By scaling the CMOS transistor and lowering Vdd, we have been continuously reducing the cost, improving the power efficiency (per MIPS), and increasing the performance of chips. This free ride is now over; scaling no longer provides clear benefits or a clear technical direction for the future. Instead, we are entering an era of innovation – and of limits. Disruptive semiconductor technologies – like high-k dielectrics and metal gates, EUV lithography, and FinFET transistors – may keep reducing the size of transistors, but not necessarily the cost. Innovative power management techniques such as power gating and Dynamic Voltage and Frequency Scaling have dramatically reduced power in SoCs during a period when semiconductor technology has not. But it is not clear that there are new (low-level) design techniques that will continue to reduce power significantly.

Instead, the biggest opportunities over the next 3-5 years will be in how we use the technology we have available – how we innovate at the RTL, architectural and system level. Over the next few weeks I will be expanding on some of the ideas introduced in the SNUG talk about how we can innovate in these areas. I welcome your comments, objections, and arguments on all these topics!
