A View from the Top: A Virtual Prototyping Blog

 

Disruptive Ripple Effects From Implementation to Systems

The big topic these days seems to be the effects of 3D and silicon technology. Even though I am now more of a system-level guy, I have a full appreciation of technology effects: for the first chip I developed, I had to design a three-transistor memory cell that ended up in an FFT chip for HDTV research. An interesting question I get asked more often these days is how the changes in semiconductor technology and assembly will impact the system level. My answer is: profoundly! How fast we will get there, and how disruptive the changes will be, remains an open question to me.

[graph]

When I developed the FFT chip mentioned above, it was with Cadence Edge. Oops, I just gave away my age, didn't I? Needless to say, as indicated in the graph, we used RTL for verification only, had our own library of layout cells, and assembled them by hand based on gate-level schematics entered manually.

Later that decade I evaluated Logic Synthesis for Deutsche Telekom. Great stuff, combining RTL and gates and mapping from one to the other.

Well, even later that decade I arrived in the US, already very much involved in system-level design, but closely following the activities around what was at the time called “Physically Knowledgeable Synthesis”. Layout had been added to the mix, and its effects were folded into the logic synthesis process because the good old metrics for predicting the connections between blocks and gates had broken down.

New decade, new challenge. Variability in manufacturing broke the good old flows and had to be considered as part of the equation. The common thread: at every step, design predictability was improved by characterizing lower-level technology effects.

So where are we today? Transaction-Level Models (TLMs) had still been disconnected from the implementation process. That is, until last year, when we rolled out characterization of technology for low power all the way up into TLM models as part of the TSMC ESL Reference Flow. As a result, the links from TLM-based design to implementation are becoming tighter, and predictability improves.
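To make the idea concrete, here is a minimal sketch, in plain C++ rather than the actual TSMC ESL Reference Flow or a SystemC/TLM library, of what power characterization pulled up to the transaction level can look like: a memory model that accumulates energy per read/write transaction using per-access numbers of the kind that technology characterization would supply. All names and figures below are illustrative assumptions, not real library data.

```cpp
// Sketch only: transaction-level memory annotated with placeholder
// characterization data (per-access energy, leakage).
#include <cstdint>
#include <cstdio>

struct PowerProfile {            // hypothetical characterization data
    double read_energy_pj;       // energy per read transaction (pJ)
    double write_energy_pj;      // energy per write transaction (pJ)
    double leakage_mw;           // static power (mW)
};

class TlmMemory {
public:
    explicit TlmMemory(PowerProfile p) : profile_(p) {}

    // A "transaction" here is just a word read/write; the model tracks
    // cycles and accumulated dynamic energy instead of gate-level detail.
    uint32_t read(uint32_t addr) {
        energy_pj_ += profile_.read_energy_pj;
        ++cycles_;
        return mem_[addr % kWords];
    }
    void write(uint32_t addr, uint32_t v) {
        energy_pj_ += profile_.write_energy_pj;
        ++cycles_;
        mem_[addr % kWords] = v;
    }
    double total_energy_pj(double cycle_ns) const {
        // dynamic energy from transactions + leakage integrated over runtime
        return energy_pj_ + profile_.leakage_mw * cycles_ * cycle_ns; // mW * ns = pJ
    }

private:
    static constexpr uint32_t kWords = 1024;
    uint32_t mem_[kWords] = {};
    PowerProfile profile_;
    double energy_pj_ = 0.0;
    uint64_t cycles_ = 0;
};

int main() {
    TlmMemory mem({5.0, 7.5, 0.2});   // placeholder energy/leakage numbers
    for (uint32_t i = 0; i < 1000; ++i) { mem.write(i, i); mem.read(i); }
    std::printf("estimated energy: %.1f pJ\n", mem.total_energy_pj(1.0));
}
```

The point is not the arithmetic but the abstraction: the same transaction stream that drives functional verification also drives an energy estimate, because the technology characterization has been attached to the TLM rather than to gates.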

So back to the original premise – will technologies like 3D change design flows all the way up to the system level? Absolutely! Are we ready from a technology perspective? Pretty much so. System-level tools helping with the “what if” decisions are largely agnostic to whether they deal with chips, chip-sets, or systems. A good example is tools for Multicore Optimization: their applicability goes well beyond the chip, and they are used to make architecture decisions for chips, for chip-sets, and for boards.
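As a sketch of what such a “what if” sweep amounts to, the toy model below (my own simplified Amdahl-style model, not any particular vendor tool) compares candidate platforms that differ only in their interconnect penalty, standing in for single-chip, chip-set, and board-level partitionings. Every number is a made-up placeholder.

```cpp
// Toy architecture exploration: sweep core count against an assumed
// per-core interconnect/contention penalty for a fixed workload.
#include <cstdio>

double estimate_runtime_ms(double work_ms, double serial_fraction,
                           int cores, double comm_penalty_ms_per_core) {
    double serial   = work_ms * serial_fraction;                 // Amdahl limit
    double parallel = work_ms * (1.0 - serial_fraction) / cores;
    return serial + parallel + comm_penalty_ms_per_core * (cores - 1);
}

int main() {
    const double work_ms = 100.0, serial_fraction = 0.1;
    // Candidate platforms: same cores, different communication cost
    // (e.g. on-chip vs. chip-to-chip vs. board-level links).
    const double penalties[] = {0.2, 1.0, 3.0};
    for (double p : penalties) {
        std::printf("penalty %.1f ms/core:", p);
        for (int cores = 1; cores <= 16; cores *= 2)
            std::printf("  %2d cores -> %6.1f ms", cores,
                        estimate_runtime_ms(work_ms, serial_fraction, cores, p));
        std::printf("\n");
    }
}
```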

There is one caveat though, and it is a huge one. These tools need models to feed them, and the models determine their applicability. Case in point: if the tools for Multicore Optimization are supposed to help assess the “what if” around 3D effects – for example, how the partitioning of memory among chips will impact performance – then appropriate models need to be available. Here is where the battle will be fought and the effort will have to be spent. Without models we will be lost.
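To illustrate the kind of model that is missing, here is a minimal sketch of the memory-partitioning example, again just my own toy arithmetic and not a real 3D flow: average access latency as a function of how much of the working set sits in stacked (TSV-connected) memory versus external DRAM. The latencies and the locality curve are assumed placeholders.

```cpp
// Toy 3D "what if": how does moving more of the working set into
// stacked memory change the average access latency?
#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
    const double stacked_latency_ns  = 12.0;   // assumed 3D-stacked access
    const double external_latency_ns = 60.0;   // assumed off-package access
    const double working_set_mb      = 256.0;

    for (double stacked_mb = 32.0; stacked_mb <= 256.0; stacked_mb *= 2.0) {
        // crude locality assumption: fraction of accesses served locally
        double hit = std::min(1.0, std::sqrt(stacked_mb / working_set_mb));
        double avg = hit * stacked_latency_ns + (1.0 - hit) * external_latency_ns;
        std::printf("%4.0f MB stacked -> %.0f%% local, avg latency %.1f ns\n",
                    stacked_mb, 100.0 * hit, avg);
    }
}
```

Whether the answer is credible depends entirely on where those latency and locality numbers come from, which is exactly why model availability is the battleground.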

Still, approaches like the TSMC ESL Reference Flow – which provides models of the technology all the way up to the level of TLMs – are a clear indicator that we are approaching the next level of integration, essentially creating predictability via characterization from implementation all the way up to TLMs. However, the availability of models will determine when these approaches become mainstream!
