Posted by frank schirrmeister on June 3, 2008
Well, Grant Martin gave me a very kind endorsement recently in his blog “Taken for Granted”. Thanks! To give some background: besides the 300 days of sunshine here in California, Grant was actually involved in bringing me over here ten years ago. He was part of the Felix team at the time, and I was hired to help drive the initiative which eventually resulted in some cool products – VCC for Function Architecture Co-Design. Ten years too early, but the technology foundation was solid and the results were quite promising. For the EDA history enthusiasts, Bill Murray wrote very nice chapters on Cadence’s VCC and Synopsys Behavioral Compiler as trailblazers in “ESL Design and Verification – A Prescription for Electronic System Level Methodology”, which Grant co-authored.
But enough about history – I want to comment on Grant’s blog entry “Which came first … the model or the tool?” and some of its readers’ comments. Grant refers to model availability as a classical “chicken and egg” issue. I agree, but the situation is improving very rapidly, with three issues being important: (1) interface standardization, (2) model accuracy and (3) critical mass of models.
One of the readers, Gordon McGregor, comments:
“The models are never ready when you want them, usually because the companies are still designing the IP and either do their design first and create a model last, or wrote the model in some other language than the one you want to use, for a customer that uses a different interface or hooks up to a different sort of modeling tool.”
Gordon hits on the first key issue – the interfaces used for the IP. For virtual platforms we have now experienced a decade of proprietary interfaces for hooking models into the virtual platform. There were/are the three “V”s – Virtio, VaST and Virtutech – as well as AXYS (acquired by ARM) and CoWare. Together with “roll your own” solutions (especially in big companies with large CAD teams), a variety of different APIs were around, all of them meant to hook system-level models together.
Well, luckily this era is about to come to an end as we approach the official ratification of SystemC TLM-2.0. It marks an important step toward an interoperable infrastructure, in which fast virtual platforms for pre-silicon embedded software development and verification can be provided to software developers and verification engineers. Not only are the actual interconnect interfaces standardized; capabilities like backdoor memory accesses to maintain appropriate execution speed – formerly implemented with proprietary techniques – are now available in a standard, interoperable form (DMI – Direct Memory Interface). As a result, DMI-equipped processor models run equally fast in all OSCI TLM-2.0-compliant simulators.
So what does that mean for the users now? It’s all about the models! System-level simulation itself will commoditize and the tools, models and services around it are now the big differentiator. In our case the DesignWare® System-Level Library is already decoupled from our tools to create, integrate and deploy virtual platforms – and other vendors will follow. In addition we are already working full steam ahead on making the DesignWare System-Level Library fully TLM-2.0 compliant.
Taking an IP provider’s perspective, the second key issue relates to model accuracy. Given the effort to create and validate system-level models, we clearly see a polarization of system-level modeling styles.
This figure charts different modeling styles across a plane of performance and accuracy. Models can be provided at a variety of accuracy levels; for a processor model these range from application views not actually executing a processor model, through various timed models, to cycle-accurate models. Ideally users would like to see even more models in between, but for IP providers it is not economically feasible to provide a large number of models representing different levels of accuracy. Each model has to be developed and – more importantly – verified. And even in the cycle-accurate domain, the effort to develop and validate the models has sometimes reached the same order of magnitude as the implementation itself, while still not providing full accuracy. As a result, a polarization of models happens: at one end of the spectrum users find instruction-accurate models, while at the other end we see less and less fully cycle-accurate modeling going on. Where cycle accuracy is an absolute requirement, users rely on hardware prototypes, on models directly derived from the RTL code, or even on the simulated RTL code itself. Now all this links back to standardization again: the model abstractions indicated in the figure above should be sufficient to cover the most common use models.
The third issue relates to the number of models available to develop the modeling expertise, and it will likely lead to consolidation of IP vendors and modeling expertise. Recent market research from Semico shows that the average SoC today in 2008 already contains more than 33 IP blocks, of which about 50% are re-used. In 2012 we are looking at 72 blocks with a re-use rate of 60%. Well, similar trends will apply to the virtual platforms modeling the chip implementation. Vendors like us already provide the corresponding system-level models for most of the protocols – like USB, SATA, Ethernet, PCI and DDR – which we also provide as implementation IP to our customers. Part of the reason we can do that is that there is enough critical mass of models at the implementation level from which system-level models can be created. It allows us to re-use general modeling infrastructure, like our virtual I/O capabilities, to interface protocols like USB and Ethernet to the real world.
As Grant comments in his blog entry:
“Let’s hope that eventually the issue of model availability will become a secondary or tertiary issue in the growth of ESL modeling and usage.”
It’s all about the models, and with standardization enabling interoperability, the right interface abstractions available, and a critical mass of models being re-used, the situation looks much more favorable for model availability than it used to. I am looking forward to your thoughts and comments!
Patrick Sheridan is responsible for Synopsys' system-level solution for virtual prototyping. In addition to his responsibilities at Synopsys, from 2005 through 2011 he served as the Executive Director of the Open SystemC Initiative (now part of the Accellera Systems Initiative). Mr. Sheridan has 30 years of experience in the marketing and business development of high technology hardware and software products for Silicon Valley companies.
Malte Doerper is responsible for driving the software-oriented virtual prototyping business at Synopsys. Today he is based in Mountain View, California. Malte also spent over 7 years in Tokyo, Japan, where he led the customer-facing program management practice for the Synopsys system-level products. Malte has over 12 years of experience in all aspects of system-level design, spanning research, engineering, product management and business development. Malte joined Synopsys through the CoWare acquisition; before CoWare he worked as a researcher at the Institute for Integrated Signal Processing Systems at the Aachen University of Technology, Germany.
Tom De Schutter is responsible for driving the physical prototyping business at Synopsys. He joined Synopsys through the acquisition of CoWare where he was the product marketing manager for transaction-level models. Tom has over 10 years of experience in system-level design through different marketing and engineering roles. Before joining the marketing team he led the transaction-level modeling team at CoWare.