Posted by frank schirrmeister on June 10, 2008
Monday kicked off here in Anaheim with OSCI announcing the completion of the SystemC TLM-2.0 API standard. We at Synopsys sent out a release in parallel documenting our support.
At lunch we had a panel called “Real World Advantages of the OSCI TLM-2.0 Standard for Model Interoperability and IP Reuse”. The moderator was Ron Wilson, and his technical style served the discussion very, very well. On the panel we had four user representatives: Tauseef Kazi from Qualcomm, Prakash Rashinkar from Rambus, Ken Tallo from Intel, and Adam Donlin from Xilinx.
The panel format worked very well. We had no PowerPoint. Instead, in front of about 180 attendees, Ron asked opening questions to give each of the panelists a chance to introduce their position. Ron opened the floor with the question of what impact TLM-2.0 will have on the panelists’ organizations and what its advantage is.
Tauseef Kazi from Qualcomm welcomed the TLM-2.0 release and mentioned that they “have been waiting” for it. The standard’s availability will make it easier for Qualcomm to convince their teams to use interoperable system-level models. They plan to use it right away.
Prakash Rashinkar from Rambus introduced their memory-related technology and emphasized Rambus’ need for performance models to go with it. In the past, customers demanded models using specific APIs. SystemC TLM-1.0 did not provide enough interoperability, which they hope to achieve now for their customers and themselves with TLM-2.0.
Ken Tallo from Intel confirmed that Intel is planning to use the standard. He sees two points of immediate impact. First, TLM-2.0 will help them manage complexity. With TLM-2.0 they can provide virtual platforms and enable their software developers to start pre-silicon software development earlier. The second issue is cost: going forward, Ken sees TLM-2.0 models representing the soft IP they already buy from vendors and develop internally.
Adam Donlin from Xilinx sees architecture analysis as the primary use model. They also see cost as an important issue, as these models are difficult to develop. He also indicated that standards like TLM-2.0 are necessary because they remove volatility.
Ron then continued his opening questions, asking what else will be needed and whether the memory-mapped approach in SystemC TLM-2.0 will be sufficient.
Adam replied that he is not so concerned, because the memory-mapped model meets their use model. The software-focused use model of TLM-2.0 for virtual platforms makes a lot of sense to him. Where he would like to see more leadership is a better definition of the specific use models for different TLM-2.0 users. In this context he asked for a performance analysis cookbook.
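For readers who haven't seen the API yet: TLM-2.0's memory-mapped style centers on a generic payload, a transaction object carrying a command, an address, and a data pointer, which an initiator passes to a target's blocking transport call. Here is a rough plain-C++ sketch of that idea (not actual SystemC/TLM-2.0 code; all names below are illustrative stand-ins, and the real standard adds sockets, timing annotation, and response status):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative stand-in for the TLM-2.0 generic payload: a memory-mapped
// transaction carries a command, an address, and a data pointer/length.
enum class Command { Read, Write };

struct Payload {
    Command  cmd;
    uint64_t address;
    uint8_t* data;
    unsigned length;
    bool     ok = false;   // stand-in for the TLM-2.0 response status
};

// A trivial memory-mapped target. In real TLM-2.0 an initiator would call
// b_transport(payload, delay) through a socket; here we call it directly.
class Memory {
    std::vector<uint8_t> storage;
public:
    explicit Memory(size_t size) : storage(size, 0) {}

    void b_transport(Payload& trans) {   // blocking-transport analogue
        if (trans.address + trans.length > storage.size())
            return;                      // out of range: leave trans.ok false
        if (trans.cmd == Command::Write)
            std::memcpy(&storage[trans.address], trans.data, trans.length);
        else
            std::memcpy(trans.data, &storage[trans.address], trans.length);
        trans.ok = true;
    }
};
```

The point of the memory-mapped convention is exactly the interoperability the panel discussed: any initiator that can fill in a command, address, and data pointer can talk to any target that decodes them, without model-specific APIs.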
Ken looks at TLM-2.0 as a layered framework. Depending on which users he talks to, they need different accuracy levels. Specifically, he differentiated between System Architects and Micro Architects, the latter requiring more detail.
Prakash is encouraged that this is a very good start, given that older models had lots of performance issues. Depending on the customer, they need different accuracy levels, and TLM-2.0 provides the appropriate infrastructure for that.
Tauseef again pointed out the focus of TLM-2.0 on software development. At Qualcomm they also need some notion of cycle accuracy in their models for architecture analysis. For some of the functionality in special protocols he expects extensions to be defined. The debug interface is currently missing, and extensions to the two-phase approach will be necessary.
Ron continued with his final opening question: whether TLM-2.0 is really a platform for software development, or whether other things will be developed within the infrastructure as well.
Ken Tallo pointed out that we are just scratching the surface of the concept of ESL. In a panel two years ago he had predicted that there would be more software companies at this DAC, and while there are certainly more vendors, he sees the industry converging around TLM-2.0. The complexity of the hardware is getting more and more difficult to manage, and for pre-silicon software development there is a lot to be gained from what we have with TLM-2.0 now. Further refinement can happen within this platform.
Prakash added that their customers have been quite happy with what they are getting from SystemC TLM-2.0. Prior to using SystemC they were using proprietary C++.
Finally, Tauseef described that they had different SystemC model flavors in the past. The TLM-2.0 specification points out early on that it is best suited for more abstract development. He would like to see more detailed cycle-accurate requirements in the future, to be able to quantify it side by side with RTL. In reply, Adam chimed in that at Xilinx they had started with cycle-accurate models because they were the least contentious. But there is just not enough simulation horsepower to answer all the questions at this level of abstraction, as they showed in benchmarks at NASCUG. The communication interfaces provided by TLM-2.0 are nice, and what he would also like to see going forward is real tool interoperability.
Then we were up for questions from the audience. The first question was about model validation and what conformance validation techniques the industry might need going forward. The panel unanimously agreed that verification of models is a key issue, and that some compliance testing may even be required. This may be a business opportunity! When asked what the compelling event would be for vendors to “retro-fit” their models to support SystemC TLM-2.0, Adam came back with my favorite response of the day: “It’s irresistible – think about the thousands of designers who will come out of school being trained on SystemC TLM-2.0”. Go Adam! The panel widely agreed that TLM-2.0 is definitely worthy of widespread adoption and that users should not wait for future versions.
There was some open discussion about how to add more accuracy to the models in TLM-2.0. Ken came back with the comment that the “software guys are ecstatic about getting anything prior to silicon”. They do not like hardware, and he sees it as likely that virtual platforms will still be used after hardware is available. He agreed that there are some applications for which timing accuracy is required, but questioned whether they can provide it. In many cases gross cycle accuracy is enough. Adam commented that they are trying to cross-calibrate with as many places as they can. The question to him is how well this will be supported in the coding styles and programming models. He definitely sees a three-phase protocol being required in the future.
Mark Burton from GreenSoCs, who had earlier that day effectively retired their “GreenBus” in favor of supporting TLM-2.0, commented from the audience about the release of TLM-2.0. He stated that we are perhaps “watching the birth of ESL and perhaps are not making enough fanfare about it.” He added that the results “effectively have pleased everybody in the community” and that the download kit contains “fantastic improvements compared to the draft release”. He asked the panel for their recommendation on how to adopt TLM-2.0: should it be the hardware or the software engineer who writes the models?
While Ken was very clear that the hardware engineer should write the models, Prakash mentioned that at Rambus they have designers with backgrounds in both hardware and software. Adam added that the models should come from vendors, which in turn brought up a question on customer interaction. Prakash mentioned that models can be used for early specification of behavior and customer interaction; they sometimes even allow customers to participate in early specification. For Tauseef and Ken the majority of customers are in-house users, and Adam explained that for Xilinx the use of models is mixed between internal and external users.
In summary this panel was a great snapshot of what users think. Yes, there will be enhancements in some areas like debug interfaces, but everybody agreed that SystemC TLM-2.0 is a giant step forward and ready for adoption.
Patrick Sheridan is responsible for Synopsys' system-level solution for virtual prototyping. In addition to his responsibilities at Synopsys, from 2005 through 2011 he served as the Executive Director of the Open SystemC Initiative (now part of the Accellera Systems Initiative). Mr. Sheridan has 30 years of experience in the marketing and business development of high technology hardware and software products for Silicon Valley companies.
Malte Doerper is responsible for driving the software-oriented virtual prototyping business at Synopsys. Today he is based in Mountain View, California. Malte also spent over 7 years in Tokyo, Japan, where he led the customer-facing program management practice for the Synopsys system-level products. Malte has over 12 years of experience in all aspects of system-level design, ranging from research and engineering to product management and business development. Malte joined Synopsys through the CoWare acquisition; before CoWare he worked as a researcher at the Institute for Integrated Signal Processing Systems at the Aachen University of Technology, Germany.
Tom De Schutter is responsible for driving the physical prototyping business at Synopsys. He joined Synopsys through the acquisition of CoWare where he was the product marketing manager for transaction-level models. Tom has over 10 years of experience in system-level design through different marketing and engineering roles. Before joining the marketing team he led the transaction-level modeling team at CoWare.