Posted by Achim Nohl on January 27, 2012
In this post, I would like to share some perspectives on the transaction-level models needed for the creation of virtual prototypes. Just recently, TLMCentral kicked off a contest seeking the “best” model for a mobile phone sensor device. What actually makes a “good” model? The reflexive answer to this question has historically been that the best model is the fastest and most accurate one. However, if you ask for the reasoning behind this, you probably won’t get a very precise answer. What does “accurate” mean? What is the definition of “fast”? Will an accurate and fast model be available early enough to start software development before hardware is available, and therefore meet time-to-market demands?
Simulation performance and accuracy (both timing and function) are certainly valid criteria, but there is much more to consider. The opportunity to create the “best” model arises when user requirements are correctly understood. When this happens, model creation can be greatly simplified and sped up, and the model delivers more value to the end user at a lower cost.
A good model, in my eyes, is one that serves the purpose of the user at the right point in time and at a reasonable cost. The user’s task has to be carefully reviewed before selecting or creating a TLM model. What does the user intend to accomplish with the model? Simply stating that a model will be used for software development does not provide the level of differentiation we are looking for. It does not constrain the requirements! We would again need a model that is able to do everything. While this is a challenge that TLM model providers often have to tackle, it is not something you want to deal with when enabling a specific class of end users in your project.
In the remainder of this post I want to provide some software perspectives on models. In the context of those perspectives, I would like to introduce the aspects that make a model valuable.
The mobile phone application developer uses models in the context of emulators serving as a backend to SDKs. This class of users demands that the models execute at speeds close to those of the real device. For interface IP models, it is necessary that the model can be connected to the real world. For example, the developer can use his or her real “iPhone” to drive the sensor model in the virtual prototype, and values being able to record and replay the stimuli for testing. This sounds complex, but it doesn’t need to be. An application developer will not look underneath the APIs he or she is using, which leaves great room for abstraction and simplification. In the context of Android, a sensor model may be a combination of simple middleware along with a generic interface model (e.g. a UART) that is used to send and receive sensor data. This is close to the way it is realized in the Qemu-based Android emulator.
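To make this concrete, here is a minimal, illustrative sketch (not Qemu’s actual protocol; class and method names are made up) of such an interface model: sensor readings from the host are forwarded over a simple line-based channel, and the stimuli are recorded so a test run can be replayed deterministically.

```python
# Illustrative sketch of a sensor "interface model": host-provided samples are
# forwarded over a generic serial-style channel (a UART stand-in), with
# record/replay so the same stimuli can be re-driven for repeatable tests.
import io
import json

class SensorChannel:
    """Generic channel carrying sensor readings as text lines."""
    def __init__(self, transport):
        self.transport = transport   # any file-like object stands in for a UART
        self.recording = []          # captured stimuli for later replay

    def inject(self, sensor, values):
        """Host side pushes a reading (e.g. from a real phone) into the model."""
        line = json.dumps({"sensor": sensor, "values": values})
        self.recording.append(line)
        self.transport.write(line + "\n")

    def replay(self, recording, transport):
        """Re-drive a captured stimulus stream for a repeatable test run."""
        for line in recording:
            transport.write(line + "\n")

# Usage: drive the model live, then replay the exact same stimuli.
live = io.StringIO()
ch = SensorChannel(live)
ch.inject("gyroscope", [0.1, 0.0, -0.2])
ch.inject("gyroscope", [0.2, 0.1, -0.1])

rerun = io.StringIO()
ch.replay(ch.recording, rerun)
assert rerun.getvalue() == live.getvalue()   # deterministic repetition
```

The point is that, at the application level, the “sensor” can be little more than a message pipe plus a recorder; no register or timing detail is needed.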
However, if the focus is middleware development rather than application development, this level of abstraction no longer holds. Still, we typically have fewer requirements on simulation performance, since the scenarios in which the model is used are mainly short, directed embedded tests. For compliance testing, for example, Android provides small sensor test applications for developers who are creating their own sensor middleware. The middleware interfaces with the driver of the OS. Thus, our model should be accurate up to the level of the driver’s interfaces. This implies that the coarse-grain states of the IP are modeled correctly, as those are typically reflected in how the driver is operated. E.g. the sensor is activated and configured to provide updated gyroscope values every 500ms via the file “/sys/class/sensors/gyroscope”.
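A hedged sketch of what “accurate up to the driver’s interfaces” can mean in practice: only the coarse-grain states the driver drives the device through (off, configured, active) and the update period are modeled; register bit fields and detailed timing are deliberately left out. All names here are illustrative, not an actual driver API.

```python
# Coarse-grain sensor model for middleware-level testing: it captures the
# state machine the driver exercises, not the register-level behavior.
class GyroModel:
    def __init__(self):
        self.state = "off"
        self.period_ms = None
        self.sample = (0.0, 0.0, 0.0)
        self.updates = 0

    def configure(self, period_ms):
        # The driver programs an update rate; the model only records the
        # coarse state change, not the register write that caused it.
        self.period_ms = period_ms
        self.state = "configured"

    def activate(self):
        assert self.state == "configured", "driver must configure first"
        self.state = "active"

    def advance(self, elapsed_ms):
        """Advance simulated time; produce one update per elapsed period."""
        if self.state == "active":
            self.updates += elapsed_ms // self.period_ms

    def read_value(self):
        # Stands in for reading a sysfs file such as the gyroscope node.
        return self.sample

gyro = GyroModel()
gyro.configure(period_ms=500)   # e.g. the 500 ms gyroscope update rate
gyro.activate()
gyro.advance(2000)
assert gyro.updates == 4        # 500 ms period -> 4 updates in 2 s
```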
To enable this, the model does not need to be register-accurate or register-complete in the first place. Only the subset of functionality the driver actually exercises needs to be realized, and functions such as clock, reset, and voltage control can be neglected to simplify the modeling. The middleware developer gets more value from the additional debug and tracing capabilities we can build into the model. A good model assists developers and informs them about the coarse-grain operation: why are things happening in the model, or why are they not happening? This level of visibility can be of great help in understanding the IP and debugging the embedded software.
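One way such debug value can be built in, sketched under the same illustrative assumptions as before, is to make the model self-explaining: it traces not only what it did, but why an operation had no effect, which is exactly the “why is nothing happening?” question a middleware developer faces.

```python
# Sketch: a coarse-grain model that explains itself. The trace messages are
# the model's debug aid, often worth more than extra timing accuracy.
class TracingSensor:
    def __init__(self):
        self.active = False
        self.trace = []

    def log(self, msg):
        self.trace.append(msg)

    def enable(self):
        self.active = True
        self.log("sensor activated")

    def read(self):
        if not self.active:
            # Instead of silently returning garbage, say why nothing happens.
            self.log("read ignored: sensor not activated (enable it first)")
            return None
        self.log("read ok: returning latest sample")
        return (0.0, 0.0, 0.0)

s = TracingSensor()
assert s.read() is None               # driver bug: read before enable
s.enable()
assert s.read() == (0.0, 0.0, 0.0)
assert "read ignored" in s.trace[0]   # the trace explains the silent failure
```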
Someone who develops drivers or brings up a device in the context of a board configuration needs more functional completeness and accuracy. Here, an accurate representation of the register map is almost mandatory. This also applies to power-management functions such as clock, reset, and voltage sensitivity, which are needed to bring up and test the Linux clock and voltage regulator frameworks. The model has to provide the proper response when configured through the register interface and also trigger interrupts. Therefore, a certain level of timing accuracy is required. However, this does not mean the data path of the model must be cycle-accurate or cycle-approximate. Correct operation of the IP in the system context requires responses from the model at the proper times; e.g. a timer will fire interrupts at periodic intervals. But again, model abstraction can simplify the task. For driver development or board bring-up, the most important aspect is the control plane. The data plane of a video accelerator IP, for example, can be abstracted to just show dummy data.
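The timer example can be sketched as follows. This is a minimal illustration, not a real IP: the register offsets and bit names are invented. The control plane (register writes, periodic interrupts at the right times) is modeled; there is no data path at all.

```python
# Illustrative register-mapped timer model for driver bring-up: writes to a
# hypothetical CTRL/PERIOD register pair start a timer that fires an
# interrupt callback once per elapsed period of simulated time.
CTRL, PERIOD, STATUS = 0x00, 0x04, 0x08   # hypothetical register offsets
CTRL_ENABLE = 0x1                         # hypothetical enable bit

class TimerModel:
    def __init__(self, irq_callback):
        self.regs = {CTRL: 0, PERIOD: 0, STATUS: 0}
        self.irq = irq_callback
        self.elapsed = 0

    def write(self, offset, value):
        self.regs[offset] = value

    def read(self, offset):
        return self.regs[offset]

    def advance(self, ms):
        """Fire one interrupt per elapsed period while the timer is enabled."""
        if not (self.regs[CTRL] & CTRL_ENABLE):
            return
        self.elapsed += ms
        while self.regs[PERIOD] > 0 and self.elapsed >= self.regs[PERIOD]:
            self.elapsed -= self.regs[PERIOD]
            self.regs[STATUS] |= 0x1       # interrupt-pending bit
            self.irq()

fired = []
t = TimerModel(lambda: fired.append(1))
t.write(PERIOD, 10)          # 10 ms period, programmed like a real driver would
t.write(CTRL, CTRL_ENABLE)   # start the timer
t.advance(35)
assert len(fired) == 3       # interrupts at 10, 20 and 30 ms
```

Note that the model is accurate enough for the driver to program it through the register map and observe correctly timed interrupts, yet contains no cycle-level detail.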
In summary, creating a model efficiently requires a careful review of the end user’s design task. Model development can be staged and aligned with the software development plan to incrementally enable new software development tasks. Understanding the requirements correctly allows models to be created at lower cost and with higher end-user value.
Patrick Sheridan is responsible for Synopsys' system-level solution for virtual prototyping. In addition to his responsibilities at Synopsys, from 2005 through 2011 he served as the Executive Director of the Open SystemC Initiative (now part of the Accellera Systems Initiative). Mr. Sheridan has 30 years of experience in the marketing and business development of high technology hardware and software products for Silicon Valley companies.
Malte Doerper is responsible for driving the software-oriented virtual prototyping business at Synopsys. Today he is based in Mountain View, California. Malte also spent over 7 years in Tokyo, Japan, where he led the customer-facing program management practice for the Synopsys system-level products. Malte has over 12 years of experience in all aspects of system-level design, spanning research, engineering, product management, and business development. Malte joined Synopsys through the CoWare acquisition; before CoWare he worked as a researcher at the Institute for Integrated Signal Processing Systems at the Aachen University of Technology, Germany.
Tom De Schutter is responsible for driving the physical prototyping business at Synopsys. He joined Synopsys through the acquisition of CoWare where he was the product marketing manager for transaction-level models. Tom has over 10 years of experience in system-level design through different marketing and engineering roles. Before joining the marketing team he led the transaction-level modeling team at CoWare.