A View from the Top: A Virtual Prototyping Blog
Posted by patsheridan on February 29th, 2016
Power management. If you’re responsible for the design of low-power, energy-efficient electronic systems and SoCs, you need to have a power management strategy and you need to know as soon as possible if it will meet the demands of your product and its target applications.
For example, dynamic voltage and frequency scaling (DVFS) is a power management strategy that adjusts the frequency and voltage operating points of the system based on the application activity. System performance and dynamic power dissipation are managed to achieve the best balance. Today’s blog title, on the other hand, is a pun based on Run Fast and Stop, a strategy used when it’s more energy efficient, due to leakage power considerations at 65nm and below, to complete the task at hand as soon as possible. (You may feel the same way about reading blogs).
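To make that trade-off concrete, here is a toy first-order energy model in Python. Everything in it is illustrative (the `task_energy` helper and the capacitance, voltage and leakage numbers are invented, not data from any real SoC): dynamic power scales roughly as C·V²·f, leakage power is paid for as long as the chip stays on, so which strategy wins depends on how leaky the process is.

```python
# First-order energy model: dynamic power scales as C*V^2*f, while
# leakage power is burned for as long as the chip stays powered on.
def task_energy(cycles, freq_hz, volt, cap_f, leak_w):
    """Energy (joules) to run a task of `cycles` at one operating point."""
    t = cycles / freq_hz                      # execution time in seconds
    p_dyn = cap_f * volt**2 * freq_hz         # dynamic (switching) power
    return (p_dyn + leak_w) * t

# Hypothetical operating points: DVFS runs slow at a reduced voltage;
# "Run Fast and Stop" finishes quickly at nominal V/f, then power-gates.
CYCLES = 1e9
CAP = 1e-9            # effective switched capacitance (farads), illustrative
slow = task_energy(CYCLES, freq_hz=0.5e9, volt=0.8, cap_f=CAP, leak_w=0.2)
fast = task_energy(CYCLES, freq_hz=1.0e9, volt=1.0, cap_f=CAP, leak_w=0.2)

# Same comparison on a (hypothetical) leakier process node.
slow_leaky = task_energy(CYCLES, 0.5e9, 0.8, CAP, leak_w=2.0)
fast_leaky = task_energy(CYCLES, 1.0e9, 1.0, CAP, leak_w=2.0)
```

With low leakage, the scaled-down operating point consumes less energy; crank leakage up and racing to idle wins, which is exactly the 65nm-and-below effect mentioned above.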
Although the benefits of power management are most obvious for battery-powered mobile consumer products, any device should consume only as much power as is absolutely required to perform a given task. The increasing number of high-frequency CPUs, ever-larger LCD screens, cameras, and a multitude of radios and sensors are driving total power consumption beyond what users will accept. Only proper management of the power states within these devices will minimize total power consumption.
How is virtual prototyping helping? The release of the new IEEE 1801-2015 UPF 3.0 standard is a big step forward. With UPF 3.0, system-level power modeling and analysis using power-aware virtual prototypes is enabling architects and system designers to define systems that yield the greatest benefit in terms of energy efficiency, months earlier, before hardware is available. The figure below illustrates how component level power models are added to a virtual prototype in the industry’s first architecture analysis tool to support UPF 3.0:
Figure 1: Adding UPF 3.0 System Level IP Power Models to a Synopsys Virtual Prototype
Each system-level IP power model is an abstraction of the power behavior of a component, providing a specification of its power states and the associated power consumption data for each state. Models based on the new UPF 3.0 standard enable interoperability across virtual prototyping use cases and vendor environments. These abstracted power models enable early analysis of system-level power budgets and can be refined as more specific implementation information becomes available.
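As a rough sketch of what such a model can look like, the fragment below uses UPF 3.0-style commands (`begin_power_model`, `add_power_state` with power expressions). All names and numbers are invented for illustration, and the exact command syntax should be checked against the IEEE 1801-2015 LRM rather than taken from this sketch:

```tcl
# Hypothetical system-level power model for a CPU component.
# Names, conditions and power values are illustrative only.
begin_power_model cpu_power_model
  create_power_domain PD_CPU -include_scope
  # Enumerate the component's power states: the logic condition that
  # selects each state and the power consumed while in it.
  add_power_state PD_CPU \
    -state {RUN  -logic_expr {clk_en == 1} -power_expr {450 mW}} \
    -state {IDLE -logic_expr {clk_en == 0} -power_expr {30 mW}} \
    -state {OFF  -logic_expr {pwr_en == 0} -power_expr {0 mW}}
end_power_model

# Bind the model to a component instance in the virtual prototype.
apply_power_model cpu_power_model -elements {soc/cpu0}
```

The point of the abstraction is visible even in this sketch: the state list and per-state power data can start as architect's estimates and be refined as implementation data becomes available.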
Figure 2: Example UPF 3.0 System Level IP Power Model
Software plays a significant role in device power management at runtime, as it controls and drives the hardware components which actually consume power. Once the power management policies are defined and optimized by the architect, virtual prototypes for software development can accelerate the development and debug of high quality power management software at multiple levels – across the drivers, operating system, and up through to the application layer. And just like other software development tasks using virtual prototypes, the software team can work in parallel to the hardware team, enabling software bring-up to support power management features as soon as possible.
So what’s the next step for your power management strategy? If you want to hear more industry “Talk” (“Fast” or otherwise) on how virtual prototypes and the new IEEE 1801-2015 UPF 3.0 standard are enabling system-level power analysis earlier in the development cycle, just “Stop by” DVCon on Wednesday March 02, 8:30 a.m. – 9:30 a.m. at the Doubletree Hotel in San Jose, for the Redefining ESL panel discussion moderated by Semiconductor Engineering’s Brian Bailey.
Posted in Uncategorized | Comments Off
Posted by Tom De Schutter on February 16th, 2016
Wikipedia describes ADAS (advanced driver assistance systems) as systems developed to automate, adapt, and enhance vehicle systems for safety and better driving. Safety features are designed to avoid collisions and accidents by offering technologies that alert the driver to potential problems, or by implementing safeguards and taking over control of the vehicle. Adaptive features may automate lighting, provide adaptive cruise control, automate braking, incorporate GPS/traffic warnings, connect to smartphones, alert the driver to other cars or dangers, keep the driver in the correct lane, or show what is in blind spots.
It is a fast-growing industry that promises to save many lives. The current state of the technology and expectations for its future led Volvo to outline a vision that no one would be killed or injured in a new Volvo by 2020.
Since ADAS technology is built upon much of the same electronics and software foundation found in mobile and consumer devices, it is no surprise that Silicon Valley has become the epicenter of ADAS development. Google is known for its self-driving cars, with a stated mission to enable everyone to get around easily and safely, regardless of their ability to drive. Living and working in the center of Silicon Valley myself, I have witnessed Google’s self-driving cars multiple times. While you can definitely see a lot of the Google self-driving car prototypes, it seems like they won’t be available for purchase for a while. In the meantime, however, multiple car companies have been rolling out “intermediate” autonomous driver assistance systems. Tesla released its autopilot functionality in October 2015 and ever since, YouTube has been flooded with footage of Tesla owners trying out the technology for themselves.
I have experienced the advantages of ADAS technology with my 2015 Subaru Outback. While not as spectacular as the Tesla, Subaru's EyeSight technology is absolutely great. Two cameras, one on the left and one on the right of my rearview mirror, alert me to objects that are too close to my car, but only if I am not already braking, so it is much less obtrusive than what I have witnessed in other vehicles. Plus, the adaptive cruise control is very helpful, especially during long drives or in heavy traffic on the highway. The system allows me to rest my feet while the car does the braking and accelerating.
So how do all these different ADAS systems get developed and tested? Similar to many software-driven electronics products, the use of prototyping is at the heart of getting the system right. Both virtual and physical prototyping are used by semiconductor companies, tier-ones and OEMs. Safety is an integral element to these systems and carries with it massive testing requirements. Virtual prototyping provides a target for early software development and the ability to perform fault injection testing. Physical (or FPGA-based) prototyping enables software development and system validation in context of the real world interfaces.
Visit the Synopsys booth (4-360) at Embedded World from Feb. 23-25 to see our prototyping solutions and how they help develop and test ADAS systems. Demos of our VDK for NXP’s S32V200 ADAS SoC and HAPS-80 physical prototyping system running an embedded vision processor to detect speed signs will be shown.
Posted by Tom De Schutter on December 23rd, 2015
There is something compelling about arriving at the end of the year and reviewing what happened during the year. In principle nothing is really different and a date is just a date, but we humans created this sense of time through well-defined boundaries of hours, days, months and years and a year-end boundary is an especially big deal. At the end of the year, we like to reflect upon the past year and make resolutions for the new year.
It has been a busy year for virtual prototyping. The Better Software. Faster! Best Practices in Virtual Prototyping book saw another round of printing thanks to its popularity (well, at least within the semiconductor market) and added a new case study by Kyocera Document Solutions. Chinese users embraced the value of virtual prototyping to achieve the fastest time to quality software. And hybrid prototyping and emulation solutions saw wider adoption in 2015.
As it is common to have top 5/10/100 lists at the end of the year (top 100 songs of the year, top 10 movies of the year…), I decided to do the same for virtual prototyping. And luckily enough we have just the thing through our Better Software. Faster! ebook download survey, which every person who downloads the book answers.
Drum roll for the results…
“What are your biggest software challenges?”
1. Software complexity
2. Late availability of hardware (very close second, almost made it to the top spot)
3. Changing requirements
4. Hardware complexity
5. Limited debug visibility
“What is the most important virtual prototyping benefit?”
1. Earlier software availability
2. Better software quality
3. Software bring-up and debug productivity gain
4. Tighter coordination between hardware and software team
“For which software stacks do you use virtual prototypes?”
5. Boot code
“What is the typical length of a project in your company?”
1. 7-12 months
2. 3-6 months
3. 13-18 months
4. 19-24 months
5. More than 4 years
The results highlight a couple of trends that we see in the industry—software complexity is growing, and project length is shortening. These two don’t go well together and explain why late hardware availability is viewed as such a problem for software developers. So more and more software developers are looking at virtual prototyping to pull in software development and achieve higher software quality, especially for hardware-dependent software stacks.
I expect software content and complexity to continue to grow in the coming years, and as such, there will be a growing need for prototyping to accelerate software development, hardware-software integration and system validation.
But for now, let’s take some time to relax, enjoy the final days of this year and celebrate the new year. Happy Holidays everyone!
Posted by Tom De Schutter on December 4th, 2015
End-to-end prototyping to the rescue
In this blog I have been discussing the increasing impact of software on many aspects of our lives. In the past we mostly interacted with a software-driven device when we sat in front of a desktop computer. Now we carry a device with us that is as powerful as a computer. Our cars track our moves and try to preempt an accident by warning us about rapidly approaching obstacles, or prevent our tires from slipping on wet or snow-covered roads. In our homes, the thermostat reduces the temperature when it 'notices' that there is no longer movement in the house.
It should be no surprise to readers of this blog that I feel strongly about the value of prototyping to pull in the development of software and enable hardware–software co-design. Hardware and software no longer can be seen as two independent deliverables of the same end-product. The software needs to be developed in context of the hardware, and the hardware needs to be designed with the software in mind.
I have dedicated multiple blogs about specific types of prototyping methods to address certain design challenges. While each prototyping method offers a lot of value on its own, it is the combination of prototyping solutions that helps maximize the shift left of the design development. The result is much better products that can be delivered faster.
This is where an end-to-end prototyping strategy really pays off. As part of an electronics device design, you need to take care of the architecture design, software development and testing, hardware/software integration and system validation. All of these tasks rely heavily on software scenarios and hence all of them benefit from deploying prototyping.
In a recent webinar I talked with Chris Rommel, executive vice president of IoT & embedded technology at VDC Research, about some of the challenges originating from the ever-growing software content and complexity and the value of end-to-end prototyping as a solution to address these challenges. Chris explained how VDC’s research shows that software is now the largest single center of investment for electronics products and engineering challenges such as application complexity, technical obstacles and changes in the specifications are causing project delays. VDC’s research also showed a growing adoption of prototyping methods to address these schedule delays. VDC indicated that the trend toward end-to-end prototyping is driven by a need to get the software right in context of the hardware, and perform hardware-software co-design early on to address the growing complexity of software.
In the second half of the webinar, I explain Synopsys' view on end-to-end prototyping. The main goals are:
• Get the SoC architecture right.
• Achieve the shortest time to quality software.
• Reduce the schedule risk with pre-silicon software bring-up.
• Validate the hardware and software early on in context of real world conditions.
End-to-end prototyping is more than the sum of the individual prototyping solutions put together. Leveraging models, software and real-world I/O interfaces across the development spectrum creates unique value links that boost overall productivity: not just faster time-to-market, but a better product design in which software and hardware work together in harmony rather than merely being layered on top of each other. To touch on a couple of these value links:
• By developing software early using virtual prototypes, SoC-ready software can be used to pull in hardware-software integration, enabling an early feedback loop for both hardware and software.
• Virtual prototyping power models used for early power/performance analysis of the SoC can be leveraged for development and testing of the power management software.
• Virtual, physical or hybrid prototypes running software can be used to capture workloads from a current project and leveraged as benchmarks for designing the next generation SoC architecture.
• Hybrid prototypes, combining virtual and physical prototypes, leverage the virtual prototype to run the OS on the next-generation application subsystem and develop IP-specific software in context of real world conditions for interface IP mapped onto the FPGA.
A more elaborate overview of end-to-end prototyping can be found in this white paper.
The electronics revolution can both be promising and daunting. If you are or want to be part of it, I recommend you to watch the webinar and/or read the white paper to see how end-to-end prototyping can help you embrace the growing influence of software.
Posted by Tom De Schutter on November 1st, 2015
Let’s examine the first part of the title of this blog. It is stated as a given. But is it true that you really can’t walk straight when blindfolded? That is what my children and I set out to investigate one sunny afternoon in October (yes we live in California).
We looked for a nice open field with little to no surrounding sound, so that you cannot use the sound to set your bearing. We found one close by at a school soccer field surrounded by an empty school on a Sunday, and a farm, which was also deserted on a Sunday afternoon. My daughter started first, using the eye patches that you receive in United Business class (being a frequent flyer has the advantage of being upgraded sometimes) as a blindfold. And indeed she had a hard time walking straight and ended her first walk in the shape of a nice quarter circle.
My son’s first walk turned out quite similar. Then it was my turn. After four and a half minutes of walking and convincing myself that I was going reasonably straight, this is the walking pattern that the GPS in my pocket, also known as my iPhone, plotted:
As it turns out, you cannot walk straight when blindfolded. The good news is that in almost all situations you don't have to, which explains why we can have straight sidewalks.
When developing software, however, it is similarly hard to get a good view of the entire system. Most of the pieces are black boxes to software developers, who have to rely on their experience and high-level knowledge to gain visibility into the software execution of the system and debug issues.
Rather than just seeing the world through the MMU of a particular CPU, virtual prototypes using SystemC models offer developers a bird’s-eye view on the whole system. This means each and every CPU and peripheral register can be inspected. Even hardware signals between peripherals can be probed and watched, such as an interrupt line coming from the timer to the interrupt controller.
Next to that, the programs under execution can be observed or multiple programs on multiple cores can be debugged synchronously. Thus, when advancing the time, the developer will see multiple program code windows updating the contents based on the advancement of the CPU’s program counter. At the same time the developer can observe the interaction with the hardware by being able to look at the platform memories and registers.
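To illustrate the idea, here is a toy Python model, invented for this blog and not the API of any real virtual prototyping tool: in a simulation, every register and every inter-peripheral signal is an ordinary object you can probe.

```python
# Toy system model: a timer whose registers and interrupt "wire" are
# all plain, inspectable objects (illustrative only, not a real API).
class InterruptController:
    def __init__(self):
        self.pending = []                 # interrupt sources raised so far
    def raise_irq(self, source):
        self.pending.append(source)

class Timer:
    def __init__(self, irq_line):
        self.regs = {"LOAD": 0, "COUNT": 0, "CTRL": 0}
        self.irq_line = irq_line          # "wire" to the interrupt controller

    def tick(self):
        if self.regs["CTRL"] & 1:         # enable bit set?
            self.regs["COUNT"] -= 1
            if self.regs["COUNT"] <= 0:
                self.irq_line.raise_irq("TIMER")   # observable hardware event

irq_ctrl = InterruptController()
timer = Timer(irq_ctrl)
timer.regs["LOAD"] = timer.regs["COUNT"] = 3
timer.regs["CTRL"] = 1                    # software "writes" the enable bit
for _ in range(3):
    timer.tick()
# Bird's-eye view: probe any register or signal, no instrumentation needed.
```

In a real virtual prototype the same principle applies: because the hardware is a model, nothing in it is opaque to the debugger.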
Using a debugger, a software developer can validate the state of the system. But once he or she has figured out that the system is not behaving according to the specification, the first question that arises is, “What is going wrong?” Once there is clarity about the “What”, the next thing to figure out will be, “Why are things going wrong?”
“What is going wrong” can be a tough question to answer, however. If you are debugging an OS kernel, it becomes hard to understand what is going wrong from a system standpoint. The OS kernel may get stuck somewhere during the boot. Using a debugger you can observe a multitude of processes that have been scheduled and executed “OK” by the kernel. But if you were able to step away from debugging the implementation and the current situation and get a more global view of the system behavior, also in the past, it may become clear that the OS kernel is trapped in a particular kernel thread waiting for an interrupt.
To make this debug/analysis process more efficient, virtual prototyping tools can provide an OS aware system level analysis framework. As an example you can access a trace of the operating system in the context of one or multiple CPUs over time. This will let you know which processes are completed and in which order this is done. Once you figure out a suspicious situation, such as a data abort during the boot, you can get to a function level trace to get clarity about which function or even which instruction is causing the problem.
In order to analyze the integration between multiple software stacks, you can observe the shared memory communication between multiple CPUs. OS specific extensions to the framework allow you to visualize internal kernel data in a meaningful way such as the kernel debug messages, even before the console drivers are working.
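The kind of OS-aware trace analysis described above can be sketched in a few lines of Python (the trace format and thread names are invented for illustration): walk the scheduler trace and flag any thread whose last recorded event left it waiting, such as a kernel thread stuck on an interrupt that never fired.

```python
# Illustrative sketch, not a real tool's trace format: given an OS-aware
# scheduler trace, find threads that went to sleep and never woke up.
def stuck_threads(trace):
    last_event = {}
    for _ts, thread, event in trace:     # events: RUN / WAIT / EXIT
        last_event[thread] = event       # keep each thread's final state
    return sorted(t for t, e in last_event.items() if e == "WAIT")

trace = [
    (10, "init",      "RUN"),
    (12, "kworker/0", "RUN"),
    (15, "kworker/0", "WAIT"),   # waits for an IRQ that never fires
    (20, "init",      "RUN"),
    (25, "init",      "EXIT"),
]
```

Calling `stuck_threads(trace)` singles out `kworker/0`: exactly the global, over-time view that is so hard to reconstruct from single-step debugging.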
The presented analysis capabilities work without instrumentation or modification of the embedded software. Probes in the virtual prototype models provide the necessary analysis data to the analysis framework.
So if you can get this level of visibility, why would you ever settle with developing software blindfolded?
Posted by Tom De Schutter on September 27th, 2015
In this month’s blog I would like to focus on a recent prototyping solution announcement from Synopsys. On September 16, Synopsys announced the new HAPS-80 FPGA-based prototyping systems, part of Synopsys’ end-to-end prototyping solution strategy.
As software now drives the main capabilities of embedded devices, it has taken the spotlight in SoC design, turning a once hardware-centric electronics supply chain upside down. To cope with this new reality, companies are embracing both virtual and physical prototyping technologies.
Physical prototyping, also known as FPGA-based prototyping, is an important piece of an end-to-end prototyping strategy and has long been used by electronics companies as a way to accelerate software development, hardware and software integration and system validation. Its ability to connect to a real-world environment and perform realistic scenarios is a key capability. The ability to perform system validation helps to ensure standards compliance and determine if performance goals can be met. Most importantly the connection with real world I/O can trigger behavior inside the SoC hardware or software that would have otherwise been overlooked.
With growing SoC complexity and time-to-market pressure, there is a need to accelerate prototype bring-up while achieving higher system performance and better debug. This is where the new HAPS-80 FPGA-based prototyping solution comes into play. By integrating HAPS hardware with ProtoCompiler software and leveraging Xilinx's Virtex UltraScale FPGA, HAPS-80 supports configurations of up to 1.6 billion ASIC gates and reduces time to first prototype to less than two weeks.
Highlights of the new HAPS-80 Series:
- HAPS-80 systems with ProtoCompiler software deliver up to 100 MHz multi-FPGA performance and new automated high-speed pin-multiplexing
- ProtoCompiler software automates partitioning to reduce time to first prototype to less than two weeks on average
- HAPS-80 enterprise configurations support up to 1.6 billion ASIC gates based on the Xilinx Virtex UltraScale FPGA and enable remote usage and multi-design mode for concurrent design execution
- Built-in debug capabilities are automatically inserted for greater debugging efficiency and visibility, enabling the capture of thousands of RTL signals
- Unified Compile with VCS simulation and Unified Debug with Verdi, part of Synopsys' Verification Continuum platform, ease migration between simulation, emulation and prototyping, saving up to months of design and verification bring-up time
More information about HAPS-80 can be found at: https://www.synopsys.com/COMPANY/PRESSROOM/Pages/haps80-news-release.aspx
I would also like to point to a blog from my prototyping colleague Michael Posner: https://blogs.synopsys.com/breakingthethreelaws/2015/09/introducing-haps-80-with-fully-integrated-protocompiler-shifting-the-market-to-an-integrated-prototyping-solution/
In the coming months you will hear more from us about the value of deploying an end-to-end prototyping solution leveraging both virtual and physical prototyping for early architecture exploration, software development and test, hardware-software integration and system validation. The new HAPS-80 FPGA-based prototyping solution is an important step forward to address today’s prototyping needs for a software-driven SoC environment.
Posted by Tom De Schutter on August 7th, 2015
Lately, my children and I have been closely following a new show on ABC called "BattleBots". The concept is as simple as it is cool: a massive bulletproof arena where two remote-controlled robots battle it out until one is knocked out or time runs out (and a jury decides the winner). The battles are all about making physical contact with the other robot, either to deal damage directly or to push it into the hazards of the arena.
With names like Stinger, Captain Shrederator, Ice Wave and Tombstone, it is clear that the show is all about intimidation and carrying the biggest “stick”. That “stick” can be a rotating blade, flipper or flame thrower. So no wonder that we are on the edge of our seats when a battle is going on. Or like my wife says: Men will always stay boys.
During the battles, the perception of each robot's abilities often changes quickly. It turns out that some robot designs that looked great on paper, and even after the robot was built, simply don't match up against other robots. Some cases are harder to predict. Who would have thought that a small pusher bot could beat a massive-looking, reptile-like spinner bot? In other cases, total destruction is pretty predictable. It just isn't a good idea for a robot made mostly of plastic to go up against another robot with a massive blade spinning at more than 300 miles per hour.
So it begs the question: How can you build something and make sure that it will actually perform the way you envisioned?
While less spectacular to watch, the financial damage from an underperforming smartphone can be bigger than the damage robots suffer in BattleBots. If, as a semiconductor company, the SoC in your new smartphone performs weakly in a benchmark like AnTuTu, it will be hard to make any money on that phone.
With so much at stake, how do you minimize your risks? If you can reduce your time-to-market you have a better chance of being the first to market to reach a certain benchmark score based on the latest processor and implementation technology. It also would help if, early on, you can optimize your design to perform as well as possible against the benchmark.
Virtual prototyping helps you to do both. In a previous blog post I talked about the value of virtual prototyping to parallelize software development alongside the hardware schedule and shift left the entire product schedule. In context of winning the battle of outperforming key benchmarks against competing products, I would like to zoom in on the value of virtual prototyping for early power and performance optimization.
While spreadsheets are good for aggregating data, static spreadsheet calculations are not accurate enough to estimate performance and power and make design decisions. Dynamic simulation is needed. Traditional RTL simulation is too slow and lacks the configurability and visibility to analyze performance. In addition, the RTL may simply not be available. Risks include over-design, under-design, cost increases, schedule delays and re-spins. All of this might lead to being late with a new smartphone SoC or underperforming in a key benchmark.
With SystemC-based virtual prototypes you can capture, simulate and analyze the system-level performance and power of multicore systems early on in the design cycle. This enables system designers to explore and optimize the hardware-software partitioning and the configuration of the SoC infrastructure, specifically the global interconnect and memory subsystem, to achieve the right system performance, power, and cost.
By doing this early on, it is much easier to tune the SoC for specific workloads and scenarios, hence preparing the design for realistic usage. So rather than hoping for the best, you actually design to get the best.
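A toy example of why dynamic simulation matters (plain Python with invented numbers, not a real SystemC model): two bus masters sharing one memory port. A spreadsheet that just adds up bandwidth needs predicts that every request costs one service time; simulating the interleaved traffic exposes the queuing latency that contention actually adds.

```python
# Toy transaction-level model: two masters share one memory port.
# A static spreadsheet sums bandwidth and stops there; simulating the
# interleaved requests reveals the queuing delay caused by contention.
import heapq

def simulate(masters, service_time):
    """Each master issues `count` requests every `interval`; one shared port."""
    events = []                           # (issue_time, master_id)
    for mid, (interval, count) in enumerate(masters):
        for i in range(count):
            heapq.heappush(events, (i * interval, mid))
    port_free, total_latency, n = 0.0, 0.0, 0
    while events:
        t, mid = heapq.heappop(events)
        start = max(t, port_free)         # wait if the port is busy
        port_free = start + service_time
        total_latency += port_free - t    # issue-to-completion latency
        n += 1
    return total_latency / n              # average request latency

# Spreadsheet view: a master alone sees exactly the service time.
alone = simulate([(10, 100)], service_time=4)
# Simulated view: add a second master and contention inflates latency.
shared = simulate([(10, 100), (10, 100)], service_time=4)
```

The same principle, applied with realistic traffic generators and interconnect models, is what lets architects size the global interconnect and memory subsystem before RTL exists.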
Posted by Tom De Schutter on June 25th, 2015
A couple of weeks ago I was with a virtual prototyping user who described the benefits his company has seen from deploying virtual prototyping for early software development. The use of virtual prototyping has been rolled out progressively to more projects over the years, making it possible for the company to measure its impact on the software availability schedule and the impact has been dramatic. He stated that the software team’s stance on virtual prototyping has changed from “we don’t want it” to “we will tolerate it” to “we want more”.
In fact the shift left of the software development schedule and the availability of quality software through the use of virtual prototyping has been so dramatic that software availability, in a lot of cases, is no longer the long pole in the tent when it comes to releasing a new SoC.
Given the increased importance of software-driven use cases and the explosion of software that has to run on an SoC, verifying the hardware in the context of the software has become the critical path.
However, this doesn’t mean that shift left of software development is becoming less important in favor of shift left of the hardware design and verification. In fact, it means that the software needs to be available even earlier as it is now a key factor in verifying the hardware. The result is a double shift left of hardware and software.
It also expands the role of virtual prototypes. Thus far the emphasis has been mostly on enabling early software development to ensure software availability alongside the hardware. Now virtual prototypes can help in accelerating software availability to drive the hardware verification and the hardware/software integration. Plus a hybrid setup of virtual prototype and hardware emulator can provide much faster execution of the software stacks as part of the software-driven verification step.
As discussed in a previous blog, hybrid emulation mixes the speed of a virtual prototype to run the application subsystem with the ability to verify the rest of the SoC mapped onto an emulator.
With the continuous growth of software content and complexity, and increasing time to market pressure spilling over from mobile to all other electronics driven markets, including automotive, shift left is more important than ever. In fact the shift left concept has become a universal term across software and hardware. This means that an investment in virtual prototypes pays off even more so than before. Since they enable earlier software availability and accelerate software driven hardware verification (in context of a hybrid emulation setup), their use has become a no-brainer. At least that is what users are telling me …
Posted by Tom De Schutter on May 31st, 2015
I recently talked to an engineering manager responsible for system validation at a major automotive company. The topic was the continuous growth of software content and how to reach the right software quality. He explained that for the part he is responsible for, most software is created by his suppliers. But because the carmaker is ultimately held responsible for any issue with the car, he has to define rigorous requirements that suppliers are required to meet. This applies to both the hardware and software that is delivered to him.
The suppliers need to provide a test report with every deliverable. The carmaker reviews the test report and asks clarifying questions to ensure that all requirements are met, plus his team performs additional tests, focusing on corner cases that they have learned about from past experiences.
While testing in general is something that most engineers dread, they understand that it is an important part of the overall development process, so a lot of time and effort goes into writing and running tests. But the problem is that not every piece of hardware or software is easy to test, especially when it comes to interfacing with people or with other sub-systems. In an effort to prevent issues from occurring once the end product is used in its target environment and subjected to input from people, engineers try to capture the most realistic scenarios possible. That typically means having the product actually interact with people or with other external stimuli, which makes for some important real-life testing.
But here’s the catch: People get lazy. Or maybe more accurately, people quickly get bored with doing the same thing over and over again. This is how the system validation manager at the automotive company described it to me: ‘When we need to have manual tests for a particular piece of hardware or software, we have to continuously rotate the engineers who perform the test.’ He had observed that after performing the same test more than two times, his engineers start to do things slightly differently. They assume that a particular piece of the test is less important because it worked yesterday and the software change was really not that big. All of us take shortcuts because we get lazy or bored. If something worked for the last 37 times, why wouldn’t it work for the 38th time? So the system validation manager’s goal is to automate all tests, as much as possible, and he asks his suppliers to do the same.
The problem is that not every test is that easy to automate. In automotive, as in other markets, first hardware is rare and expensive, and testing its functionality typically cannot be fully automated. This was exactly why I was meeting the system validation manager at the automotive company in the first place, which brings us full circle. He was very excited about the fact that virtual prototypes can offer an alternative to the hardware, and as such enable earlier, broader and more automated software testing. Combining additional control and visibility capabilities with better scalability makes virtual prototypes the ideal "vehicle" for further automating tests for the vast amount of software that makes its way into our cars.
Car manufacturers and their supply chain are embracing virtual prototypes as a way to create more, better and above all automated test suites for the software in your future car. As an added bonus, this enables their engineers to focus on innovation instead of getting bored running the same tests over and over again manually. That is what I call progress.
Posted by Tom De Schutter on April 26th, 2015
Almost all electronics devices have some way to connect to other devices. While we don’t really think about it a lot, these interfaces actually have to be quite smart and need to deal with a lot of different device types and/or handle a great deal of data, preferably all while consuming as little power as possible.
As a result, device drivers for this type of interface IP are non-trivial. And because they are a key piece to making an SoC work, these device drivers have to be available early in the SoC design cycle. This is where virtual prototypes come in. They enable device driver development long before hardware is available. Plus, they help accelerate the software development and testing by providing superior debug and tracing, repeatability and scalability.
My colleague Achim Nohl just recorded a webinar on this topic: Accelerate DesignWare IP driver development for ARMv8-based designs with Virtualizer Development Kits.
DesignWare IP VDKs
In the webinar he explains how Virtualizer Development Kits (VDKs), software development kits using a virtual prototype as the target, can be used to accelerate driver development for interface IP, in particular the industry-leading DesignWare Interface IP. The webinar demonstrates how a Synopsys VDK for the ARMv8 Base platform, with models representing specific DesignWare Interface IP such as USB 3.0/3.1, Ethernet GMAC, Ethernet XG-MAC, PCI Express, UFS and Mobile Storage, enables early and efficient software development for these interfaces.
I highly recommend viewing the recorded webinar presented by Achim Nohl via the provided link above to learn how to maximize the benefits from virtual prototyping through VDKs.
In the meantime we can all continue to benefit from the connectivity that our electronics devices offer. I for one am looking forward to a less frustrating USB future with the reversible USB Type-C plug (USB 3.1).
Not all innovation has to do with software :-).
© 2016 Synopsys, Inc. All Rights Reserved.