Posted by Michael Posner on October 20, 2014
It’s the age-old question: which came first, the chicken or the egg?
Asking this question about FPGA-based prototyping uncovers some interesting facts about the evolution of the technology. When the technique was first adopted, most customers built their own boards, tailored to the exact needs of the SoC project. The advantage was a board designed specifically for the SoC: it included exactly the real-world interfaces needed, plus an interconnect architecture matched to the SoC architecture. The result of this customization was the best possible FPGA-based prototyping hardware for that SoC project. Of course, because the hardware was customized to one project’s exact needs, it was typically not reusable on subsequent projects.
To address the need for reusability, third-party commercial FPGA-based prototyping boards came along. They offered generic FPGA-based prototyping hardware that could be reused across projects, but at what technical cost? The boards offered many real-world interfaces, but the interconnect (PCB traces) was not customized to the needs of the SoC being modeled. The result was sometimes that the highest performance could not be reached, because you had to force-fit the SoC prototype implementation onto a fixed interconnect topology. Force fitting demands high signal multiplexing ratios, which reduce system performance. I have blogged (How IO Interconnect Flexibility and Signal Mux Ratios Affect System Performance) about this relationship before. The HAPS-50 and HAPS-60 systems provided both PCB traces and flexible connector options, which started to address the need for commercial hardware that could be customized directly to the needs of the SoC. The HAPS-70 systems revolutionized this approach by providing the ability to tailor the system to the exact requirements of the SoC using intelligent high-performance links. I have previously blogged (UFC: Cables Vs. PCB Traces) about the fact that the performance of these flexible links is as good as pure PCB traces, and (The Secret Ninja-Fu for Higher Performance Prototype Operation) about how this flexibility enables higher-performance prototypes.
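To make the mux-ratio/performance relationship concrete, here is a minimal first-order model in Python. The formula is my own simplification, not a Synopsys one: when logical signals share a physical pin via time-division multiplexing at ratio N, each signal is transferred once every N IO cycles, so the achievable system clock scales roughly as the IO transfer rate divided by N. Real links also pay framing and synchronization overhead, so treat this as an optimistic sketch.

```python
# Illustrative first-order model (my assumption, not a Synopsys formula):
# at mux ratio N, each logical signal crosses the FPGA boundary once per
# N IO transfers, so the system clock is bounded by io_clock / N.

def max_system_clock_mhz(io_clock_mhz: float, mux_ratio: int) -> float:
    """Rough upper bound on the prototype system clock for a mux ratio."""
    return io_clock_mhz / mux_ratio

# Hypothetical 1000 MHz pin-multiplexing IO rate, purely for illustration.
for ratio in (2, 4, 8, 16):
    print(ratio, max_system_clock_mhz(1000.0, ratio))
```

Doubling the mux ratio halves the bound, which is why the worst-case ratio anywhere in the partition dominates the whole prototype's performance.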
But the problem is not that simple to solve. SoC prototypes span multiple FPGAs, so large blocks sometimes have to be split across FPGAs, which adds new interconnect requirements. How are these handled, given that you don’t know about them until you come to partition the SoC? The answer is an integrated solution that can quickly generate a partition from an associated interconnect architecture while also providing the flexibility to adapt that architecture. This is what Synopsys calls the abstract partition flow with ProtoCompiler and HAPS-70. In summary, the combination of ProtoCompiler and HAPS-70 lets you quickly create an abstracted representation of the interconnect architecture, generate a partition solution for it, then incrementally customize it based on the needs of the SoC. Let me share an example, drawn from the Imagination PowerVR 6XT on HAPS collaboration case study presented at recent SNUG events.
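The incremental loop described above — partition, read the mux report, widen the worst link in the abstract file, repartition — can be sketched in a few lines of Python. Everything here is illustrative: the signal demands, IO counts, and helper names are mine, not ProtoCompiler’s, and the model ignores any quantization of supported mux ratios.

```python
import math

# Assumed inter-FPGA signal demand produced by a partition (nets per link);
# these numbers are invented for illustration only.
DEMAND = {("A", "D"): 3200, ("A", "B"): 800, ("B", "C"): 600}

def mux_report(interconnect):
    """Mux ratio per link = ceil(signal demand / physical IO available)."""
    return {link: math.ceil(DEMAND[link] / io)
            for link, io in interconnect.items()}

def tune(interconnect, max_ratio, step=100):
    """Widen the worst link in the abstract file until the ratio is OK."""
    while True:
        report = mux_report(interconnect)
        link, ratio = max(report.items(), key=lambda kv: kv[1])
        if ratio <= max_ratio:
            return interconnect, report
        interconnect[link] += step   # the "one line change" in the abstract

io = {("A", "D"): 200, ("A", "B"): 200, ("B", "C"): 200}
final_io, final_report = tune(io, max_ratio=12)
print(final_io[("A", "D")], final_report[("A", "D")])  # -> 300 11
```

The point of the sketch is the shape of the flow, not the numbers: the hardware budget is only fixed after the partitioner has revealed where the pressure is.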
In the ProtoCompiler flow for HAPS-70 you first create an abstract representation of the interconnect between FPGAs. This is very quick to create, as it’s a simple text file with TCL commands defining the connections. The picture below shows an example of such an abstract system interconnect. Remember that there are no fixed traces between FPGAs; the abstract file does not define exact connections, just a representation of the possible IO interconnections. You then run ProtoCompiler.
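The real abstract file is a TCL script with ProtoCompiler commands; since the exact command names are tool-specific, here is only a minimal Python model of the same idea. Each entry says how many physical IO *could* connect an FPGA pair, not which traces are used — and all the values are illustrative (a four-FPGA slice for brevity; the case study used five).

```python
# Abstract interconnect: an IO budget per FPGA pair, not a fixed netlist.
# All counts are invented for illustration, not taken from the case study.
abstract_interconnect = {
    ("A", "B"): 200,
    ("A", "C"): 200,
    ("A", "D"): 200,
    ("B", "C"): 200,
    ("B", "D"): 200,
    ("C", "D"): 200,
}

def available_io(fpga1: str, fpga2: str) -> int:
    """Look up the abstract IO budget between two FPGAs, in either order."""
    return abstract_interconnect.get(
        (fpga1, fpga2), abstract_interconnect.get((fpga2, fpga1), 0))

print(available_io("D", "A"))  # -> 200
```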
In this case study, ProtoCompiler took less than one minute to produce a partition across five Xilinx Virtex-7 FPGAs. Remember that ProtoCompiler is HAPS-aware, so it automatically incorporates the hardware’s capabilities. The picture below shows some of the ProtoCompiler reports at this point: first the expected FPGA utilization, and second, most importantly, the signal multiplexing ratio report.
The mux ratio report has highlighted the worst-case mux ratio, 16, on a path from FPGA A to FPGA D. Remember: the higher the mux ratio, the lower the system performance. Within one minute, ProtoCompiler not only partitioned the design but also identified the main bottleneck based on the abstracted interconnect architecture. The flow is incremental, so at this point you go back to the abstract file.
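A sketch of where such a worst-case number comes from (my simplification of what the report conveys): if the partition needs more nets between two FPGAs than the abstract interconnect offers physical pins, the nets must share pins, and the ratio is the ceiling of that division. The net count below is hypothetical, chosen only so the arithmetic reproduces a ratio of 16 over the 200 IO assumed between FPGA A and FPGA D.

```python
import math

def mux_ratio(signals: int, io: int) -> int:
    """Pins must be shared when signals > io: ratio = ceil(signals / io)."""
    return math.ceil(signals / io)

# Hypothetical demand: ~3200 nets between FPGA A and FPGA D over 200 IO.
print(mux_ratio(3200, 200))  # -> 16
```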
We know the HAPS-70 interconnect is flexible, so in the abstract flow we theorize that we need more physical IO between FPGA A and FPGA D. In our case we raise the IO count from 200 to 300. This is a one-line change in the abstract file, as seen in the picture below.
ProtoCompiler is re-run and again generates a new partition result in a matter of minutes. Looking at the mux report now, you can see that the denser IO between FPGA A and FPGA D has relieved the multiplexing ratio. The new ratio is 12, which means our prototype will run at higher performance. This is the solution to the chicken-and-egg question: you don’t want to fix your hardware interconnect architecture until you have a partition solution; then, based on that partition solution, you want to fine-tune the hardware to best match the partitioned SoC’s requirements. The Synopsys solution of ProtoCompiler and HAPS-70 is the only integrated solution that provides this capability. This rapid, incremental flow results in hardware tailored to the exact SoC prototyping requirements, and of course you maintain reusability, as the hardware can be reconfigured to your next SoC project’s needs.
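As a back-of-the-envelope check on why more IO lowers the ratio: with the same hypothetical ~3200 nets, 300 pins gives a raw requirement of ceil(3200/300) = 11. Pin-multiplexing hardware typically supports only certain discrete ratios, so I round up to an assumed set of supported ratios — both the net count and the supported-ratio set are my guesses, chosen only to show how the reported ratio could land on 12 rather than 11.

```python
import math

# Assumed set of mux ratios the hardware supports (a guess, not a HAPS spec).
SUPPORTED_RATIOS = (1, 2, 4, 8, 12, 16, 24, 32)

def effective_mux_ratio(signals: int, io: int) -> int:
    """Round the raw ceil(signals/io) requirement up to a supported ratio."""
    raw = math.ceil(signals / io)
    return min(r for r in SUPPORTED_RATIOS if r >= raw)

print(effective_mux_ratio(3200, 200))  # -> 16
print(effective_mux_ratio(3200, 300))  # -> 12
```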
Finally… I have been building a new toy: a conveyor belt model with a working conveyor belt, articulation and many lights.
You can see the toy in action here: https://www.youtube.com/watch?v=2aVdOXWo-2Q