Breaking The Three Laws
  • About

    Breaking the Three Laws is dedicated to discussing technically challenging ASIC prototyping problems and sharing solutions.
  • About the Author

    Michael (Mick) Posner joined Synopsys in 1994 and is currently Director of Product Marketing for Synopsys' FPGA-Based Prototyping Solutions. Previously, he held various product marketing, application consultant and technical marketing manager positions at Synopsys. He holds a Bachelor's degree in Electronic and Computer Engineering from the University of Brighton, England.

Archive for the 'Uncategorized' Category

How Proven Solutions Reduce Risk

Posted by Michael Posner on 19th December 2014

A couple of weeks back Synopsys announced that we had shipped, to date, over 5,000 HAPS systems to more than 400 customers:

Synopsys Ships Over 5,000 HAPS Prototyping Systems for Software Development, HW/SW Integration and System Validation

HAPS FPGA-based Prototyping Systems Help More Than 400 Companies Accelerate Time to First Prototype and Avoid Costly Device Re-Spins

This week a prospective user asked me why they should care how many systems Synopsys had shipped. “It’s very simple,” I said, “it reduces your risk.” Now, why this reduces risk takes a little longer to explain, as there are many aspects of risk.

A proven track record, such as the one Synopsys has with HAPS, is very important to risk reduction. When you buy a product from a company that has delivered multiple generations of that product and shipped it in volume, you know the product is field proven. This means your project is less likely to run into surprise issues, reducing your project’s risk. Risk reduction like this is hard to measure but cannot be ignored.

A track record like this also means that the company has invested heavily in the success and support of the product. Synopsys has over 240 engineers involved with the FPGA-based prototyping products, meaning you can be assured of timely updates, great support and new generations of products right when you need them.

HAPS Units Shipped

I recommend you all check out the latest Synopsys Insight publication. There is a great article titled “Prototyping Imagination’s PowerVR Series6XT Dual-Cluster 64-Core GPU” which documents prototyping the Imagination GPU using ProtoCompiler and HAPS.

Synopsys Insight Article. Imagination PowerVR 6XT on HAPS using ProtoCompiler

If you like this or other previous posts, send this URL to your friends and tell them to Subscribe to this Blog.

To SUBSCRIBE use the Subscribe link in the left hand navigation bar.

Another option to subscribe is as follows:

• Go into Outlook

• Right click on “RSS Feeds”

• Click on “Add a new RSS Feed”

• Paste in the following “http://feeds.feedburner.com/synopsysoc/breaking”

• Click on “Accept” or “Yes” or whatever the dialogue box says.

Merry Christmas and a Happy New Year!

Posted in Uncategorized | No Comments »

Reuse ROI Proof Point, USB 3.0 SSIC across MIPI M-PHY with a slice of HAM

Posted by Michael Posner on 22nd May 2013

I have to hand it to Eric Huang and Hezi Saar: they make entertaining videos that turn USB 3.0 and MIPI M-PHY from boring to wow. Check out their latest video, which shows Synopsys’ DesignWare USB 3.0 RTL controller running on the HAPS-51 (-2) systems with Synopsys’ MIPI M-PHY: http://blogs.synopsys.com/tousbornottousb/2013/05/17/industrys-first-demo-of-usb-3-0-ssic-and-mipi-m-phy-passing-usb-compliance-tests/

While not directly shown in the video, this setup is running on the HAPS-50 series systems. The HAPS-50 series was first launched in May 2007, six years ago. One of the key benefits of HAPS is the ability to reconfigure and tailor the system to multiple uses across multiple projects. The return on investment increases each time the systems are reused.

Reuse ROI is one of the factors that should be considered when you decide whether to build your own FPGA board or invest in Synopsys’ HAPS systems, yet it is typically forgotten when calculating the cost of ownership of each solution. An in-house board is usually designed with a specific project in mind, and that specific functionality severely limits its reuse on the next project or by another group in the same company. The generic nature of the HAPS systems means they can be reused again and again across multiple projects and teams.
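The cost-of-ownership argument is simple amortization. As a rough sketch, with purely hypothetical prices (neither figure comes from Synopsys):

```python
def cost_per_project(upfront_cost, num_projects):
    """Amortized hardware cost when a prototyping system is reused
    across several projects instead of being built for just one."""
    return upfront_cost / num_projects

# Hypothetical figures for illustration only:
# a custom in-house board used on a single project,
# versus a reconfigurable system reused on five projects.
inhouse = cost_per_project(150_000, 1)
reusable = cost_per_project(250_000, 5)
print(inhouse, reusable)  # 150000.0 50000.0
```

Even when the reusable system costs more up front, the per-project cost drops with every reuse, which is exactly the ROI effect that single-project cost comparisons miss.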

I’m out on vacation for a week so unless I find a guest blogger I won’t be posting for two weeks. Don’t forget about me!

What topics would you like to see me blog about? Comment below and provide me some ideas.

Posted in Uncategorized | Comments Off

How IO Interconnect Flexibility and Signal Mux Ratios Affect System Performance

Posted by Michael Posner on 15th February 2013

One of the “Breaking The Three Laws” is that your SoC partitioned blocks typically have more signals than there are physical IOs on the FPGA. Technically this is not one of the three laws, but it should have been, and as I own this blog I can make one more up. Welcome to the Breaking The Four Laws blog. During a recent engagement for the HAPS-70, the user wanted Synopsys to create a demonstration proving the HAPS High Speed Time Domain Multiplexing (HSTDM) and Certify tool automation capabilities. The challenge: create a design which passed 1800 signals between four FPGAs using fewer than 200 physical IOs.

As you can see from the block diagram above, the 1800 signals would be passed between FPGAs and compared at each point, proving that the transmission and reception of data was valid. This was an easy design for Synopsys to replicate, as we create similar designs to qualify HSTDM operation. Based on the user’s constraints we achieved the following results:

  • System: HAPS-70 S48 (-1 speed grade)
  • #Signals multiplexed per set: 1824
  • HT3 connectors used for signal set: 4 (200 IOs, 96 differential pairs)
  • HSTDM Ratio: 24
  • HSTDM Frequency: 1.1 Gb/s
  • Resulting System Frequency: 17 MHz

We successfully implemented four fully independent HSTDM channels between four FPGAs, multiplexing 1824 design signals across four HapsTrak 3 connectors. The user required the use of only four HT3 connectors, which provides 96 differential pairs for HSTDM. Transferring 1800+ signals therefore requires an HSTDM ratio of 1824/96 = 19; HSTDM supports multiples of 8, so a ratio of 24 was used. With this HSTDM factor of 24, we achieved a design system clock frequency of 17 MHz. The HSTDM transfer clock rate is 1.1 Gb/s, and this is fully operational on the -1 speed grade HAPS-70 S48 system. Because the HSTDM ratio of 24 exceeds the minimum required, we have unused HSTDM channels in the current implementation which could be used for other purposes if needed.
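The ratio arithmetic above generalizes into a small helper. A sketch, assuming only what the post states: ratios must be multiples of 8, and each HT3 connector contributes 24 differential pairs in this configuration.

```python
import math

def hstdm_ratio(num_signals, diff_pairs, step=8):
    """Smallest supported HSTDM mux ratio (a multiple of `step`)
    that carries `num_signals` over `diff_pairs` physical pairs."""
    required = math.ceil(num_signals / diff_pairs)  # e.g. 1824/96 -> 19
    return math.ceil(required / step) * step        # round up to a multiple of 8

# Four HT3 connectors -> 4 * 24 = 96 differential pairs
print(hstdm_ratio(1824, 96))    # -> 24
# Five HT3 connectors -> 5 * 24 = 120 differential pairs
print(hstdm_ratio(1824, 120))   # -> 16
```

This matches the results reported here: four connectors force a ratio of 24, while adding one more connector per link drops the ratio to 16 and raises the achievable system frequency.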

But wait! The HAPS-70 systems have far more available IO, and the rule of thumb is: the lower the mux ratio, the higher the overall system performance. Each HAPS-70 FPGA module has 23 user HapsTrak 3 connectors available. If we were to increase the number of connectors and cables used by one, for a total of 5 HT3 connectors for this 1800-signal data exchange, we would still have 18 more connectors available.

Our recommendation was that simply increasing the number of HT3 connectors used per link would result in a far higher system frequency. By using one more HT3 connector, for a total of 5 per link, the HSTDM ratio is reduced to 16. So we implemented it, and below are the results.

You can also see that we proved that by using more HT3 links the mux ratio could be reduced even further, down to 8, resulting in a system frequency of over 24 MHz.

This example shows the advantage of the flexible interconnect of the HAPS-70 systems combined with the HAPS HSTDM technology, with the result being one very happy user. For those of you doing FPGA-based prototyping: are you able to get similar results?

Posted in Uncategorized | Comments Off

Globally located R&D enablement, SRAM Daughter Boards & High Speed IO

Posted by Michael Posner on 5th February 2013

This week I’m visiting one of our R&D teams, based in Erfurt, Germany. I took the opportunity to take some photos of the HAPS-70 development systems along with a number of the off-the-shelf daughter boards which are available from Synopsys as part of the solution.

First let me introduce Andreas Jahn, R&D Manager, pictured above with the HAPS-70 S48 (far left), HAPS-70 S24 (right bottom) and the HAPS-70 S12 (right top). Note that the HAPS-70 S12 is connected to the Universal Multi-Resource Bus (UMRBus) by the blue Control and Data Exchange cable. The UMRBus enables the system to be accessed remotely. This is a key capability enabling global accessibility, which is important to Synopsys as we have both local and remote teams working with the development systems. Just like Synopsys, many customers have globally located teams handling all sorts of tasks such as hardware validation and software development. The UMRBus enables the HAPS systems to be accessed globally, meaning that locally based hardware is not required.

Below is a picture of one of the HAPS-70 S48 systems in use for SRAM daughter board testing. The SRAM daughter boards are located on the left side of the system; several are installed, with a number stacked on top of each other.

Tests were being executed against these SRAM boards, again all controlled via the UMRBus. The Identify team were using these SRAM daughter boards to continue development of the HAPS Deep Trace Debug (HDTD) capability. HDTD provides off-chip storage for debug sample data, meaning that you are not reliant on the FPGA devices’ on-chip memory. HDTD enables a large window of debug data to be stored, 100x or more compared to on-chip memory. On the same system you can see that many links have been configured with the high performance coax cables. This system was set up to mimic a customer design and their expected interconnect between the FPGAs, to prove that many thousands of signals could be passed across a very low number of physical interconnect IOs utilizing the HAPS High Speed Time Domain Multiplexing (HSTDM). As HSTDM is automated as part of Certify, it was easy to set up a design and validate the HSTDM operation for this specific example.

Finally, above is a zoomed-in picture of the Multi-Gigabit (MGB) riser and adapters for both PCI Express and SATA. The HAPS-70 enables direct access to the Xilinx devices’ native transceivers. As part of the solution, Synopsys offers a set of off-the-shelf MGB daughter boards which link the transceivers to a standard connector. There is also a nice view of the user panel, which houses the SD card used for standalone boot configuration. Software developers like this mode as they can quickly load new build images from their hardware team: just load the SD card and plug it into the system.

If you spot anything else in the pictures that you want to know about, ask via the comments.

Posted in Uncategorized | Comments Off

HAPS-70 Receives “Best Innovation” Award

Posted by Michael Posner on 31st January 2013

Happy New Year and all that. I hope you all took at least some time over the break to relax and reflect on your achievements in 2012. I would have blogged sooner, but we had a spot of bother with the Synopsys website. Anyway, it’s full steam ahead now.

We got a boost to kick off the year when we received this in the mail:


Electronic Design (http://electronicdesign.com/) selected the HAPS-70 to receive one of their “Best of 2012” awards, recognizing the HAPS-70 Series as an outstanding EDA product. Selection for the award is based on great product innovation over the last 12 months, chosen by editorial experts covering EDA. Here is a quote from the Electronic Design website explaining how they select the Best Innovation products to receive the awards:

“We rely on the expertise of our staff and contributing editors to ferret out the ‘best’ of the various new technologies, products, and standards that we have seen and wrote about throughout this past year,” said Editor-in-Chief Joe Desposito. “These guys are in the trenches every day covering this industry, and they know about all the great new innovations that have been introduced and what really works!”

Here is a picture of the award being held by a very proud Neil Songcuan, Senior Product Marketing Manager for the HAPS systems.

Neil was instrumental in the development of the HAPS-70 driving the major capability definitions. Neil is not a fan of this picture because we sort of threw the award at him and pushed him up against the wall so we could take the picture. Neil said that if he had been given more time he would have brushed his hair ;)

Check out all the “Best Of 2012” winners here: http://electronicdesign.com/article/embedded/emelectronic-designem-announces-2012-electronic-design-award-winners-74727 You will see the Synopsys HAPS-70 systems listed in the EDA section, and here is a direct link to the HAPS-70 report: http://electronicdesign.com/article/eda/eda-tools-faster-easier-2012-74709

Needless to say, we are all very happy about receiving this award, especially our dedicated R&D teams. It recognizes their innovation and hard work. Please help me congratulate the team by posting a comment below.

Synopsys Article: http://www.synopsys.com/Company/PressRoom/Pages/HAPS-70-Wins-2012-best-electronic-design-award.aspx

Posted in Uncategorized | Comments Off

The Smörgåsbord Blog

Posted by Michael Posner on 25th July 2012

I have many things to cover in this week’s blog. First, nothing to do with FPGA-based prototyping: a shout out to my friend Eric Huang. Eric is the Product Manager of the DesignWare IP for USB 3.0 products and the source of my amusement last week. That same week Eric had a biking accident bad enough that he was rushed to hospital in an ambulance. Eric is back at work, but I urge you to wish him all the best by posting comments in his blog, To USB or Not to USB. Wishing you a speedy recovery, Eric.

Last week Ed Sperling summarized my blog as follows:

Synopsys’ Michael Posner pays homage to one of his colleagues and fellow bloggers, Eric Huang—who was alive and well at last sighting—while focusing at least tangentially on FPGA prototyping. We’re not quite sure of the real purpose of this blog, but it’s hard to stop laughing.

Thanks Ed, I think that was a compliment. Of course, did you know that Eric had injured himself? If not, you have some serious psychic skills.

Onto business. This week I was asked if there was a way to quickly validate that a block modeled in an FPGA-based prototype was functionally correct. The issue the user was trying to solve was verifying the DUT’s functionality in its FPGA-based prototyping form BEFORE it was rolled out for SoC validation and SW development tasks. To date, the user would have to re-create the golden simulator regression testbench in a format that they could execute against the FPGA-based prototype. This was a large effort, as golden simulation regression test suites are typically very comprehensive. The user wondered if there was a better way to do this.

The answer is YES: HAPS Co-Simulation.

The HAPS FPGA-based prototyping systems offer a co-simulation capability which enables a simulator testbench to drive the DUT on the HAPS system over a cycle-accurate interface. This flow is automated, so it’s relatively easy to take a block from RTL to HAPS and then verify its function against its original golden simulation regression testbench. This is the ultimate re-use: the user does not have to create a new FPGA-specific testbench to test the DUT model, just re-use what the RTL verification team has already created. Not only does this reduce the effort of FPGA-based prototype bring-up, but the user can further test the DUT in this high performance environment.

Typical design flow for HAPS Co-Simulation

Great, right? Well, the user was still worried; they wondered how stable this HAPS Co-Simulation flow was. The answer: it’s very stable. We introduced HAPS co-simulation capabilities as part of the HAPS-60 launch back in 2010, and the underlying capability and UMRBus technology were part of the CHIPit products, so they had been available for many years before that.

Do you have questions on FPGA-based prototyping? Post a comment and I’ll do my best to answer.

I have some much needed vacation time coming up so look out for future postings by some guest bloggers. Please be nice to them.

Posted in Uncategorized | Comments Off

Keeping Your RTL Clean: Part 2

Posted by Doug Amos on 9th March 2012

Hi Prototypers,

Following on from our intro to wrappers, and the excellent discussion both here and on LinkedIn groups, I’d like to go into more detail on how we can use wrappers for memories.

As a recap: when mapping SoC RAMs into an FPGA it is necessary to adapt the RTL so that the FPGA tool flow can map them to the appropriate resources. We can do this without changing the existing RTL; instead, we add extra RTL files to act as adaptors, which we call wrappers.

During prototyping, the wrapper contains an item to be implemented in the FPGA, but which has a top-level boundary that maps to the component/module instantiation in the SoC RTL. Experienced prototypers will be very familiar with wrappers and may indeed have built up their own libraries of wrappers for use in various situations. We always like to hear about your experience and methods, so feel free to comment below.

Basic concept of a wrapper, showing two memory instances (From FPMM book, page 197)

The diagram from the FPMM (above) shows the basic arrangement, in this case two wrappers used in the same level of hierarchy. Notice that the SoC physical memory is not instantiated directly into the RTL, thus keeping that RTL clean (i.e. not target-specific). The physical memory for the target, usually a generated memory cell for the SoC, is written into the lowest level of hierarchy only. The logical memory instantiation should be chosen to be as generic and reusable as possible.

A naming convention for the logical memory and its ports could be adopted within a project or even across an entire company. Once such an in-house standard is in place, it is easier for prototypers to have ready-made FPGA or board-level equivalents to fill the wrappers in place of the physical memory.

If a wrapper is not used and the target-specific physical memory is placed directly into the SoC’s RTL, it will appear as a black box in the FPGA flow. The prototypers will still need to fill that black box; only now the top-level boundary of the black box has the names and ports of whichever target-specific physical memory was instantiated, BIST ports and all. In those cases it is more difficult to understand exactly what memory function is required, and less likely that the prototype team will already have an FPGA-ready version, especially if your company uses memory cells from different library vendors.

As pointed out in your comments to previous blog postings, a sophisticated approach to swapping the contents of wrappers is to use an IP-XACT description for the wrapper, so we can automate the use of different fillers. A simpler approach is also available in VHDL by the use of Configuration statements, which control the choice between different Architectures for the same Entity. Whichever method is used to control the use of an FPGA or SoC filler for the wrapper, the FPGA version still needs to be created.

I find that a good way to start creating a wrapper is to copy the component/module declaration from the SoC RTL and paste it into a new RTL file. This ensures that the boundary of the lower level matches the higher-level instantiation.

Naming can be adopted as a standard. For example, in the FPMM workshop labs we used parameterized memory wrappers with names such as M64X14H1M8S00_wrapper: the M means memory, 64 is the depth in words, 14 is the width of the word, and so forth. Here we are following the naming scheme used for Synopsys’s own memories (previously Virage Logic), but you can adopt any suitable naming style of your own.

We then have to find the equivalent memory for use in the FPGA. The best way to do that (for internal memories) is to let the FPGA synthesis tool infer the memory from an RTL description. An excellent script is available, created by my friend Peter Calabrese of Synopsys Boston, which parses the wrapper names and creates RTL from which the FPGA tool can infer the necessary RAMs. If you want to adopt an in-house wrapper scheme, then this might be an easy place to start. Please comment below or email me at fpmm@synopsys.com to let me know what you think.
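Peter’s script itself is not shown here, but the naming scheme above is enough to sketch the idea in Python. The single-port synchronous-RAM template and its port names below are my own assumptions for illustration, not the actual script’s output; only the M&lt;depth&gt;X&lt;width&gt; prefix comes from the post.

```python
import re

def parse_wrapper_name(name):
    """Extract word depth and width from a memory wrapper name such as
    'M64X14H1M8S00_wrapper' (M<depth>X<width>...; the remaining fields
    vary by library and are ignored here)."""
    m = re.match(r"M(\d+)X(\d+)", name)
    if not m:
        raise ValueError(f"not a recognized memory wrapper name: {name}")
    return int(m.group(1)), int(m.group(2))

def inferred_ram_rtl(name):
    """Emit simple synchronous-RAM Verilog from which an FPGA synthesis
    tool can infer a block RAM (hypothetical port naming)."""
    depth, width = parse_wrapper_name(name)
    addr_bits = max(1, (depth - 1).bit_length())
    return f"""module {name} (
  input  wire clk, we,
  input  wire [{addr_bits - 1}:0] addr,
  input  wire [{width - 1}:0] din,
  output reg  [{width - 1}:0] dout);
  reg [{width - 1}:0] mem [0:{depth - 1}];
  always @(posedge clk) begin
    if (we) mem[addr] <= din;
    dout <= mem[addr];   // synchronous read, BRAM-friendly
  end
endmodule"""

print(parse_wrapper_name("M64X14H1M8S00_wrapper"))  # (64, 14)
```

Once an in-house naming standard is in place, a generator like this turns every wrapper name in the file list into FPGA-ready RTL with no hand editing.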


Posted in FPMM Methods, Uncategorized | 2 Comments »

“Verification or Validation? What do you think?”

Posted by Doug Amos on 6th February 2012

Would you kindly help me clear something up?

A colleague and I were having a lively argument about the difference between Validation and Verification and it got me thinking. What’s the difference anyway? What do we mean by these terms and do you and I mean the same thing?

Winning the argument seemed pretty important to me at the time, so I did some deep and extensive research (oh, alright, I Googled it) and found the Wikipedia definitions, as follows . . .

  • “Verification is a quality control process that is used to evaluate whether a product, service, or system complies with regulations, specifications, or conditions imposed at the start of a development phase.”
  • “Validation is a quality assurance process of establishing evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements.”

These definitions appear to say the same thing, but if you dig into the semantics there IS a difference. A friend at ARM put it very nicely in a presentation to a DVClub conference in the UK late last year. He said the difference was highlighted in two questions: “Are we building the product right?” (verification) and “Are we building the right product?” (validation).

I’m pretty sure I’ve heard this before, and it appears in many places on the web, so my ARM friend may not be totally original here, but they are good questions nonetheless (if you know where they originate, then please let me know).

What we infer from the questions is the following.

Verification is a matching of results to specification. It is a methodical process of proving something does what you asked for, and nothing else. The specification is taken as golden. The aim: a proof that the design meets the specification.

Validation, on the other hand, is the exercising of the design to check that it is fit for purpose. It is a subjective process of using the design, perhaps in situ, definitely with the embedded software, to see if it does what you need. The specification is NOT golden and in effect is under test along with the design. The aim: a proof that the design AND the specification meet purpose.

So where does FPGA-based Prototyping come in? Well, here’s what you told us. In answer to a question in the FPMM download survey, nearly 2000 users kindly shared the following data about their reasons to use FPGA-based Prototyping. . .

Wait a minute; this says that people use FPGA-based Prototyping to “Verify” the RTL, or to “verify” the hardware and software co-design. So, is FPGA-based Prototyping a verification technology? Clearly, looking at the most popular answer, prototypes do expose RTL bugs that sneak through the normal verification process. How is that possible?

A Safety Net for Verification?

Verification is impacted by the classic speed-accuracy trade-off. We can have high accuracy in an RTL simulator and even go to the gate level, but speed is so far below real-time that some tests simply take too long, even on an accelerator or an emulator. On the other hand, high-level modelling in SystemC and other virtual prototypes gives us much better performance, but we no longer have cycle-accurate results. Only FPGA-based Prototyping offers the unique combination of high speed AND cycle-accuracy, allowing longer and more complex tests to be run on the prototype than in a simulator, catching otherwise unseen RTL bugs.

So, is FPGA-based Prototyping a verification technology? No; it lacks the necessary observability, controllability and determinism required for the objective testing of RTL. However, we could quite rightly consider it a “safety net” for verification.

Objective vs. Subjective

Because Verification is an objective comparison of results against the specification, there is massive scope for automation, for example as in the UVM and VMM methodologies. Validation, however, is more subjective and so less easy to automate, relying more on the expertise of the prototypers themselves. Prototypers need to see the system running, in the real environment actually performing the task for which the specification was created. We may also choose to exercise the design outside of the specification’s envelope in order to explore further optimisation, or to improve upon the specification. There is an emphasis on debug skills and in-lab investigation.

FPGA-based Prototyping is a Validation Technology

Looking back at your survey responses, we see that “System Validation”, “System Integration” and “Software Development” are also popular uses for FPGA-based Prototyping. These use modes are definitely in the validation camp. Here we are using the FPGA-based Prototype as a substitute for the first-silicon and in effect, we are running early acceptance tests on the design and its software. Once again, we are taking advantage of its unique combination of high-speed and accuracy.

In many cases the FPGA-based Prototype is used as a real-world platform upon which to exercise the software, especially software at lower (physical) levels of the stack. Of course, we may find some RTL bugs when we run the real software at speed (kudos indeed to the verification team if we don’t!) and this is an excellent by-product, however, that was not the prototype’s original purpose. Simulation and Emulation are better for verification while FPGA-based Prototyping is better for validation. Virtual prototyping also falls into the validation camp, with emphasis on the higher levels of the software stack and pre-RTL stages of the design.

If finding RTL bugs is your purpose, then simulation and a good verification methodology will be your best bet. If exercising software and validating the system is your purpose, then prototyping is a far better choice than any verification technology.

In the end, most SoC teams will use both and value the contribution of each equally.



Posted in Uncategorized | Comments Off

Keeping Your RTL Clean: Part 1

Posted by Doug Amos on 18th January 2012

It’s pretty obvious that if we can avoid FPGA-hostility in our designs, the prototype will be ready sooner. However, we can’t expect RTL designers to compromise the final chip design just to help us prototypers. That’s why we advocate Design-for-Prototyping as the way to make the design more robust and portable, so that everybody wins.

Most, if not all, chip design teams work to a given RTL style guide, but we wonder how many of those style guides include steps to avoid FPGA hostility. We don’t need the design to include specific FPGA elements or features, but it should at least allow us to make use of them in our prototyping efforts without too much extra effort. One sure way to do this is to use wrappers.

What is a wrapper?

The term “wrapper” may mean different things to different people, but we think of a wrapper as an RTL construct that keeps technology-specific design elements from “contaminating” the surrounding RTL. For example, as seen in figure 1, if the chip design calls for the instantiation of a specific leaf cell from the silicon library, then we do NOT write this directly into the RTL. Instead, we declare a new module in the RTL where we need to make the instantiation, and then write the leaf-cell inside that new module (preferably in a separate file). The new module is called a wrapper.

Figure 1: Wrapper contents changed to suit RTL target

This simple step of creating a module sub-hierarchy for the instantiation means that we can change or remove its contents (in this case the particular leaf cell) without having to change the surrounding RTL. It also means that the surrounding RTL is kept technology-independent, i.e. is not “contaminated” by the leaf-cell instantiation. After all, it only takes one leaf cell to make the whole of the RTL at that level technology-specific.

Consider a simple case where the chip design requires a voltage level-shifter on a signal. The designer should NOT instantiate the level-shifter cell into the RTL directly, but instead place it in a wrapper and use that wrapper in the RTL. Later, the content of the wrapper can be replaced, for example with a new level-shifter for a different silicon technology, i.e. the RTL is already more portable. More important from a prototyper’s point of view: we don’t need level shifters during prototyping, and it is trivial to replace the cell with a pass-through wire. It is also possible to automate this replacement; some sophisticated users even have a file-list generation tool which references different files, with their different definitions of the wrapper module, depending on the target technology.
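Such a file-list generation tool can be tiny. A sketch in Python, where the file paths and the wrapper module name are hypothetical examples rather than any real project’s layout:

```python
# Map each wrapper module to its per-target definition file.
# (Paths and module names are illustrative only.)
WRAPPER_FILES = {
    "level_shifter_wrap": {
        "asic": "wrappers/asic/level_shifter_wrap.v",  # instantiates the library cell
        "fpga": "wrappers/fpga/level_shifter_wrap.v",  # simple pass-through wire
    },
}

def file_list(target, common_files):
    """Build the synthesis file list for a target technology: the shared,
    technology-independent RTL plus the right wrapper definition files."""
    return common_files + [files[target] for files in WRAPPER_FILES.values()]

print(file_list("fpga", ["rtl/top.v"]))
# ['rtl/top.v', 'wrappers/fpga/level_shifter_wrap.v']
```

The shared RTL never changes between targets; only the wrapper entries in the file list are swapped, which is exactly the portability benefit the wrapper buys you.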

Are wrappers essential?

Figure 2: Certify creates sub RTL for library leaf cell

If you are not lucky enough to receive such nice portable RTL, and instead the example leaf cell is instantiated directly in the RTL, then the workaround is to create a new definition for the cell for use during prototyping. You will need to do something anyway, because the FPGA tool chain will otherwise treat the leaf cell as a black box. The good news is that if you have the source .lib file for the silicon technology library which includes the leaf cell definition, then the Certify tool from Synopsys will automatically reference the .lib file. Certify will then create functionally equivalent RTL for the leaf cell and use it in the rest of the FPGA tool flow, as shown in figure 2.

So much for leaf cells; the real power of wrappers comes when we can agree on an internal style and naming policy and start to automate the content creation for prototyping based on the boundary names, etc. In the next blog post we will explore that and go on to look at the much more interesting case of using wrappers for instantiating memories.

We hope it helps!

Doug and Mick

Posted in Uncategorized | 2 Comments »

Partitioning Poser #1

Posted by Doug Amos on 10th November 2011

There are those who say that one should partition so that only sequential elements are allowed to appear at the FPGA edges, thus simplifying the timing and constraints at the FPGA pins.

This is an excellent goal, but one that we cannot always reach.

We often need to partition combinatorial paths across FPGA boundaries, and then we need to ensure good timing constraints for the synthesis and, especially, the P&R of each end of the path. Don’t forget: when implemented in the individual source and destination FPGAs, each part of the path is considered in isolation. The synthesis and P&R tools have no knowledge of the other end of the path and will assume by default that the signals on the stub they can see have an entire clock period to propagate (minus set-up or hold for the FF). In reality this is never the case, so we need to apply good timing constraints to these stubs. This is achieved by time budgeting the path to allocate an appropriate portion of the full path propagation delay to each stub, which is automated in tools like the Certify tool that I know and love. When the path runs across three or more FPGAs, with the middle FPGAs possibly acting only as feedthroughs, the task becomes really tricky.
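The budgeting step can be illustrated with a rough sketch. Certify automates this; the proportional-allocation scheme, delay values and board-hop margin below are hypothetical, shown only to make the idea concrete:

```python
def budget_stubs(clock_period_ns, stub_delays_ns, io_margin_ns=0.0):
    """Split one clock period across the segments ('stubs') of a
    combinatorial path that crosses FPGA boundaries, in proportion to
    each stub's estimated internal delay, after reserving a margin for
    the board-level hop between FPGAs."""
    usable = clock_period_ns - io_margin_ns
    total = sum(stub_delays_ns)
    return [usable * d / total for d in stub_delays_ns]

# A path cut across two FPGAs: source stub ~3 ns of logic, destination
# stub ~1 ns, with 2 ns reserved for the inter-FPGA hop (all figures
# hypothetical, for a 100 MHz / 10 ns clock).
print(budget_stubs(10.0, [3.0, 1.0], io_margin_ns=2.0))  # [6.0, 2.0]
```

Each resulting figure becomes the timing constraint applied to that stub in its own FPGA, so neither synthesis run assumes it owns the whole clock period.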

We can talk about time budgeting of combinatorial paths in more depth in a future blog, but in the meantime I have a poser for you about the best way to partition an example of an inter-FPGA combinatorial path.

The simplified example in the diagram below, with a mux fed by multiple sources in two source FPGAs and driving a separate destination FPGA, is not as uncommon as we’d like to hope. After all, many bus standards used in SoCs are actually complex muxes, and bus sources and destinations are often partitioned into different FPGAs.

Here, then, is a good example of a partition cutting through a multiplexer. We can see that the sources for the mux are in two separate FPGAs and the destination is in a third. The next task is to assign the multiplexer. The question for you is: where is the best place to put the multiplexer, and how and why?

Please use the comment box below to give us your replies.

I’ll explain my favourite answer in our next blog in any case.

All the best,


Posted in Uncategorized | 6 Comments »