Posted by Richard Solomon on July 29, 2013
Wow, I feel all official and everything now that my name is up in genuine electrons here! First I am compelled to correct Scott… I do have an appreciation for red wine. Without it there would be no red wine vinegar – which goes great on fries (chips for my EU friends)!
One question that came up recently in several customer discussions boils down to “What is DMA and what good is it to me?” To answer that, we need to look at a bit of context. DMA stands for “Direct Memory Access,” and originally a “DMA controller” referred to a separate chip which could be used by a CPU to move blocks of data around in memory without using a lot of CPU cycles. In the context of PCI Express controllers, we no longer have (or want!) a separate chip, but plenty of designers still find uses for a separate block of logic they can use to move data without burdening their CPU(s). Sometimes this is a part of the application-specific logic in a chip, and sometimes it’s a more generic piece.
In its most basic form a DMA controller is given a source address, a destination address, and a count and told to go. Sometimes this collection of information is called a “Scatter/Gather Element” or “SGE” in engineering acronymics. Typically the DMA controller will generate an interrupt of some sort once the work is completed and wait to be programmed with more work. Of course modern DMA controllers have more complex control structures. Typically they can process whole lists of SGEs – called “Scatter/Gather Lists” (SGLs) or just “Linked Lists” – before needing additional programming. Not coincidentally, modern operating systems generate SGLs internally when they’re constructing I/O requests to devices like storage and networking controllers. (Since virtual memory systems tend to scatter a program’s data structures across a range of physical addresses, the operating system would really like the controller hardware to handle all that mess rather than asking the host CPU to split up and move around big chunks of program data.)
Because there are often multiple pieces of application logic wanting to move data, it’s fairly common for DMA controllers to provide several “channels” – independent sets of controls and logic for moving data. This way the various applications don’t have to arbitrate among themselves for access to a single set of DMA control registers; the designer can give them each their own set. (Yes, this is just like those dual-headed media players they sell for the back seat of your car. “Stop bothering your sister and watch your own movie!”) While you probably want to make sure that brother and sister in the back seat each get equal access to their media players, life may not be so simple in an SoC. In many applications there are inherent priorities among the application tasks you’re designing, and here you might wish to make sure one doesn’t hog the PCIe bus! At least in the Synopsys implementation of multi-channel DMA, we can do that. Each DMA channel in our system can be programmed with a “weight” which is used when multiple agents are trying to use the bus at the same time. The weight ensures that the programmer/designer controls the bandwidth division among agents – not the “piggishness” of the agents involved.
Hopefully by now you’re starting to think of your own answer to “What good is DMA to me?” but in case you need a little push, consider a few things:
If you answer “yes” to any of the above, you should consider using a DMA controller in your SoC design.
One of the cool toys I got to play with recently is Synopsys’ eDMA demo setup – which is a complete implementation of our DesignWare IP for PCI Express with eDMA and AXI bridge implemented in our HAPS emulation platform. The demo design includes hardware monitors for a variety of performance metrics, and a nifty Windows application with lots of pretty graphs and gauges to try and make DMA sexy – see for yourself if it succeeds by watching yours truly make a demo video!
Feel free to click on the Comments link below if you have any questions on this or any other PCI Express topic you’d like me to cover. Of course, please click on one of the icons to the left to follow this blog! (If you’re like me and prefer getting an e-mail, note that the RSS Subscribe icon will actually take you to a screen with an option to subscribe via e-mail.)
Thanks for reading!
I’ve been involved in the development of PCI chips dating back to the NCR 53C810 and pre-1.0 versions of the PCI spec, so I have definitely lived the evolution of PCI Express and PCI since the very beginning! Over the years I have worked on variations of PCI, eventually moving on to architecting and leading the development of the PCI Express and PCI-X interface cores used in LSI’s line of storage RAID controller chips. For the last ten-plus years I've also had the honor of serving on the PCI-SIG Board of Directors and holding a variety of officer positions. I am still involved in PCI-SIG workgroups and I present at many PCI-SIG events around the world. Back before the dawn of PCI, I received my B.S.E.E. from Rice University, and I hold over two dozen U.S. patents, many of which relate to PCI technology.