Committed to Memory


Breaking down another memory wall

People are sometimes surprised when I tell them that more than $40 billion of DRAM chips are sold every year – but have you ever wondered where they all go?

The answer, once the obvious needs of personal, mobile and consumer devices have been met, is that many chips go into racks of servers and storage machines in datacenters and cloud computing facilities.

Servers and storage machines in particular need large amounts of DRAM – the more DRAM they have, the larger the datasets they can work on, and the faster they can produce meaningful results. In the new areas of Big Data and in-memory computing, more memory is allowing these machines to do computation on problems that were too big or too slow to solve in the past, giving new insight into problems in financial analysis, security, situational awareness, retail, and computational biology.

As with many things though, the laws of physics soon get involved. Adding more memory to the DRAM bus is harder than you might think. Two main culprits can limit the upper frequency of the DRAM bus:
– Additional capacitance, noise and reflections created by additional DIMMs and DIMM sockets on the bus
– The ability of the memory controller to keep track of all the open pages in all the ranks of DRAM on the bus
Further, Reliability, Availability, and Serviceability features – “RAS features,” as they are known – are a strong requirement for most of these systems.

Today we announced some new solutions to these issues in this press release and I wanted to give a little bit more information on these solutions to our blog readers.

We are providing two solutions to the loading issue of capacitance, noise and reflections on the bus: IO equalization on Synopsys’s latest DDR4 solutions, along with advanced interface training. The IO equalization opens larger data eyes, particularly on high-speed, heavily loaded busses and in difficult signal integrity environments. Our training algorithms include techniques to precisely center the data capture point on the data eye for the highest margin.
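To give a feel for what “centering the data capture on the data eye” means, here is a minimal sketch. Everything in it is an assumption for illustration – the function names, the delay-step granularity, and the pass/fail callback are invented, and the actual Synopsys training algorithms are not public. The idea shown is the generic one: sweep the capture delay, find the widest contiguous window of passing settings (the open eye), and park the capture point in its middle.

```python
# Hypothetical sketch of data-eye centering during interface training.
# All names here (sample_at_delay, delay_steps) are illustrative assumptions,
# not the actual Synopsys training implementation.

def find_eye_center(sample_at_delay, delay_steps=64):
    """Sweep the capture delay, record which settings sample a training
    pattern correctly, and return the center of the widest passing window."""
    passing = [d for d in range(delay_steps) if sample_at_delay(d)]
    if not passing:
        raise RuntimeError("no passing delay settings found")
    # Find the longest contiguous run of passing delays (the open eye).
    best_start = best_len = run_start = run_len = 0
    prev = None
    for d in passing:
        if prev is not None and d == prev + 1:
            run_len += 1
        else:
            run_start, run_len = d, 1
        if run_len > best_len:
            best_start, best_len = run_start, run_len
        prev = d
    return best_start + best_len // 2  # capture point with maximum margin

# Toy example: pretend the eye is open for delay settings 20..44.
center = find_eye_center(lambda d: 20 <= d <= 44)
```

Centering on the widest run rather than on any passing setting is what buys margin: on a heavily loaded bus the eye shrinks, and a capture point near either edge fails first as voltage and temperature drift.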

As the number of ranks of DRAM on the bus increases – whether through Registered DIMMs (RDIMMs), Load Reduced DIMMs (LRDIMMs), or 3D TSV-based stacking of DDR4 devices (DDR4-3DS) – closing timing in the DDR controller becomes a challenge. Early DDR4 controllers could support 4 ranks of memory at high speed; however, an LRDIMM solution of 2 slots of 8-rank DDR4 LRDIMM devices has 16 ranks and 256 banks, any of which could have an open page. The DDR controller needs to respect the memory core timing parameters associated with every bank. Some systems may try to operate in closed-page mode to meet these timing requirements, but this can come with a power and performance penalty for some workloads. Our recent innovation in bank state management allows our memory controller to close timing at the highest speeds of DDR4 in most process technologies while still allowing open-page mode access to up to 16 ranks of memory for high performance and lower power.
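The arithmetic behind those 256 banks, and the state the controller has to keep per bank, can be sketched in a few lines. This is a simplified model under stated assumptions – the class and method names are invented for illustration, and a real controller tracks far more (timers for tRAS, tRP, tRCD, per-bank-group constraints) than the open row shown here. A DDR4 x4/x8 rank has 4 bank groups of 4 banks, i.e. 16 banks per rank, so 16 ranks means 16 × 16 = 256 banks of open-page state.

```python
# Illustrative model of per-bank open-page tracking. Names are assumptions;
# this is not the Synopsys bank state management implementation.

BANKS_PER_RANK = 16  # DDR4 x4/x8: 4 bank groups x 4 banks

class OpenPageTracker:
    def __init__(self, ranks):
        # One entry per (rank, bank): the currently open row, or None.
        self.open_row = {(r, b): None
                         for r in range(ranks)
                         for b in range(BANKS_PER_RANK)}

    def access(self, rank, bank, row):
        """Classify an access: 'hit' (row already open, fastest),
        'empty' (bank closed, needs activate), or 'miss' (wrong row
        open, needs precharge + activate, slowest)."""
        current = self.open_row[(rank, bank)]
        if current == row:
            return "hit"
        self.open_row[(rank, bank)] = row
        return "empty" if current is None else "miss"

tracker = OpenPageTracker(ranks=16)
assert len(tracker.open_row) == 256  # the 256 banks cited above
```

Keeping this much state in fast logic is exactly the timing-closure problem: every new command must be checked against all 256 banks’ states within one controller clock. Closed-page mode sidesteps the bookkeeping by always precharging, but gives up the cheap “hit” case that open-page workloads benefit from.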

Finally, on RAS features, one of the new capabilities we’ve announced is advanced symbol-based error correction (ECC), which allows the DRAM controller to reconstruct all of the data in a DRAM device should that device fail to operate. Our technique uses standard 72-bit wide RDIMMs or LRDIMMs to achieve this function, making it easy to integrate into almost any system. The advanced ECC complements our already strong RAS features, like the Command/Address Parity function with Retry, all of which work together to improve the uptime of your datacenter, cloud, and enterprise solutions.
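To illustrate the reconstruction principle – and only the principle – here is a deliberately simplified sketch. The announced controller uses a stronger symbol-based code whose details are not in the post; the version below reduces the idea to XOR parity across devices, where each x4 device on a 72-bit DIMM contributes one 4-bit symbol and a known-failed device’s symbol can be rebuilt from the survivors. All names are invented for illustration.

```python
# Simplified stand-in for symbol-based device reconstruction: XOR parity
# across per-device symbols. The real ECC is a stronger code; this only
# demonstrates rebuilding a known-failed device's data from the others.

def make_parity(symbols):
    """Compute the parity symbol across all data devices' symbols."""
    p = 0
    for s in symbols:
        p ^= s
    return p

def reconstruct(symbols, parity, failed_index):
    """Rebuild the symbol of a known-failed device by XOR-ing the
    parity with every surviving device's symbol."""
    p = parity
    for i, s in enumerate(symbols):
        if i != failed_index:
            p ^= s
    return p

# 17 data symbols (4 bits each, as from x4 devices) plus one parity device.
data = [0x3, 0xA, 0x7, 0x1, 0xF, 0x0, 0x5, 0x9,
        0xC, 0x2, 0xB, 0x4, 0x8, 0x6, 0xD, 0xE, 0x1]
parity = make_parity(data)
assert reconstruct(data, parity, failed_index=5) == data[5]
```

A real symbol-based code also has to locate the failed symbol on its own rather than being told which device died, which is where the extra strength of the production scheme comes in.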

Want to know more? Please click here to visit my recent webinar on DDR4 for Enterprise Applications!

