Posted by Graham Allan on May 28, 2014
A lot has been written about DDR SDRAMs, both the compute variety (DDR3/4) and the mobile variety (LPDDR3/4), and about what may come after these technologies run their course. One thing is certain: the road ahead will not be an easy one for DRAM. The DDR protocol, based on a wide parallel bus with single-ended signaling, a source-synchronous data strobe, and a non-embedded clock, is not scalable beyond the data rates currently specified for these technologies. After DDR4, the world will need something else, as the DDR interface cannot realistically be expected to run at data rates higher than 3200Mbps in a traditional computer main-memory environment. Unfortunately, that something else will likely be several “somethings” else. Likewise, the smartphone’s insatiable need for higher bandwidth from main-memory DRAM will also force a departure from the wide parallel bus DRAM.
Once DDR4 has run its course in computers (which, in my opinion, is really quite a long way off), the most likely candidate to replace it is a SerDes-based DRAM such as the Hybrid Memory Cube (HMC), certainly at the higher end of computing such as servers. There is a ton of information available on HMC, and the best jumping-off point is the HMC consortium page at www.hybridmemorycube.org. Some computing solutions may also seek out the incredibly wide-bus High Bandwidth Memory (HBM) as specified by JEDEC (bandwidth = number of bits × per-bit speed, so to get higher bandwidth you can go wider or go faster). The complete standard for HBM is available from the JEDEC web site at http://www.jedec.org/standards-documents/docs/jesd235. HBM may also become the eventual successor to the GDDR5 SDRAMs that are used today in high-end graphics applications and gaming systems such as the Sony PlayStation 4 (http://www.chipworks.com/en/technical-competitive-analysis/resources/blog/inside-the-sony-ps4/).
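The "wider or faster" trade-off above is easy to see with a back-of-the-envelope calculation. The sketch below uses illustrative figures: a conventional 64-bit DDR4-3200 channel versus a 1024-bit HBM stack running at a modest 1Gbps per pin (the per-pin rate is an assumption for illustration; consult the JEDEC specs for actual device ratings).

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_mbps_per_pin):
    """Peak bandwidth in GB/s = bus width (bits) x per-pin rate (Mbps) / 8 bits-per-byte / 1000."""
    return bus_width_bits * data_rate_mbps_per_pin / 8 / 1000

# Going faster: a 64-bit DDR4-3200 channel
print(peak_bandwidth_gbs(64, 3200))    # 25.6 GB/s

# Going wider: a 1024-bit HBM stack at only 1 Gbps per pin
print(peak_bandwidth_gbs(1024, 1000))  # 128.0 GB/s
```

Even at less than a third of the per-pin speed, the 16x-wider bus delivers five times the bandwidth, which is the whole premise behind the wide-bus, TSV-stacked approaches discussed here.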
In mobile applications, the ultimate successor to LPDDR4 may very well be the Wide IO3 SDRAM. I say Wide IO3 because Wide IO (the first version) gained little market adoption, and Wide IO2 will likely lose the vast majority of sockets to LPDDR4. By the time Wide IO3 is fighting it out with LPDDR5 (if such a thing is ever discussed), it may just be time for something new. The Wide IO standards are also developed by JEDEC and available from its web site (www.jedec.org).
One common element of HMC, HBM and Wide IO is the Through-Silicon Via (TSV). TSV technology essentially relies on holes being formed through the DRAMs and/or SoC, with hundreds or thousands of short electrical connections running through them to join the stacked die (http://en.wikipedia.org/wiki/Through-silicon_via). What differentiates the HMC product here is that the TSVs are all internal and become the responsibility of the memory vendor: you purchase the HMC DRAM and put it on a PCB just as you do with DDR today, so the infrastructure needed to use it is very simple. When the TSV will be ready for high-volume, economical manufacturing is a subject for another blog post. But it is a hurdle; the question is how high.
Below is a comparison table of the current DDR3/4 and LPDDR3/4 technologies alongside Wide IO, HMC and HBM. You can click on it to get an enlarged version. I have tried to ensure the table is correct, but some of it is open to interpretation. If you feel that the table is incorrect or incomplete, or you have a different opinion, please leave us a comment!
Graham Allan is the Sr. Product Marketing Manager for DDR PHYs at Synopsys. Graham graduated from Carleton University's Electrical Engineering program with a passion for electronics that landed him in the field of DRAM design at Mosaid in Ottawa, Canada. Beginning at the 64Kb capacity, Graham worked on DRAM designs through to the 256Mb generation. Starting in 1992, Graham was a key contributor to the JEDEC standards for SDRAM, DDR SDRAM and DDR3 SDRAM. Graham holds over 20 patents in the field of DRAM and memory design.