The Eyes Have It


DDR4 In A Nutshell, Misconceptions, Cool Features, and DDR5

Earlier this week JEDEC released the official DDR4 specification, although the specification has been discussed widely over the last couple of years. I’m often asked to provide high-level information on the latest interface protocols to help engineers decide when to integrate these functions into an SoC. To help address these questions we created a video on YouTube:

and the following Q&A.

  1. In a nutshell, what is DDR4?
    1. Next JEDEC standard for commodity DRAM after DDR3
    2. Designed for PCs and laptops but embedded applications use it because it will be the cheapest off-chip memory for consumer electronics, networking, even some tablets
    3. Much cheaper than mobile SDRAMs such as LPDDR2, LPDDR3, and Wide IO.
  2. Why did the DDR4 standard take so long to produce?
    1. It’s a wide parallel interface operating at very high data rates
    2. DDR4 is planned to operate at up to 3200Mbps; across a 72-bit interface that is 28.8GB/s of peak bandwidth
    3. Required a lot of new features to have good enough signal integrity to operate at 3200Mbps
    4. DDR4 is a big step from DDR3, much bigger than DDR3 over DDR2
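The bandwidth figure above is simple arithmetic; here is a quick sketch (the function name is our own, not part of any spec):

```python
def peak_bandwidth_gb_s(data_rate_mbps: int, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: per-pin data rate times bus width, in bytes."""
    return data_rate_mbps * 1e6 * bus_width_bits / 8 / 1e9

# 3200Mbps per pin on a 72-bit (64-bit + ECC) interface:
print(peak_bandwidth_gb_s(3200, 72))  # 28.8 GB/s
```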
  3. How will the DDR3 to DDR4 transition compare to the DDR2 to DDR3 transition?
    1. It is much more complex.
    2. DDR3 really only added write leveling for fly-by routing and a prefetch of 8 over DDR2’s prefetch of 4.  Of course, it also reduced the DRAM voltage, saving power, but that’s not a functional change.
    3. DDR4 adds a lot more features over DDR3 – POD I/O for the data channel, data bus inversion, trained Vref, bank groups, write CRC, etc.
    4. DDR4 adds many more subtle restrictions with new timing parameters.  Many of these can kill your performance if you do not carefully design around them.
    5. Bank groups are an entirely new feature that embedded applications will need to deal with.  You want to ping-pong between bank groups with DDR4 to maximize bandwidth, whereas with DDR3 you don’t care which banks you ping-pong between.
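As a rough illustration of why a DDR4 controller wants to ping-pong (the bit positions below are invented for the example, not taken from the spec): consecutive bursts to different bank groups only need the short tCCD_S spacing, while back-to-back bursts to the same bank group must respect the longer tCCD_L. Placing the bank-group bits just above the burst offset in the address map makes sequential traffic rotate through the groups automatically:

```python
def bank_group(addr: int, bg_shift: int = 6, bg_width: int = 2) -> int:
    """Extract a hypothetical bank-group field from a physical address.

    With the BG bits sitting just above the 64-byte burst offset,
    sequential traffic naturally alternates bank groups.
    """
    return (addr >> bg_shift) & ((1 << bg_width) - 1)

# Sequential 64-byte bursts rotate through all four bank groups:
print([bank_group(a) for a in range(0, 256, 64)])  # [0, 1, 2, 3]
```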
  4. What can customers expect from DDR4?
    1. Lower power in the DRAM due to the lower voltage supply
    2. Higher data rates
    3. Lower prices once DDR4 becomes a higher volume DRAM versus DDR3
    4. More complex signal integrity requirements
    5. Pretty much a requirement to be in flip-chip packaging
  5. What is your favorite new feature of DDR4?
    1. The data bus inversion feature is cool.  Sometimes the best ideas are re-used from the past and this one was used on the Intel P4 front side bus.  Basically, due to the new VDDQ termination, the DDR4 data bits consume a maximum amount of power if they are driven low and none if they are driven high.  DBI limits how many signals can be driven low at any one time to avoid a worst case power situation.
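A minimal sketch of the idea behind DC-oriented data bus inversion (our own illustration, not the exact JEDEC algorithm): if more than half of a byte’s bits would be driven low, transmit the inverted byte and assert the DBI flag so the receiver can undo the inversion:

```python
def dbi_encode(byte: int):
    """Return (wire_byte, dbi) so at most 4 of the 8 data bits are low."""
    zeros = 8 - bin(byte & 0xFF).count("1")
    if zeros > 4:                       # inverting reduces the number of lows
        return byte ^ 0xFF, True
    return byte, False

def dbi_decode(wire_byte: int, dbi: bool) -> int:
    """Receiver side: re-invert if the DBI flag was asserted."""
    return wire_byte ^ 0xFF if dbi else wire_byte

data, flag = dbi_encode(0x01)           # seven bits low -> invert to 0xFE
assert (data, flag) == (0xFE, True)
assert dbi_decode(data, flag) == 0x01   # round-trips back to the original
```

With POD termination only the low bits burn static power, so capping the number of lows per byte caps the worst-case I/O power.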
  6. There is a lot of talk about DDR4 being a point to point interface, why is that?
    1. This applies to PC platforms using DIMMs where only one DIMM can be used per channel
    2. Simply put, there is no signal integrity headroom left if two DIMMs are put on one channel when operating at up to 3200Mbps.
  7. What do you think are the common misconceptions about DDR4 out there?
    1. Many!
    2. The biggest one is that DDR4 will offer 3200Mbps data rates right away.  As with every DRAM standard, the data rates start out low and build over time as the DRAM vendors transition to finer geometries.
    3. The first JEDEC standard for DDR4 only covers data rates up to 2400Mbps.  Higher data rates will be added as the standard is amended over time.
    4. Another misconception concerns the POD interface: it only applies to the data channel.  The address/command channel is still like DDR3’s, using a mid-point termination and mid-point Vref.
  8. So will there ever be a “DDR5”?
    1. Unlikely.  The era of the wide parallel, single ended data bus has run its course.  You just can’t fight physics and the signal integrity and timing budgets can’t take any more.
    2. New technologies are emerging, like the Hybrid Memory Cube and the high-bandwidth memory under discussion at JEDEC.  Both leverage TSVs to bond a stack of DRAM die to an interface chip built in a high-speed logic process, delivering incredible bandwidth through a very high-speed SERDES-type interface.
    3. Wide IO DRAM is another TSV based technology that is envisioned as a future option for mobile compute platforms such as smartphones.
    4. All of these technologies need to wait for the TSV based DRAM stacking to mature and become economically viable, which is likely to happen before DDR4 has run its course.
