Committed to Memory
  • About

    This memorable blog is about DRAM in all its forms, especially the latest standards: DDR3, DDR4, LPDDR3 and LPDDR4. Nothing is off limits--the memory market, industry trends, technical advances, system-level issues, signal integrity, emerging standards, design IP, solutions to common problems, and other stories from the always entertaining memory industry.
  • The Authors

    Graham Allan

    Graham Allan is the Sr. Product Marketing Manager for DDR PHYs at Synopsys. Graham graduated from Carleton University's Electrical Engineering program with a passion for electronics that landed him in the field of DRAM design at Mosaid in Ottawa, Canada. Beginning at the 64Kb capacity, Graham worked on DRAM designs through to the 256Mb generation. Starting in 1992, Graham was a key contributor to the JEDEC standards for SDRAM, DDR SDRAM and DDR3 SDRAM. Graham holds over 20 patents in the field of DRAM and memory design.

    Marc Greenberg

    Marc Greenberg is the Director of Product Marketing for DDR Controller IP at Synopsys. Marc has 10 years of experience working with DDR Design IP and has held Technical and Product Marketing positions at Denali and Cadence. Marc has a further 10 years of experience at Motorola in IP creation, IP management, and SoC methodology roles in Europe and the USA. Marc holds a five-year Master's degree in Electronics from the University of Edinburgh in Scotland.

The importance of ADAS

Posted by Marc Greenberg on June 11th, 2016

Dear readers, I’ll warn you in advance that this blog post is sad and somewhat personal. If that’s not what you want to read right now, check out one of my other blog posts.

By now you have probably seen the TV commercials showing the latest model vehicles that can automatically initiate braking when they detect a collision is imminent. Technologies like this are known as “Advanced Driver Assistance Systems” (ADAS).

Over the last while, I’ve been meeting with automotive silicon vendors to discuss automotive safety requirements for DDR – especially LPDDR4 – and compliance with standards like ISO 26262 (with its ASIL classifications) and AEC-Q100. Meeting these standards is difficult and time-consuming, and although our processes at Synopsys are already largely in line with them, there are many documentation and record-keeping tasks, plus some extra engineering work and new features we have implemented, to be compliant with the automotive standards.

Automotive compliance is quite a task, and it’s a popular topic. Two of the largest crowds I saw at the Design Automation Conference (DAC) this week were at the Synopsys booth listening to presentations on Synopsys’s Automotive IP portfolio and our new transient fault validation tools that are now part of Synopsys through our WinterLogic acquisition.

The goal is to allow vehicles to implement ADAS in the very near term and eventually achieve the goal of self-driving cars. As these ADAS chips will effectively be in control of the car, they must have a high level of fault detection in case the ADAS ever needs to hand control of the vehicle back to the human driver because of a silicon or sensor failure.

Given that neither my wife nor I have ever been in a forward collision, I had been thinking of ADAS technology in general as “interesting but not necessarily mandatory”. As of last night, my opinion of ADAS has changed.

Last night I learned that Alexei Bauereis was struck by an SUV while walking his bicycle across a crosswalk. Alexei died later in the hospital. He was 14 years old.

This hits particularly close to home for me for a number of reasons: I used to work closely with Alexei’s father, Eric Bauereis, when we were both at Denali. I had met Alexei and watched him play hockey in an exhibition match with the University of Texas hockey team. Alexei was just one year older than my son. The accident happened at an intersection I travel through regularly, and it’s just a block away from my son’s school. On another night, at another intersection, that could be my own son. I can’t imagine the pain the Bauereis family must be going through.

All I know of the driver of the SUV is from media reports: the driver stopped to render aid and has not yet been charged by the police. And they probably feel terrible right now.

So back to the topic of ADAS. I’ve read studies showing that the majority of drivers feel that everyone else on the road is a bad driver. So ADAS is something you want everyone else on the road to have, so that they won’t run into you or your family.

We, the engineering community, have the technology to make ADAS pervasive in new vehicles – let’s do it fast, let’s do it right, let’s make it at a price point that’s accessible to every purchaser of a new car, and let’s make it reliable enough to last the lifetime of the vehicle. Let’s make the type of accident that killed Alexei Bauereis a thing of the past.

Alexei, may you forever dance with the stars.

Marc

Posted in Uncategorized | Comments Off

Breaking down another memory wall

Posted by Marc Greenberg on June 8th, 2016

People are sometimes surprised when I tell them that more than $40 billion of DRAM chips are sold every year – but have you ever wondered where they all go?

The answer, once the obvious needs of personal, mobile and consumer devices have been met, is that many chips go into racks of servers and storage machines in datacenters and cloud computing facilities.

Servers and storage machines in particular need large amounts of DRAM – the more DRAM they have, the larger the datasets they can work on, and the faster they can produce meaningful results. In the new areas of Big Data and in-memory computing, more memory is allowing these machines to do computation on problems that were too big or too slow to solve in the past, giving new insight into problems in financial analysis, security, situational awareness, retail, and computational biology.

As with many things though, the laws of physics soon get involved. Adding more memory to the DRAM bus is harder than you might think. There are two main culprits that can limit the maximum frequency of the DRAM bus:
- Additional capacitance, noise and reflections created by additional DIMMs and DIMM sockets on the bus
- The ability of the memory controller to keep track of all the open pages in all the ranks of DRAM on the bus
Further, Reliability, Availability, and Serviceability features – “RAS features” as they are known – are a strong requirement for most of these systems.

Today we announced some new solutions to these issues in this press release and I wanted to give a little bit more information on these solutions to our blog readers.

We are providing two solutions to the loading issues of capacitance, noise and reflections on the bus: IO equalization on Synopsys’s latest DDR4 solutions, and advanced interface training. IO equalization allows for larger data eyes, particularly on high-speed, heavily loaded buses and in difficult signal integrity environments. Our training algorithms include techniques to precisely center the data capture point within the data eye for the highest margin.
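
Synopsys doesn’t publish the internals of its training algorithms, but the core idea of centering the capture point in the data eye is easy to sketch: sweep a delay across the unit interval, note where data is captured correctly, and park the capture point in the middle of the widest passing window. The sketch below is a simplified, one-dimensional illustration with made-up pass/fail data, not the product implementation.

```python
# Simplified sketch of data-eye centering: sweep a capture delay across the
# unit interval, record pass/fail at each tap, and park the capture point in
# the middle of the widest passing window. Real PHY training is per-bit,
# two-dimensional (delay and Vref), and uses hardware pattern generators;
# this is only the core idea.

def center_of_widest_eye(pass_fail):
    """pass_fail: list of booleans, one per delay tap (True = data captured correctly)."""
    best_start, best_len = None, 0
    start = None
    for i, ok in enumerate(pass_fail + [False]):    # sentinel to flush the last run
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            run = i - start
            if run > best_len:
                best_start, best_len = start, run
            start = None
    if best_len == 0:
        raise RuntimeError("no passing region found: training failed")
    return best_start + best_len // 2, best_len     # chosen tap, eye width in taps

# Hypothetical sweep result for one DQ bit: eye open between taps 11 and 24.
sweep = [False] * 11 + [True] * 14 + [False] * 7
tap, width = center_of_widest_eye(sweep)
print(f"centering capture at tap {tap}, eye width {width} taps")
```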

As the number of ranks of DRAM on the bus increases – whether through Registered DIMMs (RDIMM), Load Reduced DIMMs (LRDIMM), or 3D TSV-based stacking of DDR4 devices (DDR4-3DS) – it becomes challenging to close timing in the DDR controller. Early DDR4 controllers could support 4 ranks of memory at high speed; however, an LRDIMM configuration with 2 slots of 8-rank DDR4 LRDIMM devices has 16 ranks and 256 banks, any of which could have an open page. The DDR controller needs to respect the memory core timing parameters associated with every bank. Some systems may try to operate in closed-page mode to meet these timing requirements, but this can come with a power and performance penalty for some workloads. Our recent innovation in bank state management allows our memory controller to close timing at the highest speeds of DDR4 in most process technologies while still allowing open page mode access to up to 16 ranks of memory for high performance and lower power.
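
To give a feel for the bookkeeping involved, here is a minimal sketch of the open-page state a controller has to track on a fully loaded DDR4 channel. It is an illustration of the problem, not Synopsys’s bank state management design; real controllers also track per-bank timing counters (tRCD, tRAS, tRP, tRC, tFAW and so on), which are omitted here.

```python
# Minimal sketch of the page-state bookkeeping for a heavily loaded DDR4 bus:
# 16 ranks x 4 bank groups x 4 banks = 256 banks, each of which may hold an
# open page that the controller must track (together with per-bank timing
# state, omitted here). This is an illustration, not Synopsys's design.

RANKS, BANK_GROUPS, BANKS = 16, 4, 4          # 2 LRDIMM slots x 8 logical ranks

open_page = {}   # (rank, bank_group, bank) -> open row; absent = bank precharged

def classify_access(rank, bg, bank, row):
    """Return the command sequence a request needs, open-page-policy style."""
    key = (rank, bg, bank)
    if key not in open_page:
        open_page[key] = row
        return "page empty: ACTIVATE then READ/WRITE"
    if open_page[key] == row:
        return "page hit: READ/WRITE immediately"
    open_page[key] = row
    return "page miss: PRECHARGE, ACTIVATE, then READ/WRITE"

print(RANKS * BANK_GROUPS * BANKS, "banks of state to track")
print(classify_access(rank=5, bg=2, bank=3, row=0x1A2B))   # empty
print(classify_access(rank=5, bg=2, bank=3, row=0x1A2B))   # hit
print(classify_access(rank=5, bg=2, bank=3, row=0x0F00))   # miss
```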

Finally, on RAS features: one of the new features we’ve announced is advanced symbol-based error correction (ECC), which allows the DRAM controller to reconstruct all of the data in a DRAM device should that device fail to operate. Our technique uses standard 72-bit-wide RDIMMs or LRDIMMs to achieve this function, making it easy to integrate into almost any system. The advanced ECC complements our already strong RAS features, like the Command/Address Parity function with Retry, all of which work together to improve the uptime of your datacenter, cloud, and enterprise solutions.
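
The exact code we use is beyond the scope of a blog post, but the flavor of device-failure recovery on a standard 72-bit DIMM can be shown with a much simpler scheme: treat each x4 DRAM’s 4-bit slice of the bus as a symbol and keep one redundancy symbol that is the XOR of the others, so a known-failed device can be rebuilt from the survivors. Real symbol-based codes (Reed-Solomon style, as used in chipkill-class ECC) are stronger, since they can also locate the failing device and correct additional errors; the sketch below only illustrates the reconstruction idea.

```python
# Simplified illustration of rebuilding a failed DRAM device's data on a
# 72-bit-wide DIMM made of eighteen x4 devices. Here 17 devices carry data
# nibbles and 1 carries an XOR "check" nibble; a known-failed device can be
# reconstructed from the others. Real symbol-based ECC (e.g. Reed-Solomon)
# is stronger: it also locates the failed device and corrects other errors.

import random

DEVICES = 18            # eighteen x4 DRAMs across a 72-bit RDIMM/LRDIMM
random.seed(1)

data_nibbles = [random.randrange(16) for _ in range(DEVICES - 1)]
check_nibble = 0
for n in data_nibbles:
    check_nibble ^= n
bus_beat = data_nibbles + [check_nibble]        # one 72-bit beat on the bus

failed_device = 7                               # pretend this DRAM died
received = list(bus_beat)
received[failed_device] = None                  # its lane returns nothing useful

# Reconstruct the missing nibble as the XOR of every surviving nibble.
rebuilt = 0
for i, n in enumerate(received):
    if i != failed_device:
        rebuilt ^= n

assert rebuilt == bus_beat[failed_device]
print(f"device {failed_device} rebuilt: 0x{rebuilt:X}")
```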

Want to know more? Please click here to visit my recent webinar on DDR4 for Enterprise Applications!

Marc

Posted in DDR Controller, DDR4, DIMM, IP, Signal Integrity | 2 Comments »

Apple iPhone 6S: LPDDR4 arrives at Apple

Posted by Marc Greenberg on September 29th, 2015

As reported by Chipworks last week, the Apple iPhone 6S is using 2GB of LPDDR4 DRAM. This means the iPhone 6S now joins phones such as the LG Gflex2, the Samsung Galaxy S6, the Xiaomi Mi Note Pro, the HTC One M9, and several others in using LPDDR4 DRAM.

The decision to use LPDDR4 appears to be a costly one. Teardown.com reports that LPDDR4 was a significant cost adder to the iPhone 6S, raising the cost of DRAM from $4.50 for the 1GB of LPDDR3 used in the iPhone 6 to $16 for the 2GB of LPDDR4 in the iPhone 6S.

I expect the benefit of adding LPDDR4 to be substantial. The LPDDR4 part used is a Micron MT53B256M64D2NL if you read the Chipworks report, an unnamed Samsung part if you read the Teardown.com report or a Samsung K3RG1G10BM-BGCH if you read the iFixit report. A little sleuthing on the Micron website using the FBGA part number decoder and part numbering guide shows this to be a 2-rank LPDDR4 part, 64-bits wide (2 16-bit channels per die) capable of 1600MHz operation (3200MT/s data rate per pin) and a total of 2GB of RAM.

Generally, we expect much of the system improvement in the iPhone 6S to come from the doubling of DRAM capacity and from the substantially higher memory bandwidth of LPDDR4, which has about 70% more peak bandwidth than the LPDDR3 device it replaces in the iPhone 6.
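
As a rough sanity check on that 70% figure, here is the peak-bandwidth arithmetic. The LPDDR4 numbers come from the part decode above; the LPDDR3-1866 speed for the iPhone 6 is my assumption (Apple doesn’t publish it), chosen because it is consistent with the ~70% figure.

```python
# Peak-bandwidth arithmetic behind the "about 70% more" claim. The LPDDR4
# figures come from the Micron part decode above (3200MT/s, 2x16-bit channels
# per die, 64 bits total); the LPDDR3-1866 speed for the iPhone 6 is my
# assumption, chosen because it matches the ~70% figure.

def peak_bw_gbps(mt_per_s, bus_width_bits):
    """Peak bandwidth in GB/s: transfers per second x bytes per transfer."""
    return mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

lpddr4 = peak_bw_gbps(3200, 64)     # iPhone 6S: 25.6 GB/s
lpddr3 = peak_bw_gbps(1866, 64)     # iPhone 6 (assumed speed): ~14.9 GB/s

print(f"LPDDR4: {lpddr4:.1f} GB/s, LPDDR3: {lpddr3:.1f} GB/s, "
      f"gain: {100 * (lpddr4 / lpddr3 - 1):.0f}%")   # ~71% more peak bandwidth
```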

I am looking forward to getting my hands on one…

Posted in DDR Controller, DDR PHY, LPDDR3, LPDDR4 | Comments Off

3D XPoint Technology: More details revealed

Posted by Marc Greenberg on August 19th, 2015

There was a huge announcement about 3D XPoint(tm) technology about 3 weeks ago – but without many details. I’m at the IDF2015 conference in San Francisco this week, and we learned a lot more.

During the big keynote presentation at IDF, 3D XPoint(tm) was discussed for about 7 minutes, with information we had heard before – 1000x faster than NAND flash, more dense than DRAM – and then we saw a live demo of an SSD containing the 3D XPoint technology. The 3D XPoint SSD gave about 7X more IOPS than the fastest NAND SSD… They also announced that the name for products based on 3D XPoint is “Optane”. Finally, there was a surprise announcement that Optane products would be available in 2016 in SSD format as well as in a DDR4 DIMM format.

The DIMM form factor is not a total industry secret… one of my contacts at an OEM said he had been discussing it with Micron for some time, but the NDAs are holding up pretty well…

Then we got more details in a packed session at the end of the day, where Rob Crooke (SVP & GM of Memory Solutions Group) and Al Fazio (Senior Fellow and Director, Memory Technology Development) discussed the 3D XPoint technology. While some of it was scripted, much of the useful information came in about 30 minutes at the end where they answered MANY questions from the floor (which are not in the slides, of course). You can get the presentation of the scripted part from http://www.intel.com/idfsessionsSF by searching for session SPCS006. Most of the questions from the floor were artfully crafted to find out more without asking direct questions that couldn’t be answered.

The details below assume you know a little something about this technology already. If you want to understand the basics, there’s a great infographic at http://www.intelsalestraining.com/memorytimeline/

I’m going to repeat the information as I heard it, without speculation. If you want speculation or analysis, come talk to me personally…

The session started with a quote from von Neumann in 1946 in which he predicted a tiered memory system. XPoint seems to be an obvious new tier in the memory hierarchy for certain applications.

Manufacturing Technology:
- Rob and Al showed a wafer of 128Gb on 20nm process and said there is working silicon (and a working SSD was shown in the keynote). The dies on the wafer appeared to be about the maximum size for a die, probably 20-25mm from where I was sitting.
- The memory is built above the metallization layer. There is a switch element and a storage element between alternating metallization layers. Sense amps and other structures are in the silicon below.
- Because the memory is all above the metallization layer, one could theoretically have logic (i.e., a CPU) in the silicon under XPoint. But since the memory is using all the metal layers, there is no benefit in using the underlying silicon because you can’t connect it to anything.
- How many decks (or layers) are there on the die? The first generation is 2 decks. Future will be more decks. They said an economical number will be 4-8 decks.
- It shrinks lithographically, which means that Moore’s Law can continue.

Use in the system:
- Earlier in the day during the keynote presentation, it was announced that there would be both SSDs and DIMMs based on the 3D XPoint technology
- 3D XPoint SSDs will use NVMe as the interface technology. They claim the whole reason they pushed the transition from SATA/SAS to NVMe was that they knew the 3D XPoint technology was coming…
- In the keynote, they showed 7X system performance by replacing a very fast NAND SSD with XPoint. In this session they answered, “why only 7X?” The answer is Amdahl’s law – by increasing the speed of SSD accesses, they moved the bottleneck to somewhere else in the system. So to take full advantage of 3D XPoint technology, they need to move it to the DRAM bus; that’s why there are DIMMs. (A quick worked example of Amdahl’s law appears after this list.)
- The next-generation Xeon (Gen 8) processor will support 3D XPoint / Optane on the DDR bus in 2016. There will be a multi-tiered memory system on the DRAM bus: one DDR4 DIMM used as a write-back cache and one DIMM of 3D XPoint (i.e., 2 DIMMs per channel), providing 4X the system memory capacity at lower cost than DRAM. They don’t anticipate that it will interfere with the performance of the DRAM.
- The technology requires an optimized memory controller when used on the DRAM bus in DIMM form.
- Latency is still ~10X greater than DRAM… but ~1000X better than NAND Flash (directly). In an SSD, expect ~10X latency improvement compared to NAND SSD.
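
As promised above, here is a quick Amdahl’s law sanity check on the “why only 7X” question. The time splits are purely illustrative; Intel gave no such breakdown.

```python
# Amdahl's law sketch of "why only 7X": if storage accesses were, say, 90% of
# the benchmark's time (an illustrative split; Intel gave no such number) and
# become 1000X faster, the whole-system speedup is capped by the other 10%.

def amdahl_speedup(accelerated_fraction, acceleration):
    return 1.0 / ((1 - accelerated_fraction) + accelerated_fraction / acceleration)

for frac in (0.99, 0.90, 0.875):
    print(f"storage = {frac:.1%} of time -> system speedup "
          f"{amdahl_speedup(frac, 1000):.1f}X")
# 99.0% -> ~91X, 90.0% -> ~9.9X, 87.5% -> ~7.9X: the bottleneck moves elsewhere,
# which is why putting 3D XPoint on the DRAM bus is the logical next step.
```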

Value proposition for 3D XPoint:
- Claiming ~10x larger latency than DRAM and ~10X capacity compared to DRAM (therefore, larger and theoretically less expensive than DRAM – but nobody is claiming this will be as cheap as NAND Flash)
- Claiming ~1000X write endurance improvement over NAND Flash (therefore, much slower to wear out, can be used in different applications than NAND)
- Random Access like DRAM (unlike block access of NAND)
- Non-Volatile like Flash

Theory of operation and other questions that people had:
- Unlike other memories which generally rely on stored charge, 3D XPoint relies on bulk properties of the material between the word and bit lines.
- It has some similarity to ReRAM, in that it changes resistance. But it is not filamentary – ReRAM is filamentary. This is bulk property of the material that changes.
- The XPoint technology is capable of MLC operation, but first generation is not MLC. Need to get manufacturing variability resolved before MLC comes into the picture. It took many years to go from SLC NAND to MLC NAND.
- Retention is measured in years.
- There are temperature ranges, they won’t say what, but it will be “normal limits”.
- The devices can be re-flow soldered.
- It has predictable, low latency. The write latency is deterministic. Reads are also deterministic. (Like DRAM, unlike NAND)
- Electrically and Physically compatible with DDR4, but requires a new controller.
- Both read and write are 1000X faster than NAND
- It does require wear leveling
- Ultimately the 3D XPoint technology could go into the embedded space. M.2 SSD Form Factor is good for embedded. Long term the 3D Xpoint could be provided in BGA form factor.
- No additional ECC required.

And finally, that amazingly prescient quote from the year 1946:

“Ideally one would desire an indefinitely large memory capacity such that any particular … word would be immediately available. … It does not seem possible physically to achieve such a capacity. We are therefore forced to recognize the possibility of constructing a hierarchy of memories, each of which has greater capacity than the preceding but which is less quickly accessible.”
Preliminary Discussion of the Logical Design of an Electronic Computing Instrument
Arthur Burks, Herman Goldstine and John von Neumann, 1946

Once again, this is the information as I heard it. I’m not perfect. There are no guarantees on the technical specifications or the dates. I am specifically not commenting on Synopsys’s plans for this technology.

Posted in DDR Controller, DDR PHY, DDR4, DIMM, DRAM Industry, Uncategorized | 1 Comment »

Samsung DDR4-3DS 3D Stacked DIMMs using Through Silicon Vias (TSV)

Posted by Marc Greenberg on July 8th, 2015

3DS Package – Concept View

It’s been about 9 months since I blogged on Samsung’s public roadmap and the fact that it carried some 3D Stacked DDR4 Devices using Through Silicon Vias (TSVs). Time for a quick update…

Samsung’s website indicates that the M393A8G40D40 64GB DDR4 DIMM with 3DS TSV is in Mass Production status. The datasheet for the DIMM gives a little more insight into what’s going on.

We do know that some of these devices are out there, as both Chipworks and Techinsights have sliced them up, X-rayed them, and generally exposed their secrets.

I’ve been looking for places where I could buy one of these DIMMs – to get an idea of the cost as much as anything – for a while. I was recently able to find an online price at Amazon.com for the 64GB TSV DIMM – only $1699.00 (with free shipping!). For comparison, on the same day, a similar but half-capacity 32GB Samsung DIMM based on non-stacked 8Gb DDR4 dies was $352.50 on Amazon.com. Prices may fluctuate by the time you read this, of course. The short summary is that the TSV devices offer 2X the capacity at over 4X the cost (on the day I looked).
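
Using the prices I saw that day, the per-gigabyte premium works out roughly like this (the numbers are the ones quoted above and will certainly have moved by the time you read this):

```python
# Price-per-gigabyte comparison using the Amazon.com prices quoted above
# (prices will certainly have changed by the time you read this).

tsv_dimm   = {"capacity_gb": 64, "price_usd": 1699.00}   # M393A8G40D40, 3DS TSV
plain_dimm = {"capacity_gb": 32, "price_usd": 352.50}    # non-stacked 8Gb-die DIMM

tsv_per_gb   = tsv_dimm["price_usd"] / tsv_dimm["capacity_gb"]      # ~$26.5/GB
plain_per_gb = plain_dimm["price_usd"] / plain_dimm["capacity_gb"]  # ~$11.0/GB

print(f"TSV DIMM: ${tsv_per_gb:.2f}/GB, standard DIMM: ${plain_per_gb:.2f}/GB, "
      f"premium: {tsv_per_gb / plain_per_gb:.1f}x per GB "
      f"({tsv_dimm['price_usd'] / plain_dimm['price_usd']:.1f}x total price)")
```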

I don’t want to give anyone the impression that this price differential on TSV stacked devices will exist forever. In fact, I recently blogged that the cost/benefit on 3D Stacked HBM devices is almost balanced. The DDR4 3DS Devices are among the first of their kind and carry a premium that may be as much to do with their rarity as their cost of production.

So why would anyone consider these TSV devices? Well, there’s a few good reasons:
– Building the highest capacity servers starts with the highest capacity DIMMs. If your DIMM sockets are already ‘maxed out’ with non-stacked x4 DIMMs carrying 8Gb dies, then ‘the only way is up’.
– Providing more capacity in less unit volume compared to adding more packages or more DIMMs. This can be critical for devices like enterprise-class SSDs where PCB area and volume are at a premium.
– Potentially improved performance compared to multirank solutions. The structure of 3DS devices means that there is the potential for intra-stack operations to happen with less delay than inter-rank operations. This can improve performance in some applications (in-memory computing, for example).
– Adding capacity without adding bus loading. You may not have ‘maxed out’ the bus, but you may want to add capacity without adding additional bus load. Reducing load on the bus will tend to decrease the power substantially and increase the maximum achievable frequency in that system. A key feature of the 3DS packages is that they present a single load to the bus regardless of how many dies are in the stack.

It’s that last point that confuses a lot of people. How can you have a stack of 4 dies that presents only one load to the bus? The picture above helps to explain. Inside the DDR4 3DS package, there is typically only one physical interface (clock, command, address, data, data strobes, etc.) on a master die that is connected to the outside of the package, and all the DRAM traffic to the master die and to all of the slave dies inside the package goes through that one physical interface on the master die. Inter-die communication within the stack, from the master to the slaves, is carried on the through silicon vias (TSVs) through the stack.
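
Here is a toy model to make the “many dies, one load” point concrete. DDR4 3DS parts select a die within the stack using the chip-ID signals (C0–C2); everything below the package pins in this sketch is conceptual shorthand, not a description of the real master/slave signalling.

```python
# Toy model of a DDR4-3DS stack: however many dies are inside, the bus sees a
# single electrical load (the master die), and the chip-ID bits C[2:0] select
# which die in the stack a command targets. Conceptual only; the real
# master/slave signalling over the TSVs is internal to the package.

class DDR4_3DS_Package:
    def __init__(self, dies_in_stack):
        assert dies_in_stack in (2, 4, 8), "3DS stacks are 2-, 4- or 8-high"
        self.dies = [f"die{i}" for i in range(dies_in_stack)]  # die0 = master

    @property
    def loads_on_bus(self):
        return 1       # only the master die's physical interface touches the bus

    def route_command(self, chip_id, command):
        # The master die receives every command and forwards it over the TSVs
        # to the die selected by the chip-ID bits.
        target = self.dies[chip_id]
        return f"master receives '{command}', forwards to {target} via TSVs"

stack = DDR4_3DS_Package(dies_in_stack=4)
print("loads presented to the bus:", stack.loads_on_bus)
print(stack.route_command(chip_id=2, command="ACTIVATE row 0x3FF"))
```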

So there you have it: DDR4 3DS Devices – increased DRAM capacity without increasing PCB area or bus loading, at a price.

Posted in DDR4, DIMM, DRAM Industry, Uncategorized | Comments Off

Do you need DDR4 Write CRC?

Posted by Marc Greenberg on June 24th, 2015

A customer asked us, “Do I need DDR4 write CRC beyond a certain frequency?”

The answer is far from simple; it’s dependent on many factors including the type of system it is, the other types of error correction (ECC) that may be in use, the system’s tolerance of errors, and the system’s ability to spare the bandwidth required for the write CRC function. Since I’ve been asked a few times and since the answer is so complex, I created the flowchart here to show some paths through the possible choices.

Write CRC was added to the JEDEC standard for DDR4 (JESD79-4); it is the first time that DDR has had any kind of function like this. The basic premise is that the SoC memory controller generates a Cyclic Redundancy Check (CRC) code from the data transmitted in a write burst and then transmits that CRC code following the data. The CRC code received from the memory controller is not stored in the DRAM; rather, the DRAM checks the CRC code it received against the data it received. If a mismatch is detected, the DRAM asserts the ALERT_n pin shortly after the write, indicating that a problem has occurred. The system may then choose to retransmit the data or follow some error recovery procedure (Synopsys’s uMCTL2 memory controller can automatically retry the write transaction).
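
To make the mechanism concrete, here is a rough model of the per-write check. I believe JESD79-4 specifies the 8-bit polynomial x^8 + x^2 + x + 1; the exact mapping of data, DM bits and CRC bits onto the extended 10-UI burst is simplified away here, so treat this as a conceptual model rather than a spec-accurate one.

```python
# Conceptual sketch of DDR4 write CRC: the controller computes an 8-bit CRC
# over the write burst data and sends it after the data; the DRAM recomputes
# the CRC on what it actually received and pulls ALERT_n low on a mismatch.
# I believe the DDR4 polynomial is x^8 + x^2 + x + 1; the bit ordering and the
# mapping onto the 10-UI burst are simplified here.

def crc8(data: bytes, poly: int = 0x07) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def controller_write(burst: bytes):
    return burst, crc8(burst)                 # data followed by CRC on the bus

def dram_receive(burst: bytes, crc: int) -> bool:
    alert_n_asserted = crc8(burst) != crc     # DRAM recomputes and compares
    return alert_n_asserted

burst = bytes(range(64))                      # a BL8 burst on a x64 interface
data, crc = controller_write(burst)

corrupted = bytearray(data); corrupted[10] ^= 0x04   # single bit flipped in flight
print("clean write, ALERT_n asserted?    ", dram_receive(data, crc))              # False
print("corrupted write, ALERT_n asserted?", dram_receive(bytes(corrupted), crc))  # True
```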

Write CRC can consume up to 25% of the total write bandwidth in the system, making it a rather “expensive” function. Many people wonder if it’s worth it. There’s a much longer discussion required on why and how it’s been implemented, but instead of making a really long blog post, here is the summary in “a picture is worth a thousand words” format! Please click on the image to fully expand it.

For more information on DDR4 RAS (Reliability, Availability, Serviceability) topics, please check out my whitepaper at https://www.synopsys.com/dw/doc.php/wp/ddr_ras_for_memory_interfaces_wp.pdf

Do you need Write CRC for DDR4?

Posted in DDR Controller, DDR4, DRAM Industry, Signal Integrity, Uncategorized | Comments Off

AMD GPUs using new HBM DRAM (and the cost/benefit appears balanced!)

Posted by Marc Greenberg on June 17th, 2015

AMD announced yesterday that their new line of GPUs uses the new HBM (High Bandwidth Memory) DRAM technology. I have known these were coming for a while, but the thing that surprised me the most was the relatively reasonable cost for the performance that they deliver – at least, the relationship between cost and benefit of adding HBM to the system appears to be almost linear.

The high-end GPU using HBM, the Radeon R9 Fury X, has a recommended price of $649 and has 512GB/s of DRAM bandwidth to 4GB of HBM DRAM connected to 4096 stream processing units. (source)

The nearest GDDR5-based system, the Radeon R9 390X, has a recommended price of $429 and has 384GB/s of DRAM bandwidth to 8GB of GDDR5 DRAM connected to 2816 stream processing units. (source)

So the Radeon R9 Fury X has about 33% more memory bandwidth and 45% more stream processing units than the 390X for about 50% more recommended retail cost. I am assuming the AMD engineers did their homework to balance the number of stream processing units with the bandwidth available, and therefore we could assume that they are using the available bandwidth from the HBM memory more efficiently than they did with GDDR5. Yes, there is half the amount of DRAM capacity available, but in general GPU applications are more limited by bandwidth than capacity, and the 8GB of DRAM in the R9 390X may be partially unused simply because that’s how much capacity they needed to buy to get the number of pins required to transmit the 384GB/s of bandwidth in the 390X. We’ll need to look at the relative performance analyses that come out from the gamer labs across the internet, but on the face of it, it looks like there is a pretty linear relationship between cost and benefit when adding the HBM technology to the system.
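
For anyone who wants to check the ratios, here is the arithmetic using the recommended prices and headline specs quoted above:

```python
# The ratios behind the "almost linear" cost/benefit observation, using the
# recommended prices and headline specs quoted above.

fury_x  = {"price": 649, "bw_gbps": 512, "stream_units": 4096, "dram_gb": 4}  # HBM
r9_390x = {"price": 429, "bw_gbps": 384, "stream_units": 2816, "dram_gb": 8}  # GDDR5

for key, label in [("bw_gbps", "memory bandwidth"),
                   ("stream_units", "stream processors"),
                   ("price", "recommended price")]:
    gain = 100 * (fury_x[key] / r9_390x[key] - 1)
    print(f"{label}: +{gain:.0f}%")
# memory bandwidth: +33%, stream processors: +45%, recommended price: +51%;
# roughly proportional, which is what "cost/benefit appears balanced" means here.
```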

Then there’s the issue of heat. The Fury X is capable of doing more work than the 390X and therefore you might expect it to get hotter. To that end, AMD’s specification sheet says that the Fury X is liquid cooled. If you’ve heard me talk about heating in DRAM devices before, then you’ve heard my super-secret retirement plan: I’ll retire wealthy as soon as I invent a DRAM that works better when it’s hot!

I have had concerns about HBM in the past for the reason that DRAMs don’t like to be hot, and one of the last places I would choose to put a DRAM device would be in close thermal proximity to a high-performance computing element. Unfortunately, the nature of HBM – and one of the reasons it can provide so much bandwidth – is its requirement to be placed close to the computing element it serves. So it appears that AMD has addressed this with liquid cooling, and some of the cost of the Radeon R9 Fury X may be due to the liquid cooling system rather than the cost of the memory.

Finally, it’s very important to note that this cost/benefit relationship applies to AMD (and specifically AMD GPUs) and not for every system out there – you couldn’t build HBM into a low-volume Enterprise product and expect the same cost/benefit. AMD can benefit from their consumer volume pricing on DRAM, their consumer-speed inventory turns, and they can amortize the NRE cost of the silicon interposer required for HBM across a large volume of devices. Someone building a lower volume product with longer inventory turns could expect a very different cost/benefit…

You can read the AMD press release here: http://www.amd.com/en-us/press-releases/Pages/new-era-pc-gaming-2015jun16.aspx

Posted in HBM, High Bandwidth Memory, HMC, Hybrid Memory Cube | Comments Off

Want to learn about DDR VIP?

Posted by Marc Greenberg on June 15th, 2015

Our friends in Synopsys’s Verification Group have been putting together an excellent set of memory Verification IP (VIP) for DDR4, DDR3, LPDDR4, and LPDDR3, which complements our other VIP for Flash, MIPI, PCIe, AMBA, Ethernet, HDMI, SATA, etc…

The verification folks are going on the road to tell people about our memory VIP in a series of seminars in Marlborough, Irvine, Mountain View, Austin, and Phoenix (June 2015) and Herzelia (July 2015).

This is a great, hands-on way to learn about the memory checkers and monitors that are available, how to configure testbenches, and then how to debug and extract coverage. There’s even a free lunch!

Seating is limited and not everyone will be accepted so please visit the webpage at http://www.synopsys.com/Tools/Verification/FunctionalVerification/Pages/memory-vip-workshops.aspx for more information on how to sign up.

Posted in DDR Controller, DDR PHY, DDR3, DDR4, DRAM Industry, HBM, High Bandwidth Memory, HMC, Hybrid Memory Cube, LPDDR3, LPDDR4 | Comments Off

LPDDR4 is here! Samsung Galaxy S6 and LG Gflex 2 released

Posted by Marc Greenberg on April 10th, 2015

Faster than most people expected, LPDDR4 is here and shipping in two products!

LG launched the LG Gflex2 Phone powered by the Qualcomm Snapdragon S810 processor with LPDDR4 DRAM in Korea earlier this year, followed by a global rollout in February and March.

Samsung made a big event of the launch of the Galaxy S6 today (April 10th, 2015) making the S6 and S6 Edge available at multiple US and international retailers simultaneously. The Galaxy S6 is based around Samsung’s own Exynos 7420 application processor and LPDDR4 DRAM.

Both appear to be using dual-die or quad-die LPDDR4 packages in a “2×32” (two 32-bit channels) configuration – one of the configurations I have been suggesting in my webinar on LPDDR4, “What the LPDDR4 Multi-Channel Architecture Can Do for You”.

For those interested in memory, this is way ahead of the curve. I had predicted (in an internal email to my colleagues dated September 2013) that the first LPDDR4 product would be something called a Samsung Galaxy S6 in September 2015. At the time, I think people thought I was a bit too aggressive with that prediction. I repeated that prediction right around the time of this blog entry last year. Last year, Graham also commented on how quickly the LPDDR4 standard had been published in comparison to other JEDEC DRAM standards. It turns out that my prediction of the first shipping product in September 2015 was not aggressive enough – it’s April and we have two!

Congratulations to LG and Samsung for getting these products out; there must have been many technical hurdles to clear in reaching this impressive achievement.

Anyone want to take bets on the first LPDDR5 product…?

Posted in DDR Controller, DDR PHY, DDR4, DRAM Industry, Low Power, LPDDR4, Uncategorized | Comments Off

Row Hammering: What it is, and how hackers could use it to gain access to your system

Posted by Marc Greenberg on March 9th, 2015

I have written on the topic of Row Hammering in a White Paper I published last year (link here) but since it is in the spotlight recently I thought I’d dedicate a blog entry to it. I had never considered this to be a security hole until this morning.

This morning Google Project Zero – the same team that discovered the Heartbleed bug – published this blog entry, “Exploiting the DRAM rowhammer bug to gain kernel privileges”
The blog entry is very detailed so here’s a short summary:
- Some DDR devices have a property called “row hammer” that can cause some bits in DRAM to flip under certain conditions
- The conditions that cause row hammering are so rare in normal operation that nobody even knew it could happen until relatively recently
- Some researchers discovered ways of making row hammering bit flips happen more often
- Google Project Zero reported that user code with access only to unprotected regions of memory that are physically adjacent to protected regions may use row hammering to gain unprotected access to the whole memory
- Once a hacker has unprotected access to the whole memory, they can do pretty much anything they want with your system

Google has tried their technique on 29 machines and found that they could initiate bit flips on 15 of them with some software utilities they wrote to exploit row hammering.

Google may have already patched the Chrome browser to help prevent this issue.

What happens next? Well, at a minimum, we’ll probably all need browser and operating system patches to prevent row hammering exploits. It may be possible to program the BIOS in your system to refresh the DRAM more often, which could help to reduce the probability that row hammering would work on your system (at the cost of more power usage and lower performance, though).
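
To put a rough number on that trade-off: the fraction of time a DRAM spends refreshing is approximately tRFC/tREFI, so doubling the refresh rate roughly doubles the overhead. The tRFC value below is a typical datasheet figure for an 8Gb device, not a measurement of any particular system.

```python
# Rough cost of refreshing more often: the fraction of time the DRAM is busy
# refreshing is approximately tRFC / tREFI. The tRFC below is a typical
# datasheet value for an 8Gb device; your DRAM will differ.

tRFC_ns = 350.0            # time one refresh command occupies the device
tREFI_ns = 7800.0          # normal average refresh interval (7.8us)

for divider in (1, 2, 4):                      # 1x, 2x, 4x refresh rate
    overhead = tRFC_ns / (tREFI_ns / divider)
    print(f"{divider}x refresh rate: ~{100 * overhead:.1f}% of bandwidth lost to refresh")
# ~4.5%, ~9.0%, ~17.9%: refreshing more often to fight row hammer is not free,
# which is why TRR inside the DRAM is the better long-term fix.
```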

Looking ahead to DDR4, row hammering may be a thing of the past. Samsung announced in May 2014 that their DDR4 memory would not be susceptible to row hammering because they implement Targeted Row Refresh (TRR) – the cure for row hammering – inside their devices, and Micron’s datasheets say, “Micron’s DDR4 devices automatically perform TRR mode in the background.” There’s some evidence that next-generation CPUs will either not be capable of issuing row hammering data patterns, or may mitigate them with TRR, or both.

As always, browse safely and keep your software up to date!

Some updates since I wrote this post:
– It appears that this may affect primarily consumer machines – those without ECC DRAM. It would be much harder to make this exploit workable with ECC DRAM used in servers and enterprise-class machines. It would be harder still to induce the error in machines supporting ECC patrol scrub. (Note: Synopsys’s uMCTL2 memory controller supports both ECC and ECC patrol scrub)
– Cisco published some useful information on how to mitigate the Row Hammer issue. In that blog entry, Cisco reports that Intel’s Ivy Bridge, Haswell, and Broadwell server chipsets support Target Row Refresh capability.
– IBM has published a list of their machines that are not affected by the issue.
– TechTarget quoted my blog in their report on the issue – an excellent article by Michael Heller.

Posted in DDR3, DDR4, DIMM, DRAM Industry, Signal Integrity, Uncategorized | 1 Comment »