2013-02-13

Agilent expert explains DDR4 timing

When it comes to memory, today’s computing platforms and embedded applications demand more: more capacity, more speed, and more efficiency. In 2000, JEDEC published the DDR1 DRAM standard, which featured a then-speedy top data-transfer rate of 200 MT/s. Fast-forward to September 2012 and the release of the hotly anticipated DDR4 DRAM standard (JESD79-4), which specifies a power-sipping DRAM with an initial transfer rate of up to 2.4 GT/s. Why bother to transition your design, you might ask, given that DDR3 devices operating at 2.1 GT/s are broadly available? The DDR4 operating voltage of 1.2 V, compared to 1.5 V for DDR3, is one good reason. Future scalability is another: DDR4 is the first DRAM standard compatible with 3-D architectures, and it currently boasts an end-goal transfer rate of 3.2 GT/s.

It’s not just the number that matters, though, but the way the standard approaches high-speed operation, says JEDEC board member Perry Keller, manager for the digital memory applications program at Agilent Technologies. He covered key aspects of DDR4 timing at the “Making DDR4 work for you” session at DesignCon 2013 (January 28-31; Santa Clara, CA). “We’ve got a broad population of folks who really haven’t had the time or the business need to learn about DDR4,” Keller says. “What we hope to do is familiarize them with DDR4: What it is, why it exists, what it can bring to their products, and how to do something practical with it.” Indeed, DDR4 brings a host of benefits, including dramatically reduced power demand and compatibility with 3-D architectures. It also corrects a key weakness of earlier versions.

The first three generations of DDR were based on a simplifying assumption: that meeting spec on key parameters like setup time (tDS) and hold time (tDH) was enough to guarantee perfect data capture and a zero bit-error rate (BER). Of course, when it comes to engineering, there’s no such thing as perfect or zero, just good enough for practical purposes. For the initial generations of DDR, random jitter was sufficiently low that adding a small error margin was enough to ensure adequate performance; at the DDR1 rate of 200 MT/s, for example, a standard 50-ps margin accounts for 4.17% of the 1200-ps data valid window. At 1.6 GT/s, the initial end-goal transfer rate for DDR3, random jitter increases substantially and that same margin consumes roughly 40% of the data valid window. As a result, increasing speeds forced designers to compensate by over-engineering controllers, DRAMs, and systems, adding cost, engineering time, and complexity.
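The ratios above are easy to verify. The following is a minimal back-of-the-envelope sketch: the 1200-ps window at 200 MT/s is the figure quoted above, while the DDR3-era window is an assumption inferred from the quoted 40% figure rather than a JEDEC-specified number.

```python
# Back-of-the-envelope check of the margin figures quoted in the article.
# The 1200-ps data valid window at 200 MT/s comes from the article; the
# DDR3-era window is an assumption, back-calculated from the quoted 40%
# figure (50 ps / 0.40 = 125 ps), not a JEDEC-specified value.
MARGIN_PS = 50  # fixed error margin cited in the article

data_valid_windows_ps = {
    "DDR1 @ 200 MT/s": 1200,  # from the article
    "DDR3 @ 1.6 GT/s": 125,   # assumed, back-calculated from 40%
}

for label, window_ps in data_valid_windows_ps.items():
    fraction = MARGIN_PS / window_ps
    print(f"{label}: 50-ps margin = {fraction:.2%} of the data valid window")
```

Running the sketch prints 4.17% for DDR1 and 40.00% for DDR3, matching the figures in the text.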

When the JEDEC committees began to craft the DDR4 specification, they knew they had to approach the problem differently. “DDR4 has probably taken one of the biggest steps in that it’s really moved into what we would call a microwave/digital speed domain,” says Keller, who chairs JEDEC’s validation committee for digital logic (JC-40.5) and the Universal Flash Storage measurement committee (JC-64.5). “It requires a similar evolution in thinking about how the controller PHY is developed, how the system is designed, and how people validate and measure system performance.”

The DDR4 standard addresses the issue of random and deterministic jitter up front. Instead of defining tDS and tDH, it specifies the data valid window. The standard defines an eye mask that scales with the target BER. It’s a common approach for many other high-speed applications, but it represents a new way of thinking for the memory community. “The basic measurement platforms have the capability we need. What’s really changing is some of the theory,” says Keller. “We’re taking concepts that have been applied in high-speed RF/microwave design and translating them into things that make sense in the memory world so that it’s practical to implement and testable on the line.”
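To make the BER-scaling idea concrete, here is a minimal sketch using the dual-Dirac jitter model common in high-speed serial work: the lower the target BER, the more of the unit interval the random-jitter tails consume, so the usable eye shrinks. It is illustrative only; the deterministic and random jitter numbers are invented, and JESD79-4 defines its actual mask and measurement conditions in its own terms.

```python
# Illustrative only: a dual-Dirac-style estimate of how much of a unit
# interval survives at a given target BER. The jitter numbers below are
# invented for illustration and are not DDR4 specification values.
from statistics import NormalDist

def eye_width_ps(ui_ps, dj_ps, rj_rms_ps, ber):
    """Timing eye width remaining after subtracting total jitter at the target BER."""
    q = -NormalDist().inv_cdf(ber)               # Gaussian Q factor for this BER
    total_jitter_ps = dj_ps + 2 * q * rj_rms_ps  # dual-Dirac Tj(BER)
    return ui_ps - total_jitter_ps

ui_ps = 1e6 / 2400  # unit interval at 2.4 GT/s, roughly 417 ps
for ber in (1e-9, 1e-12, 1e-16):
    width = eye_width_ps(ui_ps, dj_ps=80, rj_rms_ps=5, ber=ber)
    print(f"BER {ber:g}: eye width ~ {width:.0f} ps")
```

With these assumed numbers the eye narrows from roughly 277 ps at a 1e-9 target to about 254 ps at 1e-16, which is why a mask tied to an explicit BER is more meaningful at DDR4 speeds than fixed setup and hold limits.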

Right now, DDR3 DRAM dominates the market, but look for that to change faster than you might imagine. The time it takes a generation of DRAM to move from initial release through to embedded designs has shrunk dramatically over the past decade. Up until DDR3, the transition from server to set-top box designs incorporating the next generation of DRAM took anywhere from 18 to 24 months. For DDR3, the shift took less than a year. Based on attendance at events like the DesignCon DDR4 forum and the JEDEC workshops, Keller expects DDR4 to make the move even faster. “Most of these guys coming into these workshops are high-end embedded systems people designing routers, network switches, set-top boxes, and battery-operated devices. They want to understand what’s coming. Even though DDR4 isn’t going to be used in mobile systems, it is going to be used in a lot of embedded systems and probably more quickly than with any previous generation.”
Tags: DDR3, DDR4, DRAM, memory, test equipment, jitter, JEDEC, Memory Designline
