When JEDEC released its DDR5 specification (JESD79-5) back in 2020, the standards-setting organization defined precise specs for modules with speed bins of up to 6400 MT/s, while leaving the spec open to further expansion with faster memory as the technology progressed. Now, a bit more than three-and-a-half years later, the standards body and its members are gearing up to release a faster generation of DDR5 memory, which is being laid out in the newly updated JESD79-5C specification. The latest iteration of the DDR5 spec defines official DDR5 timing specifications up to 8800 MT/s, and adds some new security-focused features as well.

Diving in, the new specification outlines settings for memory chips (on all types of memory modules) with data transfer rates up to 8800 MT/s (AKA DDR5-8800). This suggests that all members of the JEDEC committee that sets the specs for DDR5 (including memory chip makers and memory controller designers) agree that DDR5-8800 is a viable extension of the DDR5 specification from both a performance and a cost point of view. Meanwhile, the addition of higher speed bins is perhaps enabled in part by another feature introduced in this latest specification: Self-Refresh Exit Clock Sync, which optimizes I/O training.

JEDEC DDR5-A Specifications

Speed Bin      Data Rate (MT/s)   CL-tRCD-tRP (cycles)   Absolute Latency (ns)   Peak Bandwidth (GB/s)
DDR5-3200 A    3200               22-22-22               13.75                   25.6
DDR5-3600 A    3600               26-26-26               14.44                   28.8
DDR5-4000 A    4000               28-28-28               14.00                   32.0
DDR5-4400 A    4400               32-32-32               14.55                   35.2
DDR5-4800 A    4800               34-34-34               14.17                   38.4
DDR5-5200 A    5200               38-38-38               14.62                   41.6
DDR5-5600 A    5600               40-40-40               14.29                   44.8
DDR5-6000 A    6000               42-42-42               14.00                   48.0
DDR5-6400 A    6400               46-46-46               14.38                   51.2
DDR5-6800 A    6800               48-48-48               14.12                   54.4
DDR5-7200 A    7200               52-52-52               14.44                   57.6
DDR5-7600 A    7600               54-54-54               14.21                   60.8
DDR5-8000 A    8000               56-56-56               14.00                   64.0
DDR5-8400 A    8400               60-60-60               14.29                   67.2
DDR5-8800 A    8800               62-62-62               14.09                   70.4

When it comes to the JEDEC standard for DDR5-8800, it sets relatively loose timings of CL62-62-62 for A-grade devices and CL78-77-77 for lower-end C-grade ICs. Unfortunately, the physics underlying DRAM cells have not improved much over the last couple of years (or decades, for that matter), so memory chips must still operate with similar absolute latencies, driving up the relative CAS latency. In this case 14 ns remains the gold standard, with CAS latencies at the new speeds being set to hold absolute latencies around that mark. But in exchange for waiting a bit longer (in terms of cycles) for a result, the new spec improves the standard's peak memory bandwidth by 37.5% over DDR5-6400.
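For those curious how the table's figures hang together, they follow directly from the data rate and CAS latency: absolute latency is the CAS cycle count divided by the memory clock (which runs at half the transfer rate), while peak bandwidth is the transfer rate multiplied by a module's 8-byte (64-bit) data width. The short Python sketch below is our own back-of-the-envelope illustration rather than anything from the spec (the helper names are ours); it reproduces the DDR5-6400 and DDR5-8800 rows and the 37.5% bandwidth uplift:

```python
# Back-of-the-envelope DDR5 timing math (illustrative sketch only;
# the function names are our own, the speed/CL pairs come from the table above).

def absolute_latency_ns(data_rate_mts: float, cas_cycles: int) -> float:
    """CAS latency in nanoseconds: cycles divided by the memory clock."""
    memory_clock_mhz = data_rate_mts / 2           # DDR transfers twice per clock
    return cas_cycles / memory_clock_mhz * 1000    # cycles / MHz -> ns

def peak_bandwidth_gbs(data_rate_mts: float, bus_width_bits: int = 64) -> float:
    """Peak bandwidth of one 64-bit module (two 32-bit DDR5 subchannels)."""
    return data_rate_mts * (bus_width_bits / 8) / 1000   # MT/s x bytes -> GB/s

for rate, cl in [(6400, 46), (8800, 62)]:
    print(f"DDR5-{rate}: {absolute_latency_ns(rate, cl):.2f} ns, "
          f"{peak_bandwidth_gbs(rate):.1f} GB/s")
# DDR5-6400: 14.38 ns, 51.2 GB/s
# DDR5-8800: 14.09 ns, 70.4 GB/s  (70.4 / 51.2 = 1.375, i.e. 37.5% more)
```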

These, of course, are just the timings set in the JEDEC specification, which are primarily of concern to server vendors. So we'll have to see just how much harder consumer memory manufacturers can push things for their XMP/EXPO-profiled memory. Extreme overclockers are already hitting speeds as high as 11,240 MT/s with current-generation DRAM chips and CPUs, so there may be some more headroom to play with in the next generation.

Meanwhile, on the security front, the updated spec makes a couple of changes that have seemingly been put in place to address rowhammer-style exploits. The big item here is Per-Row Activation Counting (PRAC), which, true to its name, enables DDR5 to keep a count of how often a row has been activated. Using this information, memory controllers can then determine if a memory row has been excessively activated and is at risk of causing a neighboring row's bits to flip, at which point they can back off to let the neighboring row properly refresh and the data re-stabilize.
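While we haven't seen how the specification implements the counters themselves, the behavior described amounts to little more than a per-row counter and a threshold. The Python sketch below is purely our own illustration of that idea, not JEDEC's mechanism; the row count, threshold value, and function names are all placeholder assumptions:

```python
# Illustrative sketch of per-row activation counting (PRAC-style logic).
# All constants here are placeholders, not values from the JEDEC spec.

ACTIVATION_THRESHOLD = 4096   # hypothetical activation limit before mitigation
NUM_ROWS = 65536              # hypothetical number of rows in a DRAM bank

activation_count = [0] * NUM_ROWS

def refresh_row(row: int) -> None:
    """Stand-in for the refresh command the memory controller would issue."""
    pass

def activate_row(row: int) -> None:
    """Count each activation; refresh the neighbors of an over-activated row."""
    activation_count[row] += 1
    if activation_count[row] >= ACTIVATION_THRESHOLD:
        # The aggressor row has been opened too many times: refresh the
        # physically adjacent (victim) rows before their charge decays.
        for victim in (row - 1, row + 1):
            if 0 <= victim < NUM_ROWS:
                refresh_row(victim)
        activation_count[row] = 0   # reset once the victims have been refreshed
```

In practice this bookkeeping happens in the hardware rather than in software, but the counting-and-back-off principle is the same.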

Notably here, the JEDEC press release doesn't use the rowhammer name at any point (unfortunately, we haven't been able to see the specification itself). But based on the description alone, this is clearly intended to thwart rowhammer attacks, since these normally operate by forcing a bit flip between refreshes through a large number of activations.

Digging a bit deeper, PRAC seems to be based on a recent Intel patent, Perfect Row Hammer Tracking with Multiple Count Increments (US20220121398A1), which describes a very similar mechanism under the name "Perfect row hammer tracking" (PRHT). Notably, the Intel patent calls out that this technique has a performance cost associated with it, because it increases the overall row cycle time. Ultimately, as the vulnerability underpinning rowhammer is a matter of physics (cell density) rather than logic, it's not too surprising to see that any mitigation comes with a cost.

The updated DDR5 specification also deprecates support for Partial Array Self Refresh (PASR) within the standard, citing security concerns. PASR is primarily aimed at power efficiency for mobile memory to begin with, and as a refresh-related technology, it presumably overlaps somewhat with rowhammer – be it as a means to attack memory, or as an obstruction to defending against rowhammer. Either way, with mobile devices increasingly moving to low-power optimized LPDDR technologies anyhow, the deprecation of PASR does not immediately look like a major concern for consumer devices.

Source: JEDEC

Comments

  • DougMcC - Monday, April 22, 2024

    Gotta get your rowhammer AI protection in asap. Otherwise skynet.
  • erotomania - Monday, April 22, 2024

    This is an informative news article. Thanks AT and A.S.
  • Igor_Kavinski - Monday, April 22, 2024

    YESS!!!

    Progress!
  • Threska - Tuesday, April 23, 2024

    DDR6 should already have it.
  • sheh - Monday, April 22, 2024

    "depreciate" should be "deprecate". Reply
  • Oxford Guy - Monday, April 22, 2024

    'Extreme overclockers are already hitting speeds as high as 11,240 MT/s with current-generation DRAM chips'

    With actual stability and without goofy things like nitrogen?
  • deil - Tuesday, April 23, 2024

    I believe highest was 10k on air, and above is with some chiller. Not sure, maybe I overslept something, but ddr5 is much better than ddr4 in that manner.
  • ZeDestructor - Tuesday, April 23, 2024

    looks like splitting the 64/72-bit bus into a pair of 32/40-bit buses and moving power to the DIMMs are paying back quite major dividends. I like this.
  • relaxedjeff - Tuesday, April 23, 2024

    "Using this information, memory controllers can then determine if a memory row has been excessively activated and is at risk of having its bits flipped, at which point they can back off to let the row properly refresh and the data re-stabilize."

    This statement should be
    "Using this information, memory controllers can then determine if a memory row has been excessively activated and is at risk of having its NEIGHBOR'S bits flipped, at which point they can back off to let the NEIGHBOR row properly refresh and the data re-stabilize."

    Access a row (doing an activate, read or write command, precharge) refreshes the row being accessed very reliably (which has always been the case for DRAM). However, activating row N can bleed/inject charge from bits stored in the surrounding rows (row N-1 and row N+1) and rowhammer is attacking those neighboring rows. The extra refreshes are not for row N but rows N-1 and N+1.
  • Ryan Smith - Tuesday, April 23, 2024

    You are correct, sir! That was a silly error on our part. Thank you!
