For any user interested in performance, memory speed is an important part of the equation when building a new system. That applies across the board, from integrated graphics throughput to gaming and prosumer environments such as finance or oil and gas. Opinions on memory speed tend to fall into two broad camps: those who say faster memory has no effect, and those who insist you ‘make sure you get at least XYZ’. Following on from our previous Haswell DDR3 scaling coverage, we have now secured enough memory kits to perform a thorough test of the effect of memory speed on DDR4 and Haswell-E.

DDR4 vs. DDR3

On the face of it, direct comparisons between DDR4 and DDR3 are difficult to make. During the switch from DDR2 to DDR3 there were some platforms that could use both types of memory, which let us test each in the same environment. The current situation with DDR4 limits users to the extreme platform only, where DDR3 is not welcome (except for a few high minimum-order-quantity SKUs that are rarer than hen's teeth). The platform dictates the memory compatibility, and the main characteristics of DDR4 are straightforward.

DDR4 brings to the table a lower operating voltage, down from 1.5 volts to 1.2 volts. This is the main characteristic touted by memory manufacturers and by those that use DDR4. It does not sound like a lot, especially when a Haswell-E system can easily draw anywhere from 300W to 1200W. The quoted numbers are a 1-2W saving per module, which for a fully laden home-user desktop might approach 15W of savings over DDR3 at the high end, but for a server farm with 1000 CPUs it becomes a 15kW saving, which adds up. The low-voltage specification also comes down, from 1.35 volts for DDR3L to 1.05 volts for DDR4L.
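
A quick back-of-the-envelope check of those figures, as a minimal Python sketch (the per-module saving is the quoted range; the eight-module count for a fully populated quad-channel board is an assumption):

```python
# Sketch of the quoted DDR4 power savings; figures are from the article, not measured.
saving_per_module_w = (1, 2)   # quoted 1-2 W saving per module vs DDR3
modules = 8                    # assumed: fully populated quad-channel Haswell-E desktop

per_system_w = tuple(s * modules for s in saving_per_module_w)
print(f"per system: {per_system_w[0]}-{per_system_w[1]} W saved")   # 8-16 W, ~15 W at the high end

# Scale the per-system high-end saving across a 1000-CPU server farm
farm_kw = per_system_w[1] * 1000 / 1000
print(f"1000 systems: roughly {farm_kw:.0f} kW saved")              # ~15-16 kW
```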

DRAM Comparison
         Low Voltage   Standard Voltage   Performance Voltage
DDR      1.80 V        2.50 V             -
DDR2     -             1.80 V             1.90 V
DDR3     1.35 V        1.50 V             1.65 V
DDR4     1.05 V        1.20 V             1.35 V

The lower voltage is also enhanced by voltage reference ICs placed before each memory chip to ensure that a consistent voltage is applied to each chip individually, rather than to the whole module at once. With DDR3, a single voltage source was applied across the whole module, which can cause a more significant voltage drop and affect stability. With the new design, any voltage drop is IC-dependent and can be corrected.

The other main adjustment from DDR3 to DDR4 is the rated speed. DDR3 JEDEC specifications started at 800 MT/s and moved through to 1600 MT/s, while some of the latest Intel DDR3 processors moved up to 1866 and AMD up to 2133. DDR4's initial JEDEC specification for most consumer and server platforms is set at 2133 MT/s, coupled with an increase in latency, but is designed to ensure that sustained transfers are quicker while overall latency remains comparable to that of DDR2 and DDR3. Technically there is also a DDR4-1600 specification for scenarios that want bargain-basement memory and are unfazed by actual performance.

As a result of this increase in speed, overall bandwidth is increased as well.

Bandwidth Comparison
         Bus Clock       Internal Rate   Prefetch   Transfer Rate    Channel Bandwidth
DDR      100-200 MHz     100-200 MHz     2n         0.20-0.40 GT/s   1.60-3.20 GB/s
DDR2     200-533 MHz     100-266 MHz     4n         0.40-1.06 GT/s   3.20-8.50 GB/s
DDR3     400-1066 MHz    100-266 MHz     8n         0.80-2.13 GT/s   6.40-17.0 GB/s
DDR4     1066-2133 MHz   100-266 MHz     8n         2.13-4.26 GT/s   17.0-34.1 GB/s
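
The channel bandwidth column follows directly from the transfer rate: each transfer moves 8 bytes over a 64-bit channel. A minimal sketch of that relationship, using the upper end of each range in the table:

```python
# Channel bandwidth = transfer rate x channel width; a standard DIMM channel is 64 bits (8 bytes) wide.
CHANNEL_BYTES = 8

for name, rate_gt_s in [("DDR", 0.40), ("DDR2", 1.06), ("DDR3", 2.13), ("DDR4", 4.26)]:
    bandwidth_gb_s = rate_gt_s * CHANNEL_BYTES
    print(f"{name}: {rate_gt_s} GT/s x {CHANNEL_BYTES} B -> {bandwidth_gb_s:.1f} GB/s per channel")

# DDR4 at 4.26 GT/s gives ~34 GB/s per channel; a quad-channel Haswell-E system
# running DDR4-2133 therefore has ~68 GB/s of theoretical peak bandwidth.
```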

Latency moves from DDR3-1600 at CL11 to DDR4-2133 at CL15, an expected jump as JEDEC tends to increase CL by two with each step up in frequency. While a latency of 15 clocks might come across as worse, the fact that those clocks tick at 2133 MT/s means the overall performance is still comparable. At DDR3-1600 and CL11, the time to initiate a read is 13.75 nanoseconds, compared to 14.06 nanoseconds for DDR4-2133 at CL15, a 2% increase.

One thing that offsets the increase in latency is that CL15 seems to be a common standard regardless of frequency. Currently on the market we are seeing modules range from DDR4-2133 CL15 up to DDR4-3200 CL15 or DDR4-3400 CL16, bringing read latency down to 9.375 nanoseconds. With DDR3 we saw kits of DDR3-2400 CL10 at 8.33 nanoseconds, showing how aggressive memory manufacturing over the lifetime of a product can improve efficiency.
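
Those latency figures all come from the same formula: the first-word latency in nanoseconds is the CAS latency divided by the I/O clock, which runs at half the MT/s figure. A minimal Python sketch of that arithmetic for the kits mentioned above:

```python
# First-word (read initiation) latency in nanoseconds:
#   latency_ns = CL / (transfer rate / 2), since the I/O clock is half the MT/s figure.

def cas_latency_ns(transfer_rate_mts, cl):
    io_clock_mhz = transfer_rate_mts / 2       # e.g. DDR4-2133 -> ~1066.7 MHz
    return cl / io_clock_mhz * 1000            # cycles / MHz -> nanoseconds

kits = [
    ("DDR3-1600 CL11", 1600, 11),
    ("DDR3-2400 CL10", 2400, 10),
    ("DDR4-2133 CL15", 2133.33, 15),
    ("DDR4-3200 CL15", 3200, 15),
]

for name, rate, cl in kits:
    print(f"{name}: {cas_latency_ns(rate, cl):.2f} ns")
# Prints 13.75, 8.33, 14.06 and 9.38 ns respectively, matching the figures in the text.
```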

Another noticeable difference from DDR3 to DDR4 is the design of the module itself.

DDR3 (top) vs DDR4 (bottom)

As with most technology updates, notches are shifted to ensure that the right product fits in the right slot, but DDR4 changes a bit more than that. DDR4 is now a 288-pin package, up from 240 pins with DDR3. As the modules are the same length, this means a reduction in pin-to-pin pitch from 1.00 mm to 0.85 mm (with a ±0.13 mm tolerance), decreasing the overall per-pin contact area.
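
As a rough geometry check (a sketch; the 133.35 mm DIMM length and the even split of pins across both faces of the edge connector are assumptions, while the pin counts and pitches are from the text above):

```python
# Rough check that the extra pins only fit because the pitch shrank.
DIMM_LENGTH_MM = 133.35   # assumed standard UDIMM length, same for DDR3 and DDR4

for name, pins, pitch_mm in [("DDR3", 240, 1.00), ("DDR4", 288, 0.85)]:
    pins_per_side = pins // 2                      # pins are split across both faces of the edge connector
    row_span_mm = (pins_per_side - 1) * pitch_mm   # distance from first to last pin at the nominal pitch
    print(f"{name}: {pins_per_side} pins/side at {pitch_mm} mm pitch spans "
          f"{row_span_mm:.1f} mm of the {DIMM_LENGTH_MM} mm module")
# DDR3: ~119 mm, DDR4: ~121.6 mm -- 144 pins per side would not fit at the old 1.00 mm pitch.
```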

The other big design change is the sticky-out bit in the middle of the pin row. Moving from pin 35 to pin 47, and again from pin 105 to pin 117, both the pin contacts and the PCB get longer by 0.5 mm.

This is a gradual, tapered change rather than an abrupt step:

Initially when dealing with these modules, I had the issue of not actually seating them in the slot correctly when using a motherboard with single-sided latches. Over the past couple of weeks it has started to make more sense to place both ends in at the same time due to this protruding design, despite the fact that it can be harder to do when on your hands and knees inside a case.

Along with the pin size and arrangement, the modules are ever so slightly taller than DDR3 (31.25 mm rather than 30.35 mm) to make routing easier, and the PCB is thicker (1.2 mm, up from 1.0 mm) to allow for more signal layers. This has implications for future designs, which we will mention later in the review.

There are other, less obvious benefits and considerations baked into the DDR4 design that are worth mentioning.

DDR4 supports a low-power auto self-refresh (listed in the documentation as LPASR) which performs the standard task of refreshing the contents of memory, but uses an adaptive, temperature-based algorithm to avoid signal drift. The refresh modes can also adjust each array on a module independently, provided the controller supports a fine-grained optimization routine that tracks which parts of the memory are in use. This has power as well as stability implications for the long-term future of DDR4 design.
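
As a toy illustration of the temperature-adaptive idea (this is not the JEDEC algorithm; the 7.8 µs and 3.9 µs values are the commonly quoted DDR4 refresh intervals for the normal and extended temperature ranges, and everything else here is an assumption for the sake of the example):

```python
# Toy model of temperature-adaptive refresh: hotter cells leak charge faster,
# so the average refresh interval is shortened as the module heats up.

def refresh_interval_us(module_temp_c):
    """Pick an average refresh interval (tREFI) based on module temperature."""
    if module_temp_c <= 85:
        return 7.8    # normal temperature range: baseline refresh interval
    elif module_temp_c <= 95:
        return 3.9    # extended range: refresh twice as often
    raise ValueError("outside the DDR4 operating temperature range")

for temp_c in (45, 85, 95):
    t_refi = refresh_interval_us(temp_c)
    print(f"{temp_c} C -> refresh every {t_refi} us "
          f"(~{1000 / t_refi:.0f} refresh commands per millisecond)")
```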

Module training when the system boots is also a key feature of DDR4. During the start-up routine, the system must sweep through reference voltages to find the maximum passing window for the speeds selected, rather than simply applying the voltage set in the options. The training steps through the voltage reference in increments of 0.5% to 0.8% of VDDQ (typically 1.2V), and the set tolerance of the module must be within 1.625%. A calibration error of one step size (9.6 mV at 1.2V) is plausible, and the slew margin lost to that calibration error must also be considered. Margins and tolerances carry greater implications at the lower voltage, and this procedure ensures stable operation during use. The downside for the user is that the number of modules in the system affects the boot time of the device. A fully laden quad-channel Haswell-E system adds another 5-8 seconds to perform this procedure, and it cannot be circumvented through a different routine without disregarding part of the specification.

Source: Altera
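
Conceptually, the training sweep described above boils down to stepping Vref across a range, recording which settings pass a read test, and centering on the widest passing window. A simplified sketch of that idea (the real routine lives in the memory controller firmware and is far more involved; the step size, sweep range and pass/fail model below are assumptions):

```python
# Simplified Vref training sweep: find the widest contiguous window of passing
# reference-voltage settings and pick its centre.

VDDQ = 1.2                    # volts
STEP = 0.0065 * VDDQ          # assumed 0.65% step, within the 0.5-0.8% range quoted above

def train_vref(passes_at, lo_frac=0.60, hi_frac=0.92):
    """Sweep Vref (as fractions of VDDQ) and return the centre of the widest passing window."""
    best, current = [], []
    v = lo_frac * VDDQ
    while v <= hi_frac * VDDQ:
        current = current + [v] if passes_at(v) else []
        if len(current) > len(best):
            best = current
        v += STEP
    if not best:
        raise RuntimeError("no passing Vref window found")
    return sum(best) / len(best)

# Fake pass/fail model for illustration: reads succeed between 0.72 and 0.84 of VDDQ.
chosen = train_vref(lambda v: 0.72 * VDDQ <= v <= 0.84 * VDDQ)
print(f"trained Vref ~ {chosen:.3f} V")
```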

DDR4 is also designed with the future in mind. Current memory on the market, except what we saw with Intelligent Memory, is a monolithic die solution. The base JEDEC specification allows for 3D stacking of dies with through-silicon vias (TSVs) should any memory manufacturer wish to go down this route to increase module density. To support this adjustment there are three chip select signals, bringing the total of bank select bits to seven for a total of 128 possible banks. At current UDIMM specifications there is provision for up to eight stacked dies, however DDR4 is listed only to support x4/x8/x16 ICs with capacities of 2, 4, 8 and 16 Gibit (gibibit). This would suggest that the stacked-die configuration is more suited to devices where x-y dimensions are at a premium, or to the server market. When it comes to higher-capacity modules, we have already reported that 16GB UDIMMs should be coming to market, representing an 8×16Gb dual-rank arrangement. We are working to make sure we can report on these as soon as they land; however, when it comes to higher-density UDIMM parts (i.e. not RDIMM or LRDIMM) we might have to start looking at newer technologies.
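
A quick back-of-the-envelope check of the addressing and capacity figures above (a sketch; the split into bank-group and bank bits is the standard DDR4 arrangement, and the chip-ID bits are the extra select signals mentioned for stacked configurations):

```python
# Addressing: bank-group bits + bank bits + chip-ID bits for stacked dies.
bank_group_bits = 2    # 4 bank groups per die
bank_bits       = 2    # 4 banks per bank group
chip_id_bits    = 3    # selects one of up to 8 stacked dies

total_bits = bank_group_bits + bank_bits + chip_id_bits
print(f"addressable banks: 2^{total_bits} = {2 ** total_bits}")                   # 128

# Capacity: eight 16 Gibit ICs on a 64-bit UDIMM
ics, density_gibit = 8, 16
capacity_gib = ics * density_gibit / 8                                             # gibibits -> gibibytes
print(f"{ics} x {density_gibit} Gibit ICs = {capacity_gib:.0f} GiB per module")    # 16 GiB
```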

There are a significant number of other differences between DDR4 and DDR3, but most of these lie in the electronic engineering/design realm for memory and motherboard manufacturers, such as signal termination, extra programmable latencies and internal register adjustments. For a more in-depth read, a good Google search can yield results, although Rajinder Gill's AnandTech piece, ‘Everything You Always Wanted To Know About SDRAM But Were Afraid To Ask’, is a great place to start for general memory operation. I still go back and refer to that piece more frequently than I care to admit, and end up scratching my head until I reach bone.

Comments

  • JlHADJOE - Thursday, February 5, 2015 - link

    Will be interesting to see another article like this when we have CPUs with integrated graphics and DDR4.
  • OrphanageExplosion - Thursday, February 5, 2015 - link

    "For any user interested in performance, memory speed is an important part of the equation when it comes to building your next system."

    Doesn't your article actually disprove your initial statement?

    And surely your gaming benchmarks might make more sense if - once again - you actually tested CPU intensive titles as opposed to the titles you've tested? The GPU will barely touch your expensive DDR4, if at all.

    The only scenario I can see DDR4 making a real difference will be in graphics work with AMD APUs, and even then we'll need to see really high-end, fast kits that should just about offer comparable bandwidth with the slowest GDDR5 to offer a literally game-changing improvement.
  • Sushisamurai - Thursday, February 5, 2015 - link

    Errr... Memory speed did make a difference (small IMO) when it came to DDR3. This article tests if it holds true to DDR4 - however, without an iGPU the other tests don't really show a significant difference when price is factored in. I mean, sure, there's a difference, but not worth the price premium IMO.

    A future AMD comparison would be nice, when AMD decides to support DDR4... Otherwise, it was a nice article.
  • FlushedBubblyJock - Sunday, February 15, 2015 - link

    That's called the "justify wasting my life to write this article, tag and hook and sinker line, plus the required tokus kissing to the kind manu's that handed over their top tier for some "free" advertising and getting out the word.

    It's not like the poor bleary eyed tester can say: " I didn't want to do this because one percent difference is just not worth it, my name is not K1ngP1n and I'm not getting 77 free personal jet flights this year to go screw around in nations all over the world.
  • vgobbo - Thursday, February 5, 2015 - link

    I really enjoyed this review!

    But... Intel processors are massive cache beasts, which reduces a lot the pressure put on memory (except for games, which I believe was the most interesting part of this review). Said that, I wish to see a review on an AMD system, which have a lot weaker cache structure and memory buses.

    Is this possible to happen, or I'm just a dreamer? ;D

    Anyway, this was another outstanding review of Anandtech! Loved it! Thank u guys!
  • dazelord - Thursday, February 5, 2015 - link

    Interesting, but isn't Haswell-E/X99 accessing the memory in 256bit mode using 4 dimms? I suspect the gains would be much more substantial in 128bit/ 2 dimm systems.
  • willis936 - Thursday, February 5, 2015 - link

    Good stuff but after seeing a fair bit of memory roundups in my time I think this mostly confirms what everyone has been thinking: DDR4 is incredibly underwhelming in the performance space. You not only get better bang for buck with DDR3 right now but comparable, if not better, performance in the high end kits.
  • galta - Thursday, February 5, 2015 - link

    You've got it wrong. Nobody goes for DDR4 because of the memory, it's because of the new CPU and chipset.
    Ask yourself: do you really need extra cores and/or pci lanes? Or, do you want them and have the money to pay for it? If the answer is "yes" than you'll go for 5xxx and DDR4 is incidental.
    Otherwise, go 4xxx and DDR3 will also be incidental.
    It makes no sense to talk about memory as if it could be chosen independently from CPU/chipset.
  • rmh26 - Thursday, February 5, 2015 - link

    Ian could you post more information about the NPB fluid dynamics benchmark. Specifically which benchmark CG, EP, FT ... and which class problem S, W, A, ...etc. In my own research I have found the simulation time to scale nearly linearly with the memory frequency for large enough problems. I am wondering how much the cache has to do with masking the effects of memory frequency on performance. As a the size of the problem gets larger the cache will no longer be able to mask the slowness of the memory. In general memory, and moreover interconnects between computers play a very important role in some HPC applications the rely on solving partial differential equations. In fact there have been suggestions to move away from the standard HPC Linpack benchmark used to create the top 500 lists as this compute intensive benchmark does not accurately reflect the load placed on supercomputers.

    http://insidehpc.com/2013/07/replacing-linpack-jac...
  • Dasa2 - Thursday, February 5, 2015 - link

    Congrats anandtech you screwed up another ram review further misleading people

    The games you chose to review are so badly GPU bottlenecked its sad. Do you not know that ram performance affects cpu performance?

    You could run Dirt 3 with a i3 2100 vs a 5ghz 5960x and get the same score
    How about putting some different CPU in amongst your ram benchmarks like 4460-4690 5820-5960x so people can see how faster ram compares to spending more on the CPU...

    A 4690k with 1600c11 ram can perform slower in games than a 2500k with 2133c9 ram
