When it comes to memory overclocking, there are several ways to approach the issue.  Memory overclocking is rarely required - only those attempting to run benchmarks need worry about pushing the memory to its uppermost limits.  Much also depends on the memory kit being used - memory is similar to processors in that the ICs are binned to a rated speed.  The higher the bin, the better the speed; however, if there is demand for lower speed memory, higher bin parts may be declocked to increase supply of the lower clocked component.  Likewise, for the highest frequency kits, fewer than 1% of all ICs tested may actually hit the rated speed of the kit, hence the price of these kits rises steeply.

With this in mind, there are several ways a user can approach overclocking memory.  The art of overclocking memory can be as complex or as simple as the user would like - the dark side of memory overclocking requires in-depth knowledge of how memory works at a fundamental level.  For the purposes of this review, we take overclocking in three different scenarios:

a) From XMP, adjust Command Rate from 2T to 1T
b) From XMP, increase Memory Speed strap (e.g. 1333 MHz -> 1400 -> 1600)
c) From XMP, decrease main sub-timings (e.g. 10-12-12 to 9-11-11 to 8-10-10)
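The trade-off between scenarios (b) and (c) can be put into numbers: the absolute latency of a timing is its cycle count divided by the memory clock, and DDR transfers twice per clock, so one cycle lasts 2000 / (transfer rate in MT/s) nanoseconds.  A quick illustrative sketch (the function name is ours, not part of any tool used in this review):

```python
def cas_latency_ns(transfer_rate_mts, cas_cycles):
    """Absolute CAS latency in nanoseconds.

    DDR transfers twice per clock, so the memory clock in MHz is half
    the transfer rate, and one cycle lasts 2000 / transfer_rate ns.
    """
    return cas_cycles * 2000.0 / transfer_rate_mts

# Scenario (b): raise the frequency strap while keeping CAS 11
print(cas_latency_ns(2400, 11))  # ~9.17 ns
print(cas_latency_ns(2600, 11))  # ~8.46 ns

# Scenario (c): tighten CAS at the same strap
print(cas_latency_ns(2400, 10))  # ~8.33 ns
```

Both routes shave absolute latency, but only the frequency bump also adds bandwidth - which is why the two scenarios are tested separately.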

There is plenty of scope to overclock beyond this, such as adjusting the memory voltage or the voltage of the memory controller.  As long as a user is confident adjusting these settings, there is a good chance that the results here can be surpassed.  Individual sticks of memory may also perform better than the rest of the kit, or one module could be a complete dud and hold the rest of the kit back.  For the purposes of this review, we are testing whether the kit as a whole, straight out of the box, will run faster at its rated voltage.

In order to ensure that the kit is stable at each new setting, we run the Linpack test within OCCT for five minutes.  This is a short but thorough test, and we understand that users may wish to stability test for longer for extra reassurance.  For the purposes of throughput, however, a five minute test will catch immediate errors caused by the memory overclock.
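The spirit of such a test is simple to sketch.  Below is a toy pattern write/verify loop of our own - it is far gentler than OCCT's Linpack, which hammers the FPU and memory controller simultaneously, so treat it as an illustration of the idea rather than a real stability test:

```python
import array
import time

def quick_memory_stress(seconds=300, mib=64):
    """Write and verify alternating 64-bit patterns for a fixed duration.

    A toy stand-in for a Linpack-style run: real stability testing
    (OCCT, Prime95, MemTest86) loads the memory subsystem far harder.
    Returns False if any word reads back wrong - a sign of instability.
    """
    words = (mib * 1024 * 1024) // 8
    buf = array.array('Q', [0]) * words   # one 64-bit slot per word
    pattern = 0xAAAAAAAAAAAAAAAA
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        for i in range(words):
            buf[i] = pattern
        if any(w != pattern for w in buf):
            return False                  # a flipped bit was read back
        pattern ^= 0xFFFFFFFFFFFFFFFF     # alternate 0xAA.. and 0x55..
    return True
```

The default of 300 seconds mirrors the five minute window used in this review.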

With this in mind, the kit performed as follows:

GEW316GB2400C11ADC – 2x8 GB rated at DDR3-2400 11-12-12-30 2T 1.65 volts

Adjusting from 2T to 1T: Passes Linpack
Adjusting from 2400 to 2600: Passes Linpack
Adjusting from 2600 to 2666: Fails Linpack
Adjusting from 11-12-12 to 10-11-11: Passes Linpack
Adjusting from 10-11-11 to 9-10-10: Fails Linpack


30 Comments


  • Beenthere - Wednesday, October 24, 2012 - link

    Can't change the typo from 8166 MHz to the proper 1866 MHz, but most folks should be able to figure it out...
  • silverblue - Thursday, October 25, 2012 - link

    Of course, if you have an APU-based system, the faster memory does indeed make a difference... though I agree, it's the exception rather than the norm.
  • JlHADJOE - Thursday, October 25, 2012 - link

    But then it's totally contrary to one of the main reasons behind having an APU -- penny pinching.

    These kits cost twice the DDR3-1333 going rate, so that's $75 you could have put into a GPU. Can't speak for everyone, but I'd probably choose an i3 with DDR3-1333 + a 7750 over an A10-5800k with DDR3-2400.
  • JohnMD1022 - Wednesday, October 24, 2012 - link

    My thoughts exactly.

    1600 seems to be the sweet spot for price and performance.
  • PseudoKnight - Wednesday, October 24, 2012 - link

    Anandtech did a series of memory frequency tests like a year ago (I forget exactly). While they found that 1333 to 1600 didn't offer much in terms of average FPS gains in gaming, it had a clearer impact on minimum frame rates. I'm not saying it's worth it either way here, but I'd like people to give some attention to minimum frame rates when talking about the benefits of bumps in memory frequency.

    That said, 2400 is obviously overkill here, but that should be obvious to anyone who wants to spend their money efficiently.
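For reference, average and minimum frame rates can diverge sharply even at identical throughput.  A quick sketch with made-up frame times (purely illustrative numbers, not measured data) shows why stutter hides in averages:

```python
# Hypothetical frame times in ms for two runs with near-identical
# average throughput; the second run has occasional spikes (stutter).
smooth = [16.7] * 100
spiky  = [15.0] * 95 + [50.0] * 5

def avg_fps(times_ms):
    """Frames delivered per second of wall time."""
    return 1000.0 * len(times_ms) / sum(times_ms)

def min_fps(times_ms):
    """Instantaneous rate of the single slowest frame."""
    return 1000.0 / max(times_ms)

print(avg_fps(smooth), min_fps(smooth))  # ~59.9, ~59.9
print(avg_fps(spiky),  min_fps(spiky))   # ~59.7, 20.0
```

Both runs average near 60 FPS, but the spiky run bottoms out at 20 FPS - which is the kind of difference minimum frame rate (or frame time) reporting captures.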
  • Impulses - Thursday, October 25, 2012 - link

    The article they did a year ago (with Sandy Bridge in mind) says absolutely nothing about minimum frame rates vs average... I don't even see how faster memory could have such an effect with a dedicated GPU.
  • JlHADJOE - Thursday, October 25, 2012 - link

    It might have been techreport. They're the guys who usually do those frame-time measurements.
  • poohbear - Thursday, October 25, 2012 - link

    PseudoKnight, what are you talking about? There is virtually NO effect on minimum frames on a dedicated GPU system. Ever since the memory controller moved to the CPU, RAM timings have become a lot less important in the system. The only way it shows a difference is in all kinds of outlandish scenarios that isolate the GPU and CPU into situations that show some difference between RAM kits, but in a real world setting those situations are so rare that it becomes pointless to even entertain them.
  • Ratman6161 - Thursday, October 25, 2012 - link

    But add running virtual machines to your list of reasons why a lot of memory might be good. When working from home I've typically got the host machine where I'm doing most of my actual work plus at least two virtual machines running, each VPN'ed into a different remote network. So it isn't too uncommon for me to see about 90% of my 16 GB in use at any one time. And I do occasionally hit times when I have to shut down one VM in order to start another. So I wouldn't actually mind having 32 GB.

    On the other hand, while I need a large quantity of RAM, my 1600 MHz G-Skill works just fine performance wise so I don't need speed - I need quantity.
