Synthetic testing has a way of turning what may be a minor difference between hardware configurations into a larger-than-life comparison, even when the effect on actual system usage is minimal.  Several benchmarks straddle the line between synthetic and real world (such as Cinebench and SPECviewperf), and we include them here.

SPECviewperf

The mix of real-world and synthetic benchmarking does not get more complex than SPECviewperf – a benchmarking tool designed to test various capabilities of several modern 3D rendering packages.  Each of these rendering programs comes with its own coding practices, and as such can be memory bound, CPU bound or GPU bound.  In our testing, we run the standard benchmark and report the results for comparison.
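
As a rough illustration of the memory-bound versus CPU-bound distinction, the following sketch (not part of the SPECviewperf methodology; the array sizes and operations are arbitrary assumptions) contrasts a streaming operation limited mostly by DRAM bandwidth with repeated math on a cache-resident array that barely touches main memory:

import time
import numpy as np

N = 20_000_000  # arbitrary illustrative size (~160 MB per array)
a = np.ones(N, dtype=np.float64)
b = np.ones(N, dtype=np.float64)

# Streaming add touches far more bytes than it computes: memory-bandwidth bound.
t0 = time.perf_counter()
c = a + b
t1 = time.perf_counter()
gb_moved = 3 * N * 8 / 1e9  # read a, read b, write c
print(f"streaming add: {t1 - t0:.3f} s, ~{gb_moved / (t1 - t0):.1f} GB/s effective")

# Repeated transcendental math on a small, cache-resident array: largely CPU bound.
small = np.ones(100_000, dtype=np.float64)
t0 = time.perf_counter()
for _ in range(500):
    small = np.sin(small) * 1.0001
t1 = time.perf_counter()
print(f"compute loop:  {t1 - t0:.3f} s, largely insensitive to DRAM speed")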

In most circumstances the 2x8 GB 2400 C11 kit performs on par with or better than the 2133 C9 kit (Lightwave, Maya, TcVis).  For many of these results, memory frequency is more important than latency, and in Catia, having denser modules also seems to help.
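
One way to see the frequency-versus-latency trade-off is to convert each kit's CAS latency into absolute time.  A quick sketch of that arithmetic (the formula is the standard cycles-to-nanoseconds conversion; the two kits are the ones compared above):

def cas_latency_ns(data_rate_mt_s, cl):
    # CL is counted in memory-clock cycles; the memory clock runs at half the
    # DDR3 data rate, so e.g. DDR3-2400 has a 1200 MHz memory clock.
    memory_clock_mhz = data_rate_mt_s / 2
    return cl / memory_clock_mhz * 1000  # cycles / MHz -> nanoseconds

print(f"DDR3-2400 C11: {cas_latency_ns(2400, 11):.2f} ns")  # ~9.2 ns
print(f"DDR3-2133 C9:  {cas_latency_ns(2133, 9):.2f} ns")   # ~8.4 ns

So the higher-frequency kit actually has slightly worse absolute first-word latency but more bandwidth, which is why bandwidth-sensitive viewsets favour it.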

Cinebench x64

A long-time favourite of synthetic benchmarkers the world over is Cinebench, software designed to test the real-world performance of rendering workloads on the CPU or GPU.  Here we test CPU single-core and multi-core performance, as well as GPU performance using a single GTX 580 at x16 PCIe 2.0 bandwidth.  Any serial work still has to be processed by the CPU, so memory access speed can slow down or speed up the benchmark.
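
The impact of those serial factors can be sketched with Amdahl's law; the 5% serial fraction below is purely an assumed figure for illustration, not a measured property of Cinebench:

def amdahl_speedup(serial_fraction, n_threads):
    # Overall speedup when only the parallel fraction scales with thread count;
    # the serial portion (single-threaded, more latency-sensitive) caps the gain.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_threads)

for threads in (1, 2, 4, 8):
    print(f"{threads} threads -> {amdahl_speedup(0.05, threads):.2f}x speedup")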

Cinebench - CPU

Cinebench - OpenGL

Not much differentiates any of the memory kits - interestingly enough, the main difference in the OpenGL test comes from using a GTX 580 rather than from the memory.

Comments

  • Beenthere - Wednesday, October 24, 2012 - link

    Can't change the typo from 8166 MHz to the proper 1866 MHz, but most folks should be able to figure it out...
  • silverblue - Thursday, October 25, 2012 - link

    Of course, if you have an APU-based system, the faster memory does indeed make a difference... though I agree, it's the exception rather than the norm.
  • JlHADJOE - Thursday, October 25, 2012 - link

    But then it's totally contrary to one of the main reasons behind having an APU -- penny pinching.

    These kits cost twice the DDR3-1333 going rate, so that's $75 you could have put into a GPU. Can't speak for everyone, but I'd probably choose an i3 with DDR3-1333 + a 7750 over an A10-5800k with DDR3-2400.
  • JohnMD1022 - Wednesday, October 24, 2012 - link

    My thoughts exactly.

    1600 seems to be the sweet spot on price and performance.
  • PseudoKnight - Wednesday, October 24, 2012 - link

    Anandtech did a series of memory frequency tests like a year ago (I forget exactly). While they found that 1333 to 1600 didn't offer much in terms of average FPS gains in gaming, it had a clearer impact on minimum frame rates. I'm not saying it's worth it either way here, but I'd like people to give some attention to minimum frame rates when talking about the benefits of bumps in memory frequency.

    That said, 2400 is obviously overkill here, but that should be obvious to anyone who wants to spend their money efficiently.
  • Impulses - Thursday, October 25, 2012 - link

    The article the did a year ago (with Sandy Bridge in mind) says absolutely nothing about minimum frame rates vs average... I don't even see how faster memory could have such an effect with a dedicated GPU.
  • Impulses - Thursday, October 25, 2012 - link

    *they
  • JlHADJOE - Thursday, October 25, 2012 - link

    It might have been techreport. They're the guys who usually do those frame-time measurements.
  • poohbear - Thursday, October 25, 2012 - link

    PseudoKnight, what are you talking about? There is virtually NO effect on minimum frames on a dedicated GPU system. Ever since the memory controller moved to the CPU, RAM timings have become a lot less important in the system. The only way it shows a difference is when you go to all kinds of outlandish scenarios that involve isolating the GPU and CPU into situations that show some difference between RAM kits, but in a real-world setting those situations are so rare that it becomes pointless to even entertain them.
  • Ratman6161 - Thursday, October 25, 2012 - link

    But add running virtual machines to your list of reasons why a lot of memory might be good. When working from home I've typically got the host machine where I'm doing most of my actual work plus at least two virtual machines running, each VPN'ed into a different remote network. So it isn't too uncommon for me to see about 90% of my 16 GB in use at any one time, and I do occasionally hit times when I have to shut down one VM in order to start another. So I wouldn't actually mind having 32 GB.

    On the other hand, while I need a large quantity of RAM, my 1600 MHz G-Skill works just fine performance-wise, so I don't need speed - I need quantity.
