CPU Performance

I ran the entry level iMac through our normal OS X CPU test suite. I don't have a ton of Mac desktops in the database, but I do have results for last year's 27-inch iMac that'll help put things in perspective. Also keep in mind that the 21.5-inch iMac came equipped with a hard drive, while nearly everything else I'm comparing it to has an SSD inside.

Cinebench R11.5 - Single Threaded

Single threaded performance is about on par with an upgraded 13-inch Haswell MacBook Air, which is sort of insane when you think about it. The Core i7 upgrade in the 13-inch MBA can turbo up to 3.3GHz, versus 3.2GHz for the entry-level iMac’s Core i5. The amount of L3 cache dedicated to a single core is actually the same between the two parts (at 4MB). In the case of Cinebench, the 128MB L4 cache doesn’t seem to do much.
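As a quick sanity check on that comparison, here's a back-of-the-envelope sketch of the gap you'd expect from clock speed alone, assuming equal IPC between the two parts (the frequencies are the ones quoted above):

```python
# Rough equal-IPC sketch: single-threaded Cinebench should scale roughly
# with max turbo clock. Frequencies are the ones quoted in the text.
mba_i7_turbo_ghz = 3.3    # upgraded 13-inch MacBook Air Core i7
imac_i5_turbo_ghz = 3.2   # entry-level iMac Core i5-4570R

gap = (mba_i7_turbo_ghz - imac_i5_turbo_ghz) / imac_i5_turbo_ghz
print(f"expected clock-driven gap: {gap:.1%}")  # → 3.1%
```

A ~3% clock delta is well within benchmark run-to-run noise, which is why the two systems land so close together here.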

Cinebench R11.5 - Multithreaded

Multithreaded performance is obviously much better than what you’d get from a MacBook Air. You’ll notice the entry-level iMac’s performance here is actually quite similar to that of my old 2011 15-inch MacBook Pro. Although the Core i5-4570R has higher IPC and a larger TDP to work with, as a desktop Core i5 it doesn’t support Hyper-Threading and is thus only a 4-core/4-thread part. The Core i7 in my old MBP, however, is a 4-core/8-thread part, letting it make better use of each core’s execution resources in heavily threaded applications. This is really no fault of Apple’s, but rather a frustrating side effect of Intel’s SKU segmentation strategy.
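To put the 4C/4T vs. 4C/8T distinction in rough numbers, here's a toy throughput model. The ~25% Hyper-Threading uplift is an assumed ballpark for well-threaded workloads, not a measured value, and per-core performance is assumed equal so that only the threading difference shows:

```python
# Toy model of multithreaded throughput. The Hyper-Threading uplift is an
# assumed ~25% for well-threaded code; the real benefit varies by workload.
def mt_throughput(single_thread_score, cores, hyper_threading, ht_uplift=0.25):
    scale = cores * (1 + ht_uplift) if hyper_threading else cores
    return single_thread_score * scale

i5_4c4t = mt_throughput(1.0, cores=4, hyper_threading=False)  # Core i5-4570R
i7_4c8t = mt_throughput(1.0, cores=4, hyper_threading=True)   # old MBP's 4C/8T Core i7
print(f"relative MT throughput: {i7_4c8t / i5_4c4t:.0%}")  # → 125%
```

In practice the older MBP's clocks and IPC differ too, so this only illustrates why the thread-count deficit alone can erase a generation's worth of architectural gains in heavily threaded tests.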

iMovie '11 (Import + Optimize)

iMovie '11 (Export)

Looking at our iMovie test we see another 50% advantage for last year’s highest-end 27-inch iMac configuration over the entry-level 21.5-inch model. The explanation boils down to lower max turbo frequencies and fewer simultaneous threads. There’s also the fact that I’m testing an HDD-equipped system against ones with SSDs, but most of my OS X CPU test suite ends up being largely CPU bound with minimal impact from IO performance.

iPhoto 12MP RAW Import

iPhoto import performance runs pretty much in line with what we’ve seen thus far. The entry-level iMac is a good performer, but power users will definitely want to push for a faster CPU.

Adobe Lightroom 3 - Export Preset

Our Lightroom export test is perhaps the most interesting here. The gap between last year’s 3.4GHz Core i7 and the Crystalwell-equipped Core i5-4570R is only 12%. My first thought was to attribute the difference to Crystalwell, but if we look at the gap vs. the 1.7GHz 2013 MacBook Air, the iMac’s advantage isn’t really any different from what we saw in our iPhoto test. Instead, what I believe we’re seeing here is yet another benchmark where Haswell’s architectural advantages shine.

Adobe Photoshop CS5 Performance

Performance in our Photoshop test is similarly good, with the entry-level iMac coming relatively close (within 20%) to the performance of a high-end 2012 27-inch iMac.

Final Cut Pro X - Import

There aren’t any surprises in our FCP-X test either.

Xcode - Build FireFox

I'm slowly amassing results in our Xcode test. What's interesting about the 21.5-inch iMac's performance here is just how inconsistent it was due to the HDD. Subsequent runs either matched the performance I'm reporting here or produced much, much longer build times. If you needed a reason to opt for an SSD, this is a great one. Even looking at the best performance the iMac can deliver, you can see it's not tremendously quicker than the MacBook Air. With an SSD I'd expect far better numbers here.

Comments

  • rootheday3 - Monday, October 7, 2013 - link

    I don't think this is true. See the die shots here:

    I count 8 different die configurations.

    Note that the reduction in LLC (CPU L3) on Iris Pro may be because some of the LLC is used to hold tag data for the 128MB of eDRAM. Mainstream Intel CPUs have 2MB of LLC per CPU core, so the die has 8MB of LLC natively. The i7-4770R has all 8MB enabled but uses 2MB for eDRAM tag RAM, leaving 6MB for the CPU/GPU to use directly as cache (which is how it's reported on the spec sheet). The i5s generally have 6MB natively (for die recovery and/or segmentation reasons), but if 2MB is used for eDRAM tag RAM, that leaves 4MB for direct cache usage.

    Given that you get 128MB of eDRAM in exchange for the 2MB LLC consumed as tag ram, seems like a fair trade.
  • name99 - Monday, October 7, 2013 - link

    HT adds a pretty consistent 25% performance boost across an extremely wide variety of benchmarks. 50% is an unrealistic value.

    And, for the love of god, please stop with this faux-naive "I do not understand why Intel does ..." crap.
    If you do understand the reason, you are wasting everyone's time with your lament.
    If you don't understand the reason, go read a fscking book. Price discrimination (and the consequences thereof INCLUDING lower prices at the low end) are hardly deep secret mysteries.

    (And the same holds for the "Why oh why do Apple charge so much for RAM upgrades or flash upgrades" crowd. You're welcome to say that you do not believe the extra cost is worth the extra value to YOU --- but don't pretend there's some deep unresolved mystery here that only you have the wit to notice and bring to our attention; AND don't pretend that your particular cost/benefit tradeoff represents the entire world.

    And heck, let's be equal opportunity here --- the Windows crowd have their own version of this particular fool, telling us how unfair it is that Windows Super Premium Plus Live Home edition is priced at $30 more than Windows Ultra Extra Pro Family edition.

    I imagine there are the equivalent versions of these people complaining about how unfair Amazon S3 pricing is, or the cost of extra Google storage. Always with this same "I do not understand why these companies behave exactly like economic theory predicts; and they try to make a profit in the bargain" idiocy.)
  • tipoo - Monday, October 7, 2013 - link

    Wow, the gaming performance gap between OSX and Windows hasn't narrowed at all. I had hoped, two major OS releases after the Snow Leopard article, it would have gotten better.
  • tipoo - Monday, October 7, 2013 - link

    I wonder if AMD will support OSX with Mantle?
  • Flunk - Monday, October 7, 2013 - link

    Likely not, I don't think they're shipping GCN chips in any Apple products right now.
  • AlValentyn - Monday, October 7, 2013 - link

    Look up Mavericks; it supports OpenGL 4.1, while Mountain Lion is still at 3.2.

    Good overall improvements in the Developer Previews alone.
  • tipoo - Monday, October 7, 2013 - link

    ML supports a higher OpenGL spec than Snow Leopard, but that doesn't seem to have helped lessen the real world performance gap.
  • Sm0kes - Tuesday, October 8, 2013 - link

    Got a link with real numbers?
  • Hrel - Monday, October 7, 2013 - link

    The charts show the Iris Pro take a pretty hefty hit any time you increase quality settings. HOWEVER, you're also increasing resolution. I'd be interested to see what happens when you increase resolution but leave detail settings at low-med.

    In other words, is the bottleneck the processing power of the GPU (I think it is) or the memory bandwidth? I suspect we could run Mass Effect or something similar at 1080p with medium settings.
  • Kevin G - Monday, October 7, 2013 - link

    "OS X doesn’t seem to acknowledge Crystalwell’s presence, but it’s definitely there and operational (you can tell by looking at the GPU performance results)."

    I bet OS X does but not in the GUI. Type the following in terminal:

    sysctl -a hw.

    There should be a line about the CPU's full cache hierarchy among other cache information.
