21.5-inch iMac (Late 2013) Review: Iris Pro Driving an Accurate Display
by Anand Lal Shimpi on October 7, 2013 3:28 AM EST

I have a confession to make. For the past year I’ve been using a 27-inch iMac as my primary workstation. I always said that if I had a less mobile lifestyle the iMac is probably the machine I’d end up with (that was prior to the announcement of the new Mac Pro of course). This past year has been the most insane in terms of travel, so it wasn’t a lack of mobility that kept me on the iMac but rather a desire to test Apple’s new Fusion Drive over the long haul.
It’s entirely possible to mask the overwhelmingly bad experience of a hard drive in a high performance machine by only sampling at the beginning of the journey. When the OS is a clean install, the drive is mostly empty and thus operating at its peak performance. Obviously Apple’s Fusion Drive is designed to mitigate the inevitable performance degradation, and my initial take on it after about a month of use was very good - but would it last?
I’m happy to report that it actually did. So today’s confession is really a two-parter: I’ve been using an iMac for the past year, and I’ve been using a hard drive as a part of my primary storage for the past year. Yeesh, I never thought I’d do either of those things.
Apple 2013 iMac

| Configuration | 21.5-inch iMac | 21.5-inch Upgraded iMac | 27-inch iMac | 27-inch Upgraded iMac |
| --- | --- | --- | --- | --- |
| Display | 21.5-inch 1920 x 1080 | 21.5-inch 1920 x 1080 | 27-inch 2560 x 1440 | 27-inch 2560 x 1440 |
| CPU (Base/Turbo) | Intel Core i5-4570R (2.7GHz/3.2GHz) | Intel Core i5-4570S (2.9GHz/3.6GHz) | Intel Core i5-4570 (3.2GHz/3.6GHz) | Intel Core i5-4670 (3.4GHz/3.8GHz) |
| GPU | Intel Iris Pro 5200 | NVIDIA GeForce GT 750M (1GB GDDR5) | NVIDIA GeForce GT 755M (1GB GDDR5) | NVIDIA GeForce GTX 775M (2GB GDDR5) |
| RAM | 8GB DDR3-1600 | 8GB DDR3-1600 | 8GB DDR3-1600 | 8GB DDR3-1600 |
| Storage | 1TB 5400RPM | 1TB 5400RPM | 1TB 7200RPM | 1TB 7200RPM |
| WiFi | 802.11ac (all models) | | | |
| I/O | 4 x USB 3.0, 2 x Thunderbolt, 1 x GigE, SDXC reader, headphone jack (all models) | | | |
| Starting Price | $1299 | $1499 | $1799 | $1999 |
This year the iMacs get incrementally better. Displays and resolutions are the same, but silicon options are a bit quicker, 802.11ac is on deck and the SSDs all move to PCIe (including Fusion Drive). As tempted as I was to begin my first look at the 2013 iMac evaluating the impact of going to faster storage, it was the entry-level model that grabbed my attention first because of a little piece of silicon we’ve come to know as Crystalwell.
The CPU: Haswell with an Optional Crystalwell
The entry-level 21.5-inch iMac is one of the most affordable options in Apple’s lineup. At $1299 Apple will typically sell you a dual-core notebook of some sort, but here you get no less than a quad-core, 65W Core i5-4570R. That’s four cores running at 2.7GHz, capable of hitting up to 3.2GHz. In practice I pretty much always saw the cores running at 3.0GHz regardless of workload. I’d see occasional excursions up to 3.1GHz, but for the most part you’re effectively buying a 3GHz Haswell system.
The R at the end of the SKU denotes something very special. Not only do you get Intel’s fastest GPU configuration (40 EUs running at up to 1.15GHz), but you also get 128MB of on-package eDRAM. The combination of the two gives you a new brand: Intel’s Iris Pro 5200.
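To put those numbers in perspective, here’s a quick back-of-the-envelope peak throughput calculation. The 16 FLOPS per EU per clock figure is an assumption (two 4-wide FMA-capable ALUs per Gen7.5 EU, the way these theoretical peaks are usually counted), and delivered performance will land well below it:

```c
/* Rough peak-throughput arithmetic for Iris Pro 5200 in the i5-4570R.
 * Assumes 16 FLOPS per EU per clock (2 x SIMD-4 ALUs with FMA); this is
 * the conventional way to count Gen7.5 peaks, not a measured figure. */
#include <stdio.h>

int main(void) {
    int eus = 40;                      /* execution units in Iris Pro 5200 */
    double max_clock_ghz = 1.15;       /* peak GPU clock in the i5-4570R */
    int flops_per_eu_clock = 16;       /* assumed: 2 x 4-wide FMA per EU */
    double gflops = eus * flops_per_eu_clock * max_clock_ghz;
    printf("Theoretical peak: %.0f GFLOPS\n", gflops);   /* ~736 GFLOPS */
    return 0;
}
```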
The Iris Pro 5200 is a GPU configuration option I expect to see on the 15-inch MacBook Pro with Retina Display, and its presence on the iMac tells us how it’ll be done. In last year’s iMacs, Apple picked from a selection of NVIDIA discrete GPUs. This year, the entry level 21.5-inch model gets Iris Pro 5200 while the rest feature updated NVIDIA Kepler discrete GPUs. It’s the same bifurcation that I expect to find on the 15-inch MacBook Pro with Retina Display. As we found in our preview of Intel’s Iris Pro 5200, in its fastest implementation the GPU isn’t enough to outperform NVIDIA’s GeForce GT 650M (found in the 2012 15-inch rMBP). Apple’s engineers aren’t particularly fond of performance regressions, so the NVIDIA GPUs stick around for those who need them, and for the first time we get a truly decent integrated option from Intel.
Most PC OEMs appear to have gone the opposite route - choosing NVIDIA’s low-end discrete graphics over Intel’s Iris Pro. The two end up being fairly similar in cost (with Intel getting the slight edge it seems). With NVIDIA you can get better performance, while Intel should deliver somewhat lower power consumption and an obvious reduction in board area. I suspect Iris Pro probably came in a bit slower than even Apple expected, but given that Apple asked Intel to build the thing it probably felt a bit compelled to use it somewhere. Plus there’s the whole believing in the strategy aspect of all of this. If Apple could shift most of its designs exclusively to processor graphics, it would actually be able to realize board and power savings which would have an impact on industrial design. We’re just not there yet. Whether we ever get there depends on just how aggressive Intel is on the graphics front.
I already went through what the 128MB eDRAM (codename Crystalwell) gets you, but in short that massive on-package memory acts as an L4 cache for both the CPU and GPU. You get 50GB/s of bandwidth in both directions, and access latency that falls somewhere between an L3 hit and a trip out to main memory.
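If you wanted to see that extra level of the hierarchy for yourself, a pointer-chasing latency sweep is the classic way to do it. The sketch below is illustrative only; the sizes, iteration counts and the expectation of a latency plateau between the 4MB L3 and DRAM are assumptions rather than a validated methodology:

```c
/* Rough pointer-chasing sketch: chase dependent loads through working sets
 * of increasing size. On a Crystalwell part you'd expect three latency
 * plateaus: inside the 4MB L3, inside the 128MB eDRAM, and out in DDR3. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

static double now_us(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * 1e6 + tv.tv_usec;
}

static double chase_ns(size_t bytes, size_t iters) {
    size_t stride = 64 / sizeof(size_t);           /* one cache line */
    size_t n = bytes / sizeof(size_t);
    size_t lines = n / stride;
    size_t *next = malloc(n * sizeof(size_t));
    size_t *order = malloc(lines * sizeof(size_t));

    /* Random cyclic permutation over cache lines so hardware prefetchers
       can't hide the latency of each dependent load. */
    for (size_t i = 0; i < lines; i++) order[i] = i;
    for (size_t i = lines - 1; i > 0; i--) {        /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t t = order[i]; order[i] = order[j]; order[j] = t;
    }
    for (size_t i = 0; i < lines; i++)
        next[order[i] * stride] = order[(i + 1) % lines] * stride;
    free(order);

    double t0 = now_us();
    size_t p = 0;
    for (size_t i = 0; i < iters; i++)
        p = next[p];                                /* serialized loads */
    double t1 = now_us();

    volatile size_t sink = p; (void)sink;           /* keep the loop alive */
    free(next);
    return (t1 - t0) * 1000.0 / iters;
}

int main(void) {
    for (size_t mb = 1; mb <= 512; mb *= 2)         /* 1MB (L3) ... 512MB (DRAM) */
        printf("%4zu MB: %6.2f ns per load\n", mb, chase_ns(mb << 20, 1 << 24));
    return 0;
}
```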
OS X doesn’t seem to acknowledge Crystalwell’s presence, but it’s definitely there and operational (you can tell by looking at the GPU performance results). Some very specific workloads can benefit handsomely from the large eDRAM. At boot, I suspect key parts of the OS are cached on-package as well, something that’ll have big implications for power usage in mobile. Unfortunately my review sample came with a hard drive, which hampered the user experience, and these new iMacs aren’t super easy to break into (not to mention that Apple frowns upon that sort of behavior with their review samples). OS X continues to do a good job of keeping things cached in memory, and the iMac’s 8GB default configuration helps there tremendously. Whenever I was working with data and apps in memory, the system felt quite snappy. I’ll get to the benchmarks in a moment.
The non-gaming experience with Iris Pro under OS X seemed fine. I noticed a graphical glitch under Safari in 10.8.5 (I saw tearing while scrolling down a long list of iCloud tabs) but otherwise everything else looked good.
iMac (Late 2013) CPU Options

| | 21.5-inch Base | 21.5-inch Upgraded | 21.5-inch Optional | 27-inch Base | 27-inch Upgraded | 27-inch Optional |
| --- | --- | --- | --- | --- | --- | --- |
| Intel CPU | i5-4570R | i5-4570S | i7-4770S | i5-4570 | i5-4670 | i7-4771 |
| Cores / Threads | 4 / 4 | 4 / 4 | 4 / 8 | 4 / 4 | 4 / 4 | 4 / 8 |
| Base Clock | 2.7GHz | 2.9GHz | 3.1GHz | 3.2GHz | 3.4GHz | 3.5GHz |
| Max Turbo | 3.2GHz | 3.6GHz | 3.9GHz | 3.6GHz | 3.8GHz | 3.9GHz |
| L3 Cache | 4MB | 6MB | 8MB | 6MB | 6MB | 8MB |
| TDP | 65W | 65W | 65W | 84W | 84W | 84W |
| VT-x / VT-d | Y / Y | Y / Y | Y / Y | Y / Y | Y / Y | Y / Y |
| TSX-NI | N | Y | Y | Y | Y | Y |
In typical Intel fashion, you get nothing for free. The 128MB of eDRAM comes at the expense of a smaller L3 cache, in this case 4MB shared by all four cores (and the GPU). Note that this tradeoff also exists on the higher end Core i7 R-series SKU, but 6MB of L3 is somehow less bothersome than 4MB. This is the lowest L3-per-core ratio of any modern Intel Core series processor. The 128MB eDRAM likely more than makes up for this reduction, and I do wonder if this isn’t a sign of things to come from Intel. A shift towards smaller, even lower latency L3 caches might make sense if you’ve got a massive eDRAM array backing it all up.
127 Comments
rootheday3 - Monday, October 7, 2013 - link
I don't think this is true. See the die shots here: http://wccftech.com/haswell-die-configurations-int...
I count 8 different die configurations.
Note that the reduction in LLC (CPU L3) on Iris Pro may be because some of the LLC is used to hold tag data for the 128MB of eDRAM. Mainstream Intel CPUs have 2MB of LLC per CPU core, so the die has 8MB of LLC natively. The i7-4770R has all 8MB enabled, but 2MB goes to eDRAM tag RAM, leaving 6MB for the CPU/GPU to use directly as cache (which is how it's reported on the spec sheet). The i5s generally have 6MB natively (for die recovery and/or segmentation reasons), so if 2MB is used for eDRAM tag RAM, that leaves 4MB for direct cache usage.
Given that you get 128MB of eDRAM in exchange for the 2MB LLC consumed as tag ram, seems like a fair trade.
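The arithmetic above is easy to sanity-check. A minimal sketch, assuming 64-byte cache lines and roughly a byte of tag/state per eDRAM line (neither figure is something Intel has published; they're just plausible assumptions):

```c
/* Back-of-the-envelope check of the tag-RAM claim above. Assumptions:
 * 64-byte cache lines and ~8 bits of tag + state per line -- neither is
 * confirmed by Intel; this only shows the 2MB figure is plausible. */
#include <stdio.h>

int main(void) {
    long long edram_bytes = 128LL * 1024 * 1024;   /* 128MB eDRAM victim cache */
    int line_bytes = 64;                           /* assumed cache line size */
    long long lines = edram_bytes / line_bytes;    /* lines to track */
    int tag_bits_per_line = 8;                     /* assumed tag + state bits */
    long long tag_bytes = lines * tag_bits_per_line / 8;
    printf("eDRAM lines tracked: %lld\n", lines);          /* 2,097,152 */
    printf("tag storage needed: %lld MB\n", tag_bytes >> 20); /* ~2 MB */
    return 0;
}
```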
name99 - Monday, October 7, 2013 - link
HT adds a pretty consistent 25% performance boost across an extremely wide variety of benchmarks. 50% is an unrealistic value.

And, for the love of god, please stop with this faux-naive "I do not understand why Intel does ..." crap.
If you do understand the reason, you are wasting everyone's time with your lament.
If you don't understand the reason, go read a fscking book. Price discrimination (and the consequences thereof INCLUDING lower prices at the low end) are hardly deep secret mysteries.
(And the same holds for the "Why oh why do Apple charge so much for RAM upgrades or flash upgrades" crowd. You're welcome to say that you do not believe the extra cost is worth the extra value to YOU --- but don't pretend there's some deep unresolved mystery here that only you have the wit to notice and bring to our attention; AND don't pretend that your particular cost/benefit tradeoff represents the entire world.
And heck, let's be equal opportunity here --- the Windows crowd have their own version of this particular fool, telling us how unfair it is that Windows Super Premium Plus Live Home edition is priced at $30 more than Windows Ultra Extra Pro Family edition.
I imagine there are the equivalent versions of these people complaining about how unfair Amazon S3 pricing is, or the cost of extra Google storage. Always with this same "I do not understand why these companies behave exactly like economic theory predicts; and they try to make a profit in the bargain" idiocy.)
tipoo - Monday, October 7, 2013 - link
Wow, the gaming performance gap between OSX and Windows hasn't narrowed at all. I had hoped, two major OS releases after the Snow Leopard article, it would have gotten better.

tipoo - Monday, October 7, 2013 - link
I wonder if AMD will support OSX with Mantle?

Flunk - Monday, October 7, 2013 - link
Likely not, I don't think they're shipping GCN chips in any Apple products right now.

AlValentyn - Monday, October 7, 2013 - link
Look up Mavericks, it supports OpenGL 4.1, while Mountain Lion is still at 3.2: http://t.co/rzARF6vIbm
Good overall improvements in the Developer Previews alone.
tipoo - Monday, October 7, 2013 - link
ML supports a higher OpenGL spec than Snow Leopard, but that doesn't seem to have helped lessen the real world performance gap.

Sm0kes - Tuesday, October 8, 2013 - link
Got a link with real numbers?

Hrel - Monday, October 7, 2013 - link
The charts show the Iris Pro taking a pretty hefty hit any time you increase quality settings. HOWEVER, you're also increasing resolution. I'd be interested to see what happens when you increase resolution but leave detail settings at low-med.

In other words, is the bottleneck the processing power of the GPU (I think it is) or the memory bandwidth? I suspect we could run Mass Effect or something similar at 1080p with medium settings.
Kevin G - Monday, October 7, 2013 - link
"OS X doesn’t seem to acknowledge Crystalwell’s presence, but it’s definitely there and operational (you can tell by looking at the GPU performance results)."I bet OS X does but not in the GUI. Type the following in terminal:
sysctl -a hw.
There should be a line about the CPU's full cache hierarchy among other cache information.
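For anyone who'd rather query it programmatically than eyeball the full sysctl dump, here's a minimal sketch using sysctlbyname. hw.cachesize reports per-level cache sizes (entry 0 is main memory); whether a Crystalwell machine exposes the eDRAM as an additional level there is exactly the unverified part:

```c
/* Sketch: read the cache hierarchy the same way `sysctl -a hw.` does, but
 * from code. hw.cachesize is an array of 64-bit sizes; entry 0 is main
 * memory, then L1, L2, L3, ... Whether Crystalwell's 128MB eDRAM shows up
 * as an extra level here is the open question -- unverified on this iMac. */
#include <stdio.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/sysctl.h>

int main(void) {
    uint64_t sizes[8] = {0};
    size_t len = sizeof(sizes);
    if (sysctlbyname("hw.cachesize", sizes, &len, NULL, 0) != 0) {
        perror("sysctlbyname(hw.cachesize)");
        return 1;
    }
    size_t levels = len / sizeof(uint64_t);
    for (size_t i = 1; i < levels; i++)          /* skip entry 0 (RAM size) */
        if (sizes[i])
            printf("L%zu cache: %llu KB\n", i,
                   (unsigned long long)(sizes[i] / 1024));
    return 0;
}
```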