Intel Iris Pro 5200 Graphics Review: Core i7-4950HQ Tested
by Anand Lal Shimpi on June 1, 2013 10:01 AM EST
The Prelude
As Intel got into the chipset business it quickly found itself faced with an interesting problem. As the number of supported IO interfaces increased (back then we were talking about things like AGP, FSB), the size of the North Bridge die had to increase in order to accommodate all of the external facing IO. Eventually Intel ended up in a situation where IO dictated a minimum die area for the chipset, but the actual controllers driving that IO didn't need all of that die area. Intel effectively had some free space on its North Bridge die to do whatever it wanted with. In the late 90s Micron saw this problem and contemplated throwing some L3 cache onto its North Bridges. Intel's solution was to give graphics away for free.
The budget for Intel graphics was always whatever free space remained once all other necessary controllers in the North Bridge were accounted for. As a result, Intel’s integrated graphics was never particularly good. Intel didn’t care about graphics, it just had some free space on a necessary piece of silicon and decided to do something with it. High performance GPUs need lots of transistors, something Intel would never give its graphics architects - they only got the bare minimum. It also didn’t make sense to focus on things like driver optimizations and image quality. Investing in people and infrastructure to support something you’re giving away for free never made a lot of sense.
Intel hired some very passionate graphics engineers, who always petitioned Intel management to give them more die area to work with, but the answer always came back no. Intel was a pure blooded CPU company, and the GPU industry wasn’t interesting enough at the time. Intel’s GPU leadership needed another approach.
A few years ago they got that break. Once again, it had to do with IO demands on chipset die area. Intel's chipsets were always built on an n-1 or n-2 process. If Intel was building a 45nm CPU, the chipset would be built on 65nm or 90nm. This waterfall effect allowed Intel to get more mileage out of its older fabs, which made the accountants at Intel quite happy as those $2 - $3B buildings are painfully useless once obsolete. As the PC industry grew, so did shipments of Intel chipsets. Each Intel CPU sold needed at least one other Intel chip built on a previous generation node. Interface widths as well as the number of IOs required on chipsets continued to increase, driving chipset die areas up once again. This time however, the problem wasn't as easy to deal with as giving the graphics guys more die area to work with. Looking at demand for Intel chipsets, and the increasing die area, it became clear that one of two things had to happen: Intel would either have to build more fabs on older process nodes to keep up with demand, or Intel would have to integrate parts of the chipset into the CPU.
Not wanting to invest in older fab technology, Intel management green-lit the second option: to move the Graphics and Memory Controller Hub onto the CPU die. All that would remain off-die would be a lightweight IO controller for things like SATA and USB. PCIe, the memory controller, and graphics would all move onto the CPU package, and then eventually share the same die with the CPU cores.
Pure economics and an unwillingness to invest in older fabs made the GPU a first class citizen in Intel silicon terms, but Intel management still didn’t have the motivation to dedicate more die area to the GPU. That encouragement would come externally, from Apple.
Looking at the past few years of Apple products, you’ll recognize one common thread: Apple as a company values GPU performance. As a small customer of Intel’s, Apple’s GPU desires didn’t really matter, but as Apple grew, so did its influence within Intel. With every microprocessor generation, Intel talks to its major customers and uses their input to help shape the designs. There’s no sense in building silicon that no one wants to buy, so Intel engages its customers and rolls their feedback into silicon. Apple eventually got to the point where it was buying enough high-margin Intel silicon to influence Intel’s roadmap. That’s how we got Intel’s HD 3000. And that’s how we got here.
177 Comments
HisDivineOrder - Saturday, June 1, 2013 - link
I see Razer making an Edge tablet with an Iris-based chip. In fact, it seems built for that idea more than anything else. That or a NUC HTPC run at 720p with no AA ever. You've got superior performance to any console out there right now and it's in a size smaller than an AppleTV. So yeah, the next Razer Edge should include this as an optional way to lower the cost of the whole system. I also think the next Surface Pro should use this. So high end x86-based laptops with Windows 8 Pro.
And NUC/BRIX systems that are so small they don't have room for discrete GPUs.
I imagine some thinner-than-makes-sense ultrathins could also use this to great effect.
All that said, most systems people will be able to afford and use on a regular basis won't be using this chip. I think that's sad, but it's the way it will be until Intel stops treating Iris as a bonus for high-end users and instead tries to put discrete GPUs out of business by putting it on every chip they make, so people start seeing it CAN do a decent job on its own within its specific limitations.
Right now, no one's going to see that, except those few fringe cases. Strictly speaking, while it might not have matched the 650m (or its successor), it did a decent job with the 640m and that's a lot better than any other IGP by Intel.
Spunjji - Tuesday, June 4, 2013 - link
You confused me here on these points:
1) The NUC uses a 17W TDP chip and overheats. We're not going to have Iris in that form factor yet.
2) It would increase the cost of the Edge, not lower it. Same TDP problem too.
Otherwise I agree, this really needs to roll down lower in the food chain to have a serious impact. Hopefully they'll do that with Broadwell, when the die area used by the GPU effectively becomes free thanks to the process switch.
whyso - Saturday, June 1, 2013 - link
So Intel was right. Iris Pro pretty much matches a 650m at playable settings (30 fps +). Note that anandtech is being full of BullS**t here and comparing it to an OVERCLOCKED 650m from Apple. Let's see, when Intel made that 'equal to a 650m' claim it was talking about a standard 650m, not an overclocked 650m running at 900/2500 (GDDR5) vs the normal 835/1000 (GDDR5 + boost at full, no boost = 735 MHz core). If you look at a standard clocked GDDR3 variant, Iris Pro 5200 and the 650m are pretty much very similar (depending on the games), within around 10%. New Intel drivers should further shorten the gap (given that Intel is quite good in compute).
JarredWalton - Sunday, June 2, 2013 - link
http://www.anandtech.com/bench/Product/814
For the games I tested, the rMBP15 isn't that much faster in many titles. Iris isn't quite able to match GT 650M, but it's pretty close all things considered.
Spunjji - Tuesday, June 4, 2013 - link
I will believe this about new Intel drivers when I see them. I seriously, genuinely hope they surprise me, though.
dbcoopernz - Saturday, June 1, 2013 - link
Are you going to test this system with madVR?
Ryan Smith - Sunday, June 2, 2013 - link
We have Ganesh working to answer that question right now.
dbcoopernz - Sunday, June 2, 2013 - link
Cool. :)
JDG1980 - Saturday, June 1, 2013 - link
I would have liked to see some madVR tests. It seems to me that the particular architecture of this chip - lots of computing power, somewhat less memory bandwidth - would be very well suited to madVR's better processing options. It's been established that difficult features like Jinc scaling (the best quality) are limited by shader performance, not bandwidth.
The price is far steeper than I would have expected, but once it inevitably drops a bit, I could see mini-ITX boards with this become a viable solution for high-end, passively-cooled HTPCs.
By the way, did they ever fix the 23.976 fps error that has been there since Clarkdale?
dbcoopernz - Saturday, June 1, 2013 - link
Missing Remote reports that 23.976 timing is much better.
http://www.missingremote.com/review/intel-core-i7-...