64 Comments
TheFuzz77 - Wednesday, April 10, 2013 - link
Anyone know why Intel is moving to the non-socketed model?
tipoo - Wednesday, April 10, 2013 - link
Saves a few pennies per motherboard/CPU in exchange for something only a relatively tiny minority of PC users do.
Guspaz - Wednesday, April 10, 2013 - link
Saves some money/space, simplifies cooling (you can do permanently installed heatsinks and such), increases reliability, and there's not much demand for socketed processors anyway. Very few people ever upgrade a CPU, only enthusiasts, really. It sucks for those of us who ARE enthusiasts, but in the grand scheme of things, it won't have much (if any) of a negative impact.
stickmansam - Wednesday, April 10, 2013 - link
The bigger issue is a dead MB taking your perfectly good CPU with it. Not that much of an issue if you can RMA, but once outside of warranty, your $100-300 MB can take out your $100-700 CPU (depending on your CPU; an i7-E is expensive).
Plimogz - Wednesday, April 10, 2013 - link
As they keep integrating more and more of what was previously on the motherboard into the CPU die, the complexity of what remains on-board will diminish accordingly. So at least motherboard failure rates should decrease as well.
LauRoman - Wednesday, April 10, 2013 - link
Most MB failures I see start off with the sound card or a USB controller dying. I can't see the analog part of the audio controller ever being integrated.
stickmansam - Wednesday, April 10, 2013 - link
True, but then a dead CPU would take your MB out with it :P
I did recently upgrade 3 MBs running Pentium 4s with decent chipsets to use C2D CPUs, so upgrading is not unheard of.
This is especially true with AMD, more so than recent Intel. 4 sockets for 5 ticks/tocks is not good.
Johnnyrock - Thursday, April 11, 2013 - link
There is another side to that. On one hand, changing sockets constantly really puts a hurt on the wallets of enthusiasts. On the other hand, keeping the same socket might limit how much you can improve the platform. Legacy is both an asset and a liability, after all.
techdawg667 - Wednesday, April 10, 2013 - link
Intel said that when they switch to BGA, which is after Haswell anyway, they will keep the enthusiast line socketed and the lower-end CPUs will go BGA. So enthusiasts won't have to worry and general consumers can save a few cents.
Ktracho - Thursday, April 11, 2013 - link
I don't want to buy a $600 big-socket CPU that doesn't run any faster than a $300 smaller-socket CPU just because I want the potential to upgrade the CPU, or to change motherboards but keep the CPU.
StevoLincolnite - Friday, April 12, 2013 - link
Those $600 CPUs are indeed faster than the cheaper $300 ones; otherwise there would be NO point in asking $600 for them. In gaming, sure, the difference might be negligible, but games are becoming more and more heavily threaded anyway.
microsoftenator - Monday, April 15, 2013 - link
With Intel's CPU sockets the way they are now, you're already upgrading the socket every other generation anyway. This will only really hit the people who upgrade every cycle, i.e. enthusiasts who already likely buy the higher-end parts.
fteoath64 - Tuesday, April 16, 2013 - link
That unfortunately would be one consequence of socketed MBs. I agree that decoupling on low-end CPUs does not make sense, but for mid- to high-end CPUs there is no gain in cost savings as such. MBs tend to go south due to capacitor blowouts and blowouts of some of the resistor circuits, so the CPU is 100X more reliable than the MB. Even corrosion of some joints causes a few MBs to go south after some years of use if not cleaned often.
UpSpin - Thursday, April 11, 2013 - link
I personally never upgraded the CPU only. I always bought an MB and with it the CPU I liked.
After several years I needed a faster PC, but those new CPUs weren't compatible with my old MB any longer, and new MBs supported better RAM and had new ports/features, so I had to, and wanted to, buy a new MB, too.
A new CPU generation won't be compatible with the old socket any longer, because Intel is heavily reshaping the CPU/GPU/chipset model into an SoC. They have already integrated the power controllers and will integrate other parts too, and they have to do it very quickly, or else they will be beaten by the ARM-based SoCs, because integrating everything on an SoC makes it faster, more power efficient, cheaper and smaller.
But because they know that there are some enthusiasts out there who spend all their money on the latest tech and upgrade almost constantly, they'll still have to offer socketed models for those people, too. But they're probably only a minority (most people don't even have a desktop any longer, but an Ultrabook or all-in-one system).
JPForums - Thursday, April 11, 2013 - link
Integrating things onto a SoC does not make them automatically faster, more power efficient, or cheaper. It usually does allow for a smaller overall system, but that is less of a concern for PCs and tablets than it is for smartphones (or very small tablets).
On-die integration does provide a shorter (lower latency) path, but it is also more space limited, which could reduce communication width. In bandwidth-limited applications, using a higher-bandwidth off-die solution could be faster. As an example, a Pentium 4 would most certainly be better off with a dual 64-bit channel memory solution than a single (probably 32-bit) on-die solution, as its cache architecture was designed to hide memory latency, but a lack of bandwidth would starve the chip.
On-die integration can be (and usually is) more power efficient, but it does require redesigning the integrated component to work off the available power plane. You don't really save any power if you bring all of the auxiliary circuitry with it.
On-die integration can make a system less expensive, but it can also make it more expensive. On the upside, you remove most of the cost of packaging a separate chip. On the downside, you increase the size (cost) of the chip you are integrating into. Which ends up cheaper is highly dependent on how big the chip is, how much the chip size increases, how mature the fabrication process is, and how big your silicon wafers are.
Wafers are roughly fixed in cost, so the cost of individual chips depends directly on how many can be successfully fabricated per wafer. Large chips waste a lot of silicon, given that rectangular dies don't match well with circular wafers. Also note that the number of defects on a wafer is not dependent on what is being fabricated. The probability that a die is defect-free falls off exponentially as die size increases. Redundant resources can be disabled to prevent defects from trashing a chip entirely (think cache or GPU pipelines, unified shaders, etc.). However, adding in these resources specifically to disable can be self-defeating if the die area (and probability of defect) increases more than the redundant resource compensates.
Given a small chip, an increase in size has a greater effect on the number of chips per wafer, but does not increase the probability of critical defects nearly as significantly as with larger chips. Fabrication processes with high defect rates favor smaller chips, as less silicon per chip ends up in the garbage. Given a lower defect rate, larger dies may be fabricated with a less significant increase in cost. It makes sense to integrate when the difference in package costs is higher than the difference in silicon costs.
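A minimal sketch of the dies-per-wafer and yield arithmetic behind that cost argument, using the common rectangular-die approximation and a simple Poisson defect model; the wafer cost and defect density below are illustrative placeholders, not Intel's figures:

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard approximation for rectangular dies on a circular wafer."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Probability that a die has no critical defect (simple Poisson model)."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

def cost_per_good_die(die_area_mm2, wafer_cost_usd, defects_per_mm2):
    good_dies = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2, defects_per_mm2)
    return wafer_cost_usd / good_dies

# Illustrative inputs only: a $5000 wafer and 0.25 defects/cm^2 (0.0025/mm^2).
for area in (80, 160, 260):
    print(f"{area} mm^2 die: ${cost_per_good_die(area, 5000, 0.0025):.2f} per good die")

Running it shows the point of the paragraph above: cost per good die grows considerably faster than die area once yield losses kick in.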
While heavy integration generally leads to smaller overall packages, it can lead to chips with higher thermal densities. This in turn requires better cooling that can eat up the space you saved. This isn't much of a concern with lower power (and hence cooler) chips, but higher performance chips have to take this into account. Thankfully (or rather by design), the smart phone arena that needs smaller is also generally cooler.
The point is, ARM SoCs aren't magically going to get as fast as Intel desktop/laptop chips just because they are highly integrated. If that were the case, they would've been there a long time ago. Further, if integration were a magic bullet, ARM would be in trouble, as they've already seen the benefits while Intel is still working on it and thus still has benefits to receive. Integration may be a tool that allows ARM to get faster (or Intel to get more power efficient), but it is the Cortex A57/Haswell architectures designed to take advantage of this tool that do most of the work.
UpSpin - Thursday, April 11, 2013 - link
You talk a lot, and a lot of it is right, but some things are wrong or just very short-sighted: integration does everything I said, and it's about more than just being smaller.
Your memory example is correct, but don't forget that I talked about what Intel will do in the future and not how it would have been in the past. They also won't put the memory on the same die, but on top of it, as is common practice with ARM SoCs. Even NVIDIA has announced it will do this in future iterations because of the tremendous benefits. The advantage is not only that it's faster, but also that the board design gets simplified.
Power: Sure, you have to alter your design, but that's always the case. By integrating the power regulation you can switch faster and more precisely, less power gets wasted in the conversion, and fewer external components are required, so the board layout gets simpler, cheaper and smaller.
On-die integration will make the particular component more expensive, but the system cheaper, because you need less external space, fewer parts and less engineering. Your argument about the wafer is valid, and probably a difficulty Intel currently has.
Cooling: We don't live in Pentium 4 times, when CPUs consumed insane amounts of power and were difficult to cool. Today's GPUs consume more than any CPU, yet they are still pretty easy to cool quietly. Additionally, it gets easier to cool if everything sits in one place (as long as the overall power consumption remains in current regions). You don't have to use large heatpipes which spread across multiple distant chips; you can focus on cooling a single chip, place it as close to the cooler as possible, and cool that single one as well as possible. On a GPU you have to cool the die, the MOSFETs and the RAM, and guide the airflow properly. The GPU die gets cooled properly with a copper core, the RAM often with just some extra aluminium heatsinks, the MOSFETs sometimes just with air. If everything is in one place you can focus on a single spot and don't have to skimp on the cooling of external components. The same goes for notebooks: if you can focus on one part, it gets much easier to keep it cool.
ARM/Intel: I never said that ARM will be faster than Intel just because of integration. I only said that ARM has a huge advantage because they have much better integration, but I also believe that Intel can tune their efficiency by a much larger degree than ARM can, mainly because of the integration currently missing in Intel designs. On the other hand, ARM is behind on the manufacturing process (28nm vs. 22nm), so they can easily tune their efficiency that way.
Sure, architecture improvements will make the bigger difference, but the other factors contribute a lot to efficiency and speed, too, and some specific architecture changes are only possible because of them.
IntelUser2000 - Thursday, April 11, 2013 - link
Sockets are actually supposed to cost a few dollars on a motherboard, $3-5 or so. That may actually be quite a bit depending on how much the motherboard costs to make.
tipoo - Wednesday, April 10, 2013 - link
I wonder how much power that eDRAM die adds? Would the mobile version be suitable for 13" non-ultrabook laptops?
tipoo - Wednesday, April 10, 2013 - link
In theory, someone less lazy than I should be able to figure out the size of that eDRAM package from the measurements of other known features (like a motherboard screw hole, for instance), then use that as a reference to get the area of the eDRAM package. That's what was done when the Wii U GPU was put under a microscope by Chipworks: people figured out the size of the eDRAM as well as the SRAM on-package.
Khato - Wednesday, April 10, 2013 - link
We can get a reasonable guess as to die size, but as stated in the article, the actual capacity depends upon both die size and RAM type/process.
As for sizes, the components of reasonable size and 'known' dimensions in the shot are the tantalum surface-mount capacitors; they appear to be 6.0x3.2mm for the black ones and 7.3x4.3mm for the yellow ones. From that we can guess that Intel is continuing to make their ICH a nice square dimension, since the pix/mm derived from the capacitors works out to pretty much 20x20mm for the ICH. Lastly, that can be used to give us a rough die size estimate of 260mm^2 for the CPU and 80mm^2 for the memory chip. (Probably accurate to within +/- 5% so long as my guess about the ICH dimensions is correct.)
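A rough sketch of that scale-from-a-known-component method; the capacitor dimensions come from the comment above, while the pixel measurements are hypothetical placeholders chosen to reproduce the quoted ~260 mm^2 and ~80 mm^2 estimates:

# Known reference: black tantalum capacitor, 6.0 x 3.2 mm (from the comment above).
KNOWN_CAP_MM = (6.0, 3.2)
cap_px = (120, 64)            # hypothetical measured pixel dimensions of that capacitor

# Average the scale derived from each side of the reference part.
mm_per_px = (KNOWN_CAP_MM[0] / cap_px[0] + KNOWN_CAP_MM[1] / cap_px[1]) / 2.0

def die_area_mm2(width_px, height_px):
    """Convert a die's pixel dimensions in the package shot to physical area."""
    return (width_px * mm_per_px) * (height_px * mm_per_px)

# Hypothetical pixel sizes for the two dies in the package shot.
print(f"CPU die:   ~{die_area_mm2(400, 260):.0f} mm^2")   # -> ~260 mm^2
print(f"eDRAM die: ~{die_area_mm2(200, 160):.0f} mm^2")   # -> ~80 mm^2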
frogger4 - Thursday, April 11, 2013 - link
Nice math there! Those numbers sound quite reasonable. Given that Ivy Bridge and Haswell are both on the 22nm process, but Haswell basically doubles the size of the graphics processor (and adds a little bit to each core), going up from 160mm^2 to 260mm^2 sounds plausible.
tipoo - Wednesday, April 10, 2013 - link
And speaking of 13" laptops, the GT3e seems like it would be perfect for the 13" Retina MacBook Pro, and supposedly Apple was the one pushing Intel to have eDRAM-enhanced GPU versions. I wonder...
A5 - Wednesday, April 10, 2013 - link
GT3e was almost certainly requested by Apple. If you've been reading between the lines on AT, you can see that Apple has been pushing Intel for better IGPs for several years so that they don't have to buy an extra chip from Nvidia/AMD.
tipoo - Wednesday, April 10, 2013 - link
That's what I was hinting at; methinks the guys at AnandTech know something. But I wonder if it will be the GT3e in the 13" [Retina] MacBook Pro, or just the plain GT3.
epobirs - Monday, April 15, 2013 - link
Don't forget, Microsoft got slammed pretty badly by lousy Intel IGPs a while back. One of the big problems with Vista was horrible performance on machines using Intel IGPs. Intel didn't have any genuine DX9 hardware then, and their driver did much of the DX9 work on the CPU. This not only meant lousy graphics performance but also took a lot of cycles away from other operations. But because Intel couldn't bear the idea that Vista would ship without a pure Intel desktop being able to get logo certification that included Aero support, they strongarmed Microsoft into accepting their 'do it on the CPU' approach.
Consequently, Microsoft got blamed for a lot of machines that performed horribly with Vista but showed a huge improvement if a low-end but truly DX9-capable video card was installed. I 'fixed' a bunch of my clients' PCs by dropping in a $35 video card. They thought I'd performed a miracle.
That was around the time Intel finally started getting a bit serious about their IGP. They don't need to be competitive in gaming, but they do need to stay aware of how the minimum for GPU capability has advanced.
Guspaz - Wednesday, April 10, 2013 - link
I'm betting that companies like Apple really want to get their hands on GT3e for use in high resolution notebooks. Driving those kinds of displays, the GPU is often the bottleneck, and getting onboard DRAM to speed up scaling operations would be nifty...
tipoo - Thursday, April 11, 2013 - link
Yeah, the Retina MBPs seem like they were built with Haswell in mind; they still struggle, after all the updates, on basic UI animations like the calendar flip or the green-button resize. I can literally count out the frames on the former on the Retina, while the weakest MacBook Air renders it fluidly.
MrSpadge - Thursday, April 11, 2013 - link
Power consumption will be fine. DRAM doesn't consume much anyway, and this is a small array which could also be power gated. And it might save power by using the main memory less.
Tams80 - Sunday, April 14, 2013 - link
That would be perfect.
dillonnotz24 - Wednesday, April 10, 2013 - link
Well... there goes my dream of a gaming ultrabook...
jeffkibuule - Wednesday, April 10, 2013 - link
Actually, I'd say you really won't be missing much. I'd expect GT3e to fit in a thin Retina MacBook Pro-like chassis, which isn't that much thicker or heavier than a MacBook Air. Probably an extra 0.5 lbs in weight.
tipoo - Thursday, April 11, 2013 - link
That would be quite appealing. It doesn't have to have a Retina-like display or be as light as an ultrabook, but a thin 13" laptop with no optical drive and GT3e would be really nice.
DanNeely - Wednesday, April 10, 2013 - link
Has Intel explicitly confirmed their intent to keep all Tocks available in socketed form, or is that just speculation based on their promise that Haswell wouldn't be the last socketed desktop chip?
jeffkibuule - Wednesday, April 10, 2013 - link
Intel has already said that Haswell won't be the last socketed chip. Besides, for enthusiasts it makes it pretty simple which CPUs to get. Unless you really upgrade every year, a 2- or 4-year cadence is pretty decent anyway (besides, I don't expect more than 10% per generation anyway).
glugglug - Wednesday, April 10, 2013 - link
They have already stated that while Broadwell will be soldered only, its successor Skylake will be available in both socketed and soldered form.
I don't know about any statements on what comes after Skylake.
bakedpatato - Wednesday, April 10, 2013 - link
I will be very impressed if GT3e is as fast as the 650M, but to be fair, the HD 4000 is faster than the Go 7900 GTX in my old Inspiron...
glugglug - Wednesday, April 10, 2013 - link
I was severely disappointed when I read a couple months ago that the GT3 with the integrated DRAM would only be available as a BGA soldered part.
Are we now saying that is incorrect?
frogger4 - Thursday, April 11, 2013 - link
Nope, still saying the same thing. You can see in the picture above that it's a no-socket package there :/
DanaGoyette - Thursday, April 11, 2013 - link
Any idea if Haswell will support 30-bit (10 bits per component) displays? How about integrated 120Hz displays?
Right now, Intel supports neither, so laptops with such displays can't use the Intel GPU.
krumme - Thursday, April 11, 2013 - link
I have played enough on SB and IB to believe this will not give 650M-like performance in real games with similar quality, and absolutely not with the same consistency.
Surely GT3e is hopefully a really big step forward for cost and power, but please don't let e.g. 3DMark Vantage and low-res performance stand in the way of real-world performance for the average user. Judge by consistent performance across a wide range of games.
ShieTar - Thursday, April 11, 2013 - link
Just looking at AnandTech's own bench comparison ( http://www.anandtech.com/bench/Product/580?vs=622 ) shows a difference between the HD 4000 and a 650M of a factor of 3 to 4 in most games. Now, if Haswell does nothing else besides going from 16 to 40 EUs, it is already making up a factor of 2.5. So if you allow that Intel will also manage to improve efficiency and driver quality by a good deal, the 650M is definitely the level of performance that GT3 aims for.
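A back-of-the-envelope version of that scaling argument; the gap factor comes from the linked bench comparison, the EU counts from the article, and the extra-gains multiplier is purely a hypothetical guess:

hd4000_to_650m_gap = 3.5     # midpoint of the "factor of 3 to 4" seen in AT's bench numbers
eu_scaling = 40 / 16         # GT3's 40 EUs vs HD 4000's 16 -> 2.5x
assumed_other_gains = 1.4    # hypothetical: efficiency, drivers, eDRAM bandwidth

projected_speedup = eu_scaling * assumed_other_gains
print(f"Projected GT3 vs HD 4000: {projected_speedup:.1f}x")
print("Roughly 650M territory:", projected_speedup >= hd4000_to_650m_gap)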
tipoo - Thursday, April 11, 2013 - link
I thought Intel themselves were saying "up to double" the performance of the HD4000. It wasn't clear if that was the GT3 with or without the eDRAM, but why have the number of the second lowest end chip as the "up to"?
krumme - Thursday, April 11, 2013 - link
Agreed, but that assumes the driver development will be there, also for more than the most popular games, and as time goes on.
Intel's history shows it's exactly them who need to prove this will be the case. The old HD series has been abandoned for driver support, and the Atom line was abandoned nearly before it hit the market. It's a mess, and they leave customers with relatively far worse products than they bought.
Secondly, if Broadwell comes with an entirely new arch, as we predict, what will happen with development for the existing HD arch?
As a consumer, I wouldn't bet for a second on Intel instead of Nvidia or AMD that the driver support will be there, not until they have proven it even across a generation change.
Perhaps I am angry because of the recent HD 3000 video bug giving me choppy 23.xxx playback. Even my old NV 8600M GS was better, and the quality with DXVA also looked better. Not to mention that a dirt-cheap E-350 APU beats it hands down on something as simple as video. My HD 4000 machines look good on both the desktop and the ultrabook with no chopping, but the discrete card still seems to have better visual quality in both video and games at the same settings.
I don't know why Intel absolutely wants that top speed instead of improving quality. It's far more important for their brand in the long run. They have a job to do.
But all in all, I hope it's good, because nerd that I am, I always change my gear each year, so I'll buy whatever they make :)
TempestDash - Thursday, April 11, 2013 - link
TempestDash - Thursday, April 11, 2013 - link
Put this into an NUC and I'll be yours forever, Intel.
Shadowmaster625 - Thursday, April 11, 2013 - link
This whole non-socketed strategy is going to blow up in Intel's face. Don't forget that Intel isn't just screwing the consumer here; Intel is gouging the OEMs too. What happens when they handle RMAs? 100 or 1,000 motherboards a day, or whatever the rate is for that sort of repair. That means they have to do something with 100 or 1,000 CPUs a day, or just throw them away. I'm sure many will get thrown away. But what will they do with them even if they keep them? It becomes a big money-sucking pain-in-the-butt process, costing millions of dollars. It is just going to make the PC industry implode faster, and Intel right along with it.
MrSpadge - Thursday, April 11, 2013 - link
I suspect it's rather the OEMs pushing for this, to save a few bucks per system... and to sell more systems down the road (the motherboard fails after warranty, which until now you'd just replace).
UpSpin - Thursday, April 11, 2013 - link
How often has a CPU or motherboard failed for you? How often does a normal person replace only the CPU or motherboard? And how have OEMs been able to handle all those notebooks, ultrabooks, tablets and all-in-one systems which haven't had a socket for several years now? And why is the smartphone/tablet market one of the most profitable ones, even though everything is integrated on a single board?
MrSpadge - Thursday, April 11, 2013 - link
To me it would sound lovely to use the eDRAM as an L4 cache for the number-crunching CPU!
tipoo - Thursday, April 11, 2013 - link
I wonder if it can be used like that? Is it dynamically split between CPU and GPU if the GPU isn't using it?
MrSpadge - Friday, April 12, 2013 - link
So far I've heard it's for the GPU only. It might be accessible through OpenCL, but the overhead might kill any performance gains over main memory. It would seem awkward, though, to restrict its usage to the GPU, since the CPU sits right beside it and the ring bus could surely handle some more load. It could be an issue of unifying the GPU and CPU address spaces, a point Intel has not quite reached, to my knowledge.
Timothy003 - Friday, April 12, 2013 - link
The big square die looks too large to be a CPU die. I think Haswell is actually the smaller one.
tipoo - Friday, April 12, 2013 - link
No, that looks about right. Remember, for one, it's on the same fab process as Ivy Bridge, and two, it goes from 16 execution units on the GPU to 40, and the GPU was already about half the die on IVB. If you scale it, the math works out. The smaller die to the right will be the eDRAM.
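Showing that math as a quick sketch, scaling only the GPU half of the die by the EU count; all numbers are the rough figures quoted in the surrounding comments:

ivb_die_mm2 = 160.0     # Ivy Bridge quad-core die, per the estimate above
gpu_fraction = 0.5      # GPU was roughly half the die on IVB
eu_ratio = 40 / 16      # GT3's 40 EUs vs GT2's 16

cpu_part = ivb_die_mm2 * (1 - gpu_fraction)        # cores, cache, uncore stay similar
gpu_part = ivb_die_mm2 * gpu_fraction * eu_ratio   # GPU block scaled by EU count
print(f"Estimated Haswell GT3 die: ~{cpu_part + gpu_part:.0f} mm^2")  # ~280 mm^2

That lands within roughly 10% of the ~260 mm^2 estimate derived from the package shot, so the reading of the big square die as the CPU checks out.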
Bob Todd - Friday, April 12, 2013 - link
I wonder if there will be a slight but noticeable uptick in AMD consumer desktop CPU sales during Intel's BGA-only years. I know enthusiasts buying socketed CPUs are a tiny fraction of the pie, but it would be interesting if there were some repeatable correlation between the two (in total units shipped, year-over-year growth/decline, something). They'd obviously need to be competitive price/performance-wise, but it would be interesting to do some visualization of the data.
tipoo - Friday, April 12, 2013 - link
It's possible, but would those enthusiasts be willing to take such a hit to single threaded performance just for a socket? Maybe if Haswell doesn't move performance forward much and AMD continues to improve faster.
MrSpadge - Friday, April 12, 2013 - link
Surely not if Haswell can hold its ground against any 1x nm AMD chips, due out... who knows!
FITCamaro - Friday, April 12, 2013 - link
I would love to see the highest-end model in a Surface Pro.
tipoo - Friday, April 12, 2013 - link
The model with GT3e was expected to have a 55W TDP, wasn't it? Not for mobile; the Surface Pro uses a 17-watt CPU.
slim142 - Monday, April 15, 2013 - link
Good point. Gotta watch the GT3e TDP. Whether it's 55W or not, I think anything above 35W for a 13" rMBP (for example) would be too much.
I'm waiting for Haswell to buy a 13" rMBP, but I want to see how Apple is going to set them up and read the first reviews (temps especially).
tipoo - Wednesday, April 17, 2013 - link
I really hope the 13" Pros (Retina and non) can fit the GT3e in, especially the Retina; the GPU seems like it was *made* for such computers, with high res in a small form factor, but the wattage from the models we've seen seems too high for the MBP, which uses 35W processors.
mikk - Monday, April 15, 2013 - link
47W for the GT3e notebook part and 65W for the GT3e desktop part.
yhselp - Monday, April 15, 2013 - link
I don't understand how BGA packages would be sold by OEMs. Wouldn't that have an impact on motherboard features and diversity? Intel currently has over 30 desktop CPUs and at least 3 popular chipsets, and the four major motherboard OEMs have numerous models in different form factors with different features based on different chipsets. That's a whole lot of combinations. What if I want to get a certain CPU with a certain motherboard with certain features? Say, a low-voltage CPU with a mini-ITX motherboard with a THX DSP.
Is there something I'm missing here? It seems that consumer choice would be vastly limited; even if they put out lots of predetermined combinations, I'd imagine availability would be a mess, even more so worldwide.
Correct me if I'm wrong, but does BGA packaging mean that Asus, MSI, etc. would have to sell you both the motherboard and the CPU soldered to it?
epobirs - Monday, April 15, 2013 - link
They'll likely limit the choices of processor available for a particular board. A high-end board will only be offered with high-end processors, and low-end boards with the low-end CPUs.
If you could track down all of the sold $200+ motherboards with whizzy overclocking features and such, you'd probably find the CPUs used were fairly predictable and a small subset of the possible choices. In the case of boards with OEM update software, it probably reports back details like the installed CPU when it phones home to check for anything new to install. So the big board makers like ASUS probably have a good idea how the CPU choices for a given category of board work out.
It will be a hassle for some people but no significant change for others. I imagine at a place like Fry's you'll no longer just grab a board off the shelf, as the value jumps relative to the size of the box. You'll probably have to get an invoice printed and pick the board up when you pay for it, like a lot of retail items that are very small relative to their price.
yhselp - Wednesday, April 17, 2013 - link
Agreed, but as predictable as CPU/MB combos might be, there would always be off-norm scenarios, and those different scenarios combined would make up a not-insignificant part of the whole. Not to mention availability, which would be worse than now; I can't see how it wouldn't be. I also think that consumer system-building diversity would inevitably suffer.
Let's hope that this would drive MB OEMs to offer better products (solid caps, etc.) with more universal features (THX/Dolby DSP, etc.) as standard. I would be okay with limited choice as long as it's adequate in this way; in my mind that's the biggest issue with BGA packaging, not upgradeability.
After all, what you or I are okay with doesn't really matter; we would all just have to adapt. That's the sad(?) reality. Let's just hope Intel and the OEMs make the right calls.
Tom Womack - Monday, April 15, 2013 - link
I wonder what all those unconnected pads around the edge of the interposer are for.
Haswell's on-die voltage regulator presumably needs some passive components, and if that's a BGA mount then there isn't space for them on the back... I suppose Intel might have manufactured lots of test chips with various population options, and the real one will only have enough pads for the right number of passives.