It took until 2018 to see 10nm in any actual shipping processors from Intel (original date was expected in 2016). Actual mainstream release of 10nm consumer chips has been delayed until "holiday 2019" and 10nm server chips are still on the roadmap for 2020 at the earliest.
Intel is giving AMD every opportunity to catch up and I'm glad to see that AMD is taking advantage of it. These sorts of delays are allowing AMD to claw back market share, and the closer the scale gets to equilibrium, the better the competition will be. It is a good time to be a consumer.
I don't know why you would have to say that it is a good time to be a consumer, for the prices to go down? From my Core 2 Duo E5200 that could overclock like crazy for pennies to my current i5 2500K that could overclock like crazy for pennies while AMD wasn't in the competition AT ALL, I didn't feel like it was a bad time to be a consumer. Just look at the horizon now: every product that comes out barely cuts prices at all. Unless we're speaking about older ''non-competing'' AMD processors pre-Ryzen, pretty much every CPU from the i5 2500K to the i7 7700K hasn't been going down in price much even though Ryzen brought some competition to them.
From now on, companies like Intel, Nvidia and AMD communicate with each other to avoid a repeat of things like the Radeon 4870's super performance for the price killing the overpriced, same-performance GTX 260. Just look with your own eyes at what has happened since then. Everything is priced pretty much where it will stay for a while. Every product seems to land where it's supposed to, never undercutting current products by more than, let's say, $50 if that.
The communication you are talking about is called collusion and it is illegal. Also, I doubt that Nvidia and AMD are colluding, Nvidia seems to be doing everything in their power to crush AMD, even shady stuff like putting pressure on third parties to align their brands exclusively to them, and limiting Nvidia GPU performance when used with Ryzen CPUs.
"From my Core 2 Duo E5200 that could overclock like crazy for pennies to my current i5 2500K that could overclock like crazy for pennies while AMD wasn't in the competition AT ALL, I didn't feel like it was a bad time to be a consumer."
IT IS a bad time to be a consumer. You said it yourself:
"pretty much every CPU from the i5 2500K to the i7 7700K hasn't been going down in price"
THAT'S the effect of no competition. After Ryzen, even a Coffee Lake i3 is almost comparable to a Kaby Lake i5.
Yup. Prices have finally gone down: an Intel quad-core at 4 GHz is cheaper today by a huge margin compared to two years ago... the gap is even bigger for 6 and 8 cores.
I don't recall an i5 ever being worth "pennies" either. You could overclock the Core 2 E5200; not so with anything other than a K-series Intel CPU since then.
His entire comment contradicts itself. Oh, the fine art of unpaid shilling. ^_^
@Galid, up to 2006-2007 AMD was still more or less in play. And the CPUs that were launched shortly after (2008-2010) were still designed by Intel with AMD's pressure in mind. The price also reflected that pressure.
The CPUs you are talking about were launched ~10 and 7+ years ago respectively. The 2500K launched at over $200, so not really pennies. I can draw you a picture of how prices have steadily gone up and year-over-year performance improvements have shrunk as Intel got further and further away from those times when AMD put pressure on them. You'd still miss the point. And the line...
But it's "great" that companies can communicate to avoid "fiascos" like when one of them launches a great product with great performance and price. We definitely don't want that. Nope. Tragedy averted, customers rejoice.
@mapesdhs there's OC and OC. First off, you need the much more expensive LGA2011 platform to do any BCLK OC. "Regular" LGA1155 got about... 3% OC headroom. Impressive. Second, even so they still never reached the OC levels of the K parts. And third, a very small fraction of Intel's SKUs actually managed this at all so the 3820 really doesn't prove any point.
@everyone else, Galid isn't shilling, it's just a painful lack of education.
"pretty much every CPU from the i5 2500K to the i7 7700K hasn't been going down in price"
lol, you could look back 18 years to the Pentium III 550E (the hotshot i5 equivalent back in early 2000), which ran around the same price new as either of those processors.
Back in the day Intel was on 14nm and Apple was on 20nm. Right now Apple is planning the release of 7nm chips and Intel is still on 14nm. I do not care how you measure it, or whether Intel's 14nm is smaller than Apple's. The fact of the matter is that in recent years Apple has shrunk its process 8x and Intel just stood still. That is amazing in all the wrong ways for Intel, and kudos to Apple for actually moving things forward.
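For what it's worth, the "8x" figure works out if you read the node names as linear dimensions and square them. A quick arithmetic sketch (this is only arithmetic on marketing names, not a real density comparison, as the replies point out):

```python
# Node names quote a nominal linear feature size, so the ideal area
# shrink (and thus density gain) between two nodes is the square of
# the linear ratio of the names.
def ideal_area_shrink(old_nm: float, new_nm: float) -> float:
    """Ideal area scaling factor implied by two node names."""
    return (old_nm / new_nm) ** 2

print(ideal_area_shrink(20, 7))  # ~8.2x: the "8x" from 20nm down to 7nm
```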
Apple just uses the process technology of other companies; they didn't develop it. If anyone, you should be praising TSMC for steady progress in the process department, not Apple.
(The same goes for AMD, and anyone else really, since Intel is a rare occurrence in both designing and making its chips itself.)
Also, it absolutely does matter "how you measure" it. Because the smaller you get, the harder it is to make advances. If someone else is behind the curve, they'll have an easier time catching up than the frontrunners making new developments.
This is not meant to excuse the delays Intel faced; I just don't know what went wrong there, whether they were too ambitious with their goals and it backfired, or whatever.
Transistor sizes are essentially frozen at 16nm; FinFETs and all the other technologies that allow die shrinks just dope the transistors in a way that opens them back up to 16nm. Based on this I'd say it's pretty likely that even if we can continue to shrink the trace sizes without quantum effects blowing up, the transistor itself just can't operate below 16nm because the electrons keep tunneling out.
I'd also like to point out that Intel's "shipping" 10nm chips are a joke: it was a limited run of like 5k chips that barely work, generating more heat and less computing power than even the lowest-grade processor from the previous generation. Intel's 10nm was a marketing gimmick and not in the least real. AMD has not only caught up to Intel at this point; by 2020 they should be at least one process node ahead of Intel and the performance leader. Intel's response has been to raise prices again to compensate for the revenue AMD is going to take from them.
Frankly it's astounding how bad it is for Intel right now. It's also astounding how few heads have rolled at Intel for the 10nm debacle.
That assumes people believe GF's 7nm process is any more advanced than Intel's 10nm process. Regardless of the name, don't let that sway you into thinking it's actually going to be superior, at least not until mass production starts on both nodes. 7nm might end up being better; it might not clock as high; no idea. Historically, Intel's process has been better at the same node, and it's only now that GF is really competitive (after years of shafting AMD).
There's a table at this link with a rather brief comparison of the competing processes:
https://www.semiwiki.com/forum/content/7602-semico...
Intel initially revealed that its own 10nm process was "within" 17% of the best 7nm process of its competitors. "Within" means they are not ahead but behind. AMD has access to 7nm production from both GF and TSMC, so it's likely they have their pick of at least 4 variants of 7nm: one HP (high frequency) and one LP from each fab. They are going to use the best variant for each chip's use case.
And according to SemiAccurate, which claims access to internal Intel documents (and has stated for some time that 10nm Ice Lake was postponed again to late 2020), the next 10nm chips to be released are going to be more like "12nm" (14nm++ with some 10nm features), because the initial 10nm process was never going to yield big enough dies for desktops or servers.
Intel's 10nm is about the same size as the others' 7nm. Its performance (power and/or frequency) may be higher, as is typical of Intel's processes as they often build in some extra features that are complicated and hard to design for but improve performance.
The foundries can't really do this, as this would make their product harder to use and more expensive.
The reason the numbers are off is that when the foundries went from 20nm to 16/14nm they _DID NOT SHRINK_; they mostly just added FinFETs (there were other minor changes). So they were now "14nm" but really barely smaller than Intel's 22nm. Intel's 14nm was quite a bit smaller than those. For a variety of reasons, the measurements between foundries and various fabs can no longer really be described well with one number like '7nm'.
Really, just look at how much die space 256K of cache takes up, and how much space some logic takes up and compare that. What is important to compare is density. Performance / Power are also very important but aren't things that the 'nm' number can indicate anymore. Remember when TSMC went from 28nm planar to 20nm planar? Remember how Nvidia and AMD did not bother to use it because it only provided density increases and no performance increase? Yeah, the 'nm' of a process is now the least important thing.
"Historically, Intel's process has been better at the same node, and it's only now that GF are really competitive (after years of shafting AMD)."
I'm not so sure of that, but largely because Intel has been a node ahead of *everybody*, not just GF/AMD. Also AMD largely screwed itself with the Phenom, Bulldozer, Vega and other lousy designs.
IBM traditionally has been able to produce at nodes similar to Intel, but only for extremely low volume (and they were presumably indifferent to yields, as these go into ultra-high margin machines).
Having TSMC (and hopefully GF) catch up to Intel while AMD has the design side of things working again (rumors that Vega was sacrificed to the Sony-financed Navi imply that Navi might be pretty good, and Zen is amazingly strong) might bring back the strongest competition since Intel tried to use Itanium and Pentium4 to compete with Athlon64 (no process advantage in the world could fix that).
Not sure what you mean by "Transistor sizes are essentially frozen at 16nm" - transistors are scaling extremely well, much better than metal. For example Samsung's 7nm uses a fin pitch of 27nm, which is almost perfect 2x scaling from their 14nm. The fin widths are already much smaller than 16nm. The foundries are very confident about 5nm and even 3nm, so we're not near the end of scaling.
TSMC obviously deserve massive credit, and you'd be a fool to deny that. BUT their job HAS been made a lot easier by having a large customer with deep pockets, a stable schedule, and a willingness to pay for the leading edge. We know that much of Foxconn's growth has been through innovative combined Apple+Foxconn financing of new equipment, and it is likely that similar arrangements have occurred with TSMC.
So TSMC gets the tech credit, but Apple probably deserves some financial credit.
Apple (like AMD and Nvidia) is a fabless company; the improved fabrication for CPUs and SoCs is mostly coming from TSMC, GlobalFoundries and Samsung.
Intel's 14nm is much smaller than everyone else's 14nm. Is this similarly true for 10nm vs 7nm? If they are equivalent (my suspicion), then Intel is still more than a year behind (very significant), but isn't 2 nodes behind like the label suggests. Is there any reliable source to compare?
https://en.wikichip.org/wiki/14_nm_lithography_pro...
https://en.wikichip.org/wiki/10_nm_lithography_pro...
https://en.wikichip.org/wiki/7_nm_lithography_proc...
Intel's 14nm does have higher theoretical density than other 14nm processes; however, Intel themselves admitted that TSMC 20nm chips have better density than Intel 14nm (slide 18): http://intelstudios.edgesuite.net/im/2015/pdf/2015...
The theoretical densities of Intel 10nm and TSMC/GF/SS 7nm are very close. However the 10nm that will ship will likely be a ++ process with significantly relaxed pitches to fix the yield and performance issues. So they won't get anywhere near the claimed 100 million transistors/mm^2. Also remember when 10nm is released, the foundries will be on their 7nm+ processes which improve density further.
So Intel will remain significantly behind on both theoretical and actual density for the foreseeable future.
This. The 10nm shipping to consumers next year is NOT the 10nm Intel had originally planned & spec'ed out. The new "node" is more like 12nm (using Intel's own measurement definitions). Either way, it's ALL bad news.
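As background for the "100 million transistors/mm^2" figures thrown around above: that class of number comes from Intel's proposed logic-density metric (Mark Bohr, 2017), a weighted mix of a small NAND2 cell and a larger scan flip-flop cell. A sketch of the formula; the cell areas and the 36-transistor flip-flop count below are placeholders for illustration, not real process data:

```python
# Bohr's metric: 0.6 * NAND2 cell density + 0.4 * scan flip-flop cell
# density, reported in millions of transistors per mm^2. Cell areas are
# in um^2; 1 transistor/um^2 equals 1 MTr/mm^2, so no unit conversion
# is needed.
def logic_density_mtr_per_mm2(nand2_area_um2: float, sff_area_um2: float,
                              nand2_tr: int = 4, sff_tr: int = 36) -> float:
    return 0.6 * (nand2_tr / nand2_area_um2) + 0.4 * (sff_tr / sff_area_um2)

# Hypothetical cell sizes, chosen only to land near the ~100 MTr/mm^2 claim:
print(logic_density_mtr_per_mm2(nand2_area_um2=0.05, sff_area_um2=0.30))  # ~96
```

Relaxing the pitches (larger cell areas), as the "++" variants reportedly do, drops the reported density directly.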
This is going to be an unpopular opinion on anandtech.
The 7nm is better than Intel's 14nm+++, but it is not miles ahead as you would imagine from the numbers. Intel's 10nm, on paper and in the initial chip, is actually much better than TSMC's 7nm. But of course that is an apples-to-oranges comparison, because Intel's 10nm isn't even ready for HVM.
Apple's 7nm is also custom and tuned for yield, so it is not exactly the same as the normal TSMC 7nm everyone else is getting.
So there is no denying that Apple and TSMC, for the first time ever, will likely have better transistors than Intel, but it is still not a 14-vs-7 difference. One shouldn't get all hyped up and ignore all the technical and business details behind it.
I hate to burst your bubble but Intel hasn't had the best transistors for some time now. Centriq on 10nm was shown to be both faster and far more power efficient than Skylake. 7nm is only going to widen the gap further...
With this news they will be competing with Epyc 3, the generation after Rome. Sad news for Intel. I think they are in the worst situation I can remember. The only hope is that Jim Keller pulls another rabbit from his hat, but that's for 2021 or later.
Anybody know what the 'b' in bfloat16 stands for? Normally, I'd guess "binary", but IEEE floating point already has naming conventions. For binary floating point, we have "half", "single", "double", and "quad" for 16-, 32-, 64-, and 128-bit binary floating point respectively. For decimal floating point, we have "decimal32", "decimal64", etc.
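(The 'b' is generally understood to stand for "brain"; the format came out of Google Brain.) bfloat16 is simply the top half of an IEEE 754 single: 1 sign bit, 8 exponent bits, 7 fraction bits. A sketch of the conversion; this truncates rather than doing the round-to-nearest that real hardware typically implements:

```python
import struct

def float_to_bf16_bits(x: float) -> int:
    """Keep the top 16 bits of the float32 encoding (truncating rounding)."""
    bits32 = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits32 >> 16

def bf16_bits_to_float(b: int) -> float:
    """Re-expand 16 bfloat16 bits to a float32 value (low bits zeroed)."""
    return struct.unpack('<f', struct.pack('<I', (b & 0xFFFF) << 16))[0]

# Same 8-bit exponent as float32, so the dynamic range survives; only the
# 8-bit significand (~2-3 decimal digits) of precision remains.
print(bf16_bits_to_float(float_to_bf16_bits(3.14159)))  # 3.140625
```

Keeping float32's exponent width is the point: ML training cares far more about range than about significand precision, and conversion to/from float32 is a cheap bit shift.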
"...Cascade Lake server platform, which will feature CPUs that bring support for hardware security mitigations against side-channel attacks through partitioning."
I'd love to see some more details regarding this...
Note: 32 bit floating point can't be called "high precision".
Ok, it might be "high precision" when the numbers go in, and you might be fooled into thinking they are when they come out, but any kind of heavy math simulation will need 64-bit numbers. 32-bit numbers might also be overkill for graphics, but they use them anyway.
- If you don't believe me, try doing a 32k-point FFT on some audio (which is only 16-bit) and back, in single precision. The audio won't have 10 bits of accuracy when you are done, and each point was only involved in ~15 operations.
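That claim is easy to try; a rough sketch with NumPy (the exact error figures will vary with the data, the float32-vs-float64 gap is the point):

```python
import numpy as np

rng = np.random.default_rng(0)
# Fake "16-bit audio": 32768 samples quantized to 1/2^15 steps. These are
# exactly representable in float32, so any round-trip error comes from
# the transform arithmetic, not the input quantization.
signal = rng.integers(-2**15, 2**15, size=32768) / 2**15

def fft_roundtrip_error(x: np.ndarray, float_dtype, complex_dtype) -> float:
    """Max abs error after FFT + inverse FFT at the given working precision."""
    x = x.astype(float_dtype)
    spectrum = np.fft.fft(x).astype(complex_dtype)  # force working precision
    back = np.fft.ifft(spectrum).real
    return float(np.max(np.abs(back - x.astype(np.float64))))

err32 = fft_roundtrip_error(signal, np.float32, np.complex64)
err64 = fft_roundtrip_error(signal, np.float64, np.complex128)
print(err32, err64)  # single precision is orders of magnitude worse
```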
Let's not forget Intel is raising prices again to keep their revenue the same while AMD takes market share. The top-end Xeon Platinum is going from $13K to $20K, while the comparable AMD Rome part (Rome will be faster) will be 40% of that price.
Intel has some long, dark days ahead, comparable to what they experienced with the Pentium 4. They emptied their bench to stay competitive with Epyc and Threadripper, and they've got nothing coming for the future because of the process blowup. Instead they are going to respin at 14nm essentially the same thing they've got now, while Rome is sampling right now and should be in production by year's end. Intel won't be able to get its legs back under it until 2020 at the earliest, and AMD is likely to be so far ahead at that point that it'll probably be 2022 before they can catch up. That will definitely make this worse than P4.
The hope is that this competition will force Intel to cut prices and drive down consumer prices. Let us pray.
Yes, AMD will put four dies onto a large, hot MCM, linked by slightly drier string than last time. Still not playing in the same league as Xeon Gold, and with much larger manufacturing costs.
Why does ARM keep its distance from the ultra-high-margin server/supercomputer chip market, where prices are already almost two orders of magnitude above production cost?
Because ARM doesn't manufacture chips - it has the skills to, but not the enormous capital requirements, and if it tried then its chip-manufacturing partners would suddenly stop being its partners.
Intel has an amazing reputation in server chips: a competitor would have to sell their chips at a third the price of Intel's (at least, Cavium and Qualcomm have both placed them at that sort of price point), and Intel is in a good position to drop their price 30% and remove the competitor's profit margin.
Fujitsu is showing off its ARM supercomputer chip at Hot Chips 30 on 21st August.
mapesdhs - Thursday, August 9, 2018 - link
Plenty of people oc'd the i7 3820, not a K part. Ditto some of the XEONs for X79.
Kvaern1 - Wednesday, August 8, 2018 - link
Imagine if, everything else being equal, Intel had said yes, we'll fab for Apple, all those years ago.
BillBear - Wednesday, August 8, 2018 - link
Passing up Apple's business is widely thought to be Intel's biggest mistake under Otellini. They left a hell of a lot of revenue on the table for Samsung and TSMC, who reinvested their profits into quickly improving their process technology.
edzieba - Wednesday, August 8, 2018 - link
They also still need to actually ship silicon. TSMC are also using the same SAQP process Intel have been having trouble with.
Wilco1 - Thursday, August 9, 2018 - link
TSMC 7nm is already in volume production, with consumer devices using 7nm SoCs expected in Q4.
SSNSeawolf - Wednesday, August 8, 2018 - link
Apple isn't a fab.
GreenReaper - Wednesday, August 15, 2018 - link
They're just *fab*ulous!
novastar78 - Friday, August 10, 2018 - link
Global Foundries is actually AMD's fab....
goatfajitas - Wednesday, August 8, 2018 - link
Apple doesn't make chips. TSMC makes their chips. Apple takes the standard ARM design and tweaks it.
Wilco1 - Wednesday, August 8, 2018 - link
Intel's 14nm has higher theoretical density than other 14nm processes indeed, however Intel themselves admitted that TSMC 20nm chips have better density than Intel 14nm (slide 18): http://intelstudios.edgesuite.net/im/2015/pdf/2015...The theoretical densities of Intel 10nm and TSMC/GF/SS 7nm are very close. However the 10nm that will ship will likely be a ++ process with significantly relaxed pitches to fix the yield and performance issues. So they won't get anywhere near the claimed 100 million transistors/mm^2. Also remember when 10nm is released, the foundries will be on their 7nm+ processes which improve density further.
So Intel will remain significantly behind on both theoretical and actual density for the foreseeable future.
Cooe - Wednesday, August 8, 2018 - link
This. The 10nm shipping to consumers next year is NOT the 10nm Intel had originally planned & spec'ed out. The new "node" is more like 12nm (using Intel's own measurement definitions). Either way, it's ALL bad news.iwod - Thursday, August 9, 2018 - link
This is going to be an unpopular opinion on anandtech.The 7nm is better than Intel 14nm+++, but it is not miles ahead as you would imagine by the numbers. The Intel 10nm on paper and initial chip are actually much better than TSMC 7nm. But of coz that is an apple to orange comparison because Intel 10nm isn't even ready for HVM.
The Apple 7nm is also custom and tuned for yield, so it is not exactly the same as the normal TSMC 7nm everyone else is getting.
So there is no denying that Apple and TSMC, for the first time ever, will likely have better transistors than Intel, but it is still not a 14 vs 7 difference. One shouldn't get all hyped up and ignore all the technical and business details behind it.
Wilco1 - Thursday, August 9, 2018 - link
I hate to burst your bubble, but Intel hasn't had the best transistors for some time now. Centriq on 10nm was shown to be both faster and far more power efficient than Skylake. 7nm is only going to widen the gap further...

Carl Bicknell - Wednesday, August 8, 2018 - link
Has there been any confirmation of the number of cores in the Xeon SP, for Cascade Lake, Cooper Lake and Ice Lake respectively?

I read somewhere Cascade Lake is due to get 28 cores, which is no improvement at all. I find it surprising they wouldn't try to add a few more.
siberian3 - Wednesday, August 8, 2018 - link
With this news they will compete with EPYC 3, which is the next one after Rome. Sad news for Intel. I think they are in the worst situation I can remember. The only hope is Jim Keller pulling another rabbit from his hat, but that's for 2021 or later.

Elstar - Wednesday, August 8, 2018 - link
Anybody know what the 'b' in bfloat16 stands for? Normally, I'd guess "binary", but IEEE floating point already has naming conventions. For binary floating point, we have "half", "single", "double", and "quad" for 16-, 32-, 64-, and 128-bit binary floating point respectively. For decimal floating point, we have "decimal32", "decimal64", etc.

Ian Cutress - Wednesday, August 8, 2018 - link
b is for 'brain', I believe. It's related to a different exponent/mantissa config vs a standard 16-bit value, optimized for machine learning.

boeush - Wednesday, August 8, 2018 - link
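To make that config concrete: bfloat16 keeps float32's 8-bit exponent (so the same dynamic range), but cuts the mantissa from 23 bits to 7. A minimal stdlib-only sketch of the conversion, assuming simple truncation rather than round-to-nearest:

```python
import struct

def float_to_bfloat16_bits(x):
    """Truncate a float32 to bfloat16 by keeping its top 16 bits
    (1 sign + 8 exponent + 7 mantissa bits)."""
    (bits,) = struct.unpack('>I', struct.pack('>f', x))
    return bits >> 16

def bfloat16_bits_to_float(b):
    """Widen bfloat16 bits back to float32 by zero-filling the low mantissa."""
    (x,) = struct.unpack('>f', struct.pack('>I', b << 16))
    return x

# Powers of two survive exactly; pi loses its low mantissa bits
# but keeps the right magnitude.
print(bfloat16_bits_to_float(float_to_bfloat16_bits(1.0)))         # 1.0
print(bfloat16_bits_to_float(float_to_bfloat16_bits(3.14159265)))  # 3.140625
```

The appeal for machine learning is exactly this trade: gradients need range far more than they need precision, and converting to/from float32 is just a 16-bit shift.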
"...Cascade Lake server platform, which will feature CPUs that bring support for hardware security mitigations against side-channel attacks through partitioning."I'd love to see some more details regarding this...
Frenetic Pony - Wednesday, August 8, 2018 - link
Meanwhile, ex-CEO Brian enjoys his quick, sneaky retirement with all benefits still included.

abufrejoval - Wednesday, August 8, 2018 - link
Sounds like this here: https://software.intel.com/sites/default/files/man...

And there is a critique here: https://lwn.net/Articles/758284/
ARM goes for address tagging: https://www.qualcomm.com/media/documents/files/whi...
wumpus - Wednesday, August 8, 2018 - link
Note: 32-bit floating point can't be called "high precision". OK, it might be "high precision" when the numbers go in, and you might be fooled into thinking they are when they come out, but any kind of heavy math simulation will need 64-bit numbers. 32-bit numbers might also be overkill for graphics, but they use them anyway.

If you don't believe me, try doing a 32k-point FFT on some audio (which is only 16-bit) and back, in single precision. The audio won't have 10 bits of accuracy when you are done, and each point was only involved in ~15 operations.
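The accumulation error behind that FFT example is easy to demonstrate without one. A stdlib-only sketch that simulates single precision by rounding every intermediate result through a float32 round-trip (the FFT itself would need an external library, so this only shows the underlying effect):

```python
import struct

def f32(x):
    """Round a Python float (double) to the nearest IEEE float32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 a hundred thousand times, rounding every step to float32.
n = 100_000
single = 0.0
for _ in range(n):
    single = f32(single + f32(0.1))

double = 0.1 * n  # double precision lands on exactly 10000.0 here

print(f"double: {double}, simulated float32: {single}")
```

The single-precision sum drifts visibly away from 10000 because each addition rounds the low bits of 0.1 off against an ever-larger running total; an FFT compounds the same per-operation rounding across every butterfly stage.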
hpvd - Thursday, August 9, 2018 - link
Since LGA4189 is coming relatively soon, what do you think: will there be DDR5 support, or do we have to wait for the next platform in 2021+?

wow&wow - Thursday, August 9, 2018 - link
At least, starting from Cascade Lake, there won't be OS kernel relocation, such a JOKE in the processor industry!!!

iwod - Thursday, August 9, 2018 - link
New socket for Cooper Lake? And 8 memory channels? Now that is some competition. I assume Intel may want to bump the core count to 32 to combat AMD.

Which makes me slightly worried for AMD: if the price difference between EPYC 2 and Cooper Lake is within 20%, I don't think many will be choosing EPYC.
NikosD - Friday, August 10, 2018 - link
EPYC 2 will start at 48C/96T and later 64C/128T. Nothing to compete here for Intel.
AMD will exceed 20% server market share at the end of 2019.
rahvin - Friday, August 10, 2018 - link
Let's not forget Intel is raising prices again to keep their revenue the same while AMD takes market share. The top-end Xeon Platinum is going from $13K to $20K, where the comparable AMD Rome (Rome will be faster) will be 40% of that price.

Intel has some long dark days ahead of it, comparable to what they experienced with the Pentium 4. They emptied their bench to stay competitive with Epyc and Threadripper, and they've got nothing going for the future because of the process blowup. Instead they are going to respin at 14nm essentially the same thing they've got now, while Rome is sampling right now and should be in production by year's end. Intel won't be able to get their legs back under them until 2020 at the earliest, and AMD is likely to be so far ahead at that point that it'll probably be 2022 before they can catch up. That will definitely make this worse than the P4 era.
The hope is that this competition will force Intel to cut prices and drive down consumer prices. Let us pray.
TomWomack - Saturday, August 11, 2018 - link
Yes, AMD will put four dice onto a large hot MCM, linked by slightly drier string than last time. Still not playing in the same league as Xeon Gold, and with much larger manufacturing costs.

Dr. Swag - Wednesday, August 15, 2018 - link
Ah yes, it costs more to put together a few smaller dies than to use one larger one. Because that's how yields work /s

SanX - Thursday, August 9, 2018 - link
Why does ARM keep its distance from this ultra-high-margin market of server/supercomputer chips, where prices are already almost two orders of magnitude above production cost?

TomWomack - Saturday, August 11, 2018 - link
Because ARM doesn't manufacture chips - it has the skills to, but not the enormous capital requirements, and if it tried then its chip-manufacturing partners would suddenly stop being its partners.

Intel has an amazing reputation in server chips: a competitor would have to sell their chips at a third the price of Intel's (at least, Cavium and Qualcomm have both placed them at that sort of price point), and Intel is in a good position to drop their price 30% and remove the competitor's profit margin.
Fujitsu is showing off its ARM supercomputer chip at Hot Chips 30 on 21st August.
jamesjs - Thursday, August 16, 2018 - link
Great news! It always feels good to work on an Intel processor. I have a Lenovo laptop with an Intel Core i5.