While we've known about the existence of the Exynos 7420 for a while now, we haven't known quite what to expect from it. Today, it seems that Samsung is ready to start disclosing at least a few details about an upcoming Exynos 7 SoC, which is likely to be the Exynos 7420.

At a high level, the Exynos 7 will have four Cortex A57s clocked at 2.1 GHz alongside four Cortex A53s, along with an LPDDR4-capable memory interface. According to Samsung Tomorrow, we can expect a 20% increase in device performance, which is likely a reference to clock speed, and 35% lower power consumption. In addition, there is a reference to a 30% productivity gain, which likely refers to performance per watt. Samsung claims that these figures come from a comparison to its 20nm HKMG process, which we've examined before with the Exynos 5433 in our Note 4 Exynos review.
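As a quick sanity check on those numbers (our own back-of-the-envelope arithmetic, not anything Samsung has published): if the +20% performance and -35% power figures applied simultaneously, the implied performance-per-watt gain would far exceed the quoted 30%, which suggests the three figures describe separate comparisons against 20nm rather than one combined scenario:

```python
# Rough sanity check of Samsung's quoted 14nm figures. The inputs come from
# the press claims; treating them as simultaneous is our assumption, made
# only to show why they probably describe separate operating points.
perf_vs_20nm = 1.20    # claimed: 20% higher performance
power_vs_20nm = 0.65   # claimed: 35% lower power consumption

perf_per_watt_gain = perf_vs_20nm / power_vs_20nm - 1
print(f"Implied combined perf/W gain: {perf_per_watt_gain:.0%}")  # ~85%, well above 30%
```

Either way, these are vendor figures measured against Samsung's own 20nm process, so independent verification will have to wait for shipping silicon.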

Although there is no direct statement of which version of 14nm is used for this upcoming Exynos 7 Octa, given that this is the first 14nm IC to come from Samsung, it's likely that this SoC uses 14LPE, which focuses on reducing leakage and power consumption rather than improving switching speed.

Source: Samsung Tomorrow

Comments

  • ddriver - Tuesday, February 17, 2015 - link

    It is only logical that other manufacturers will catch up; process technology is not something that can scale indefinitely. Soon it will hit a brick wall and require an entirely different technology to continue scaling things down.

    And of all the potential replacements, none I can think of is really applicable on a large scale, so chances are chip processes will only get to a point and stay there - you know, like what happened with rocket engines at the end of the 60s: barely anything has been improved upon since then, nor has any more promising technology emerged in those 40+ years.

    My hope for the future is printed electronics becoming cheap and efficient - naturally, they will never be as efficient as current etched chips, but they will make it cheap to produce custom designs in small quantities, and dedicated designs vastly exceed the performance and efficiency of general purpose processors running the workload as a program. And you don't need a factory to make them - all you need is a small printer. It will be like FPGAs but much cheaper and more efficient, and I assume also very environmentally friendly - barely any environmental impact and 99% recyclability.
  • Guest8 - Tuesday, February 17, 2015 - link

    It is logical but improbable. Easiest example: Intel buys their tools from the same companies as TSM, Samsung, etc., and yet they have been and still are ahead in process technology. Logically speaking, everyone should be on the same process node, but they are not.

    "The foundry industry is spending about $25 billion per year to supply Apple and QCOM with about a total of $12 billion per year in product ($7 billion to Apple and $5 billion to QCOM) That can't work. ..... Then to make matters worse, Apple bounces their business, (or plans to) from TSMC to Samsung and back again. How can that work?" - Russ Fisher

    This will be the last stand for TSM and Samsung to come close to Intel, as there just isn't enough business to sustain the R&D required; no other manufacturer has a ridiculous amount of cash coming in like Intel does to finance the R&D to continue their node shrinks. It will get even worse when Intel finally produces a competitive SoC at 14nm and below, as that will further displace the available money the other foundries would get. It will take several years to play out, but this is what will happen.
  • ddriver - Tuesday, February 17, 2015 - link

    Intel makes more money and can afford to burn more money on tooling and process improvements.

    Kids get their clothes from the same store, but kids with rich parents buy clothes more often and stay trendy, while kids with poorer parents buy new clothes less often and sometimes have to skip a fad or two...

    Traditional chip manufacturing will get stranded in the years to come - I mean in less than 10 years. There are ways to scale down further, but none of them are commercially feasible, and they will likely never be. Chips will get stuck at a process node, and that's normal - a bell-shaped curve has one single peak. And besides being normal, it is good enough. So it is not a bad thing that in 20 years we will not have supercomputers in our pockets, because such things are not needed; even at this point, enough is enough. The industry might be able to scale another 3-4 steps down, but that will be it. The rest is empty promises, much like people were supposed to have hover cars and deep space travel by the 2000s...
  • mkozakewich - Tuesday, February 17, 2015 - link

    We *do* have supercomputers in our pockets.

    And no matter how powerful CPUs become (even a few more orders of magnitude), people like you will never refer to the ones in our pockets as supercomputers, because they won't stand up to the supercomputers of the day.
  • ddriver - Wednesday, February 18, 2015 - link

    No, you don't. A cellphone today having more horsepower than a "supercomputer" from back in the days when they were made of tubes doesn't make it one. It goes without saying that when I say "supercomputer" today, I mean one by contemporary standards, not something abstract and relative.

    Just like when you say "tomorrow" you mean the day after today, not some "always the next day" which will never arrive.

    To put it in an absolute metric: by a "supercomputer" I mean a petaflop-scale device. A pocket device will not reach that in 10, nor in 100, nor in a million years.
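    For a sense of the scale that claim involves (the phone figure below is a rough assumption about peak NEON throughput, not a measured number), here is the gap between a 2015 phone CPU and the petaflop bar:

    ```python
    import math

    # Rough order-of-magnitude sketch; the phone's peak-FLOPS figure is an
    # assumption (4 Cortex-A57 cores at 2.1 GHz, ~8 single-precision NEON
    # FLOPs per cycle per core), not a benchmark result.
    phone_flops = 4 * 2.1e9 * 8          # ~6.7e10, i.e. ~67 GFLOPS peak
    petaflop = 1e15                      # the "supercomputer" bar above

    ratio = petaflop / phone_flops
    print(f"Gap: {ratio:,.0f}x, or ~{math.log2(ratio):.0f} doublings of throughput")
    ```

    Whether ~14 doublings ever happens in a pocket-sized power budget is exactly the scaling question being argued here.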

    As for the fact that we use devices which are supercomputers relative to the machines that were used in the past to achieve great tasks and reach important milestones, and we use them for making duckface "selfies" and wasting our lives playing stupid games - that just goes to show that better technology doesn't really result in better people, and can often do the contrary...
  • Guest8 - Wednesday, February 18, 2015 - link

    @ddriver the problem with your analogy is that in the APU manufacturing business you can't "skip" a node or two and come back roaring, as you will lose the sales that come with leading- and bleeding-edge processes. When you lose the sales you lose R&D funds, and without funds you can't exactly go into the APU thrift store and pick up new methods. If Apple doesn't come back to TSM, how will TSM justify the expense of going down to 10nm and below? They don't have the cash, they don't have the sales, and they don't have the technical knowledge acquired by producing the previous nodes.

    It will be down to Intel, and maybe Samsung depending on how many employees Samsung can poach from Intel - that is the only reason Samsung can stay competitive with Intel, as they just poached TSM employees on the cheap. You can already see it now: everyone is moving to Samsung's 14nm process. Which chip vendor has announced for TSM's 16nm FF+? Qualcomm? Nope. Apple? Nope. That's the bulk, if not all, of your volume at the bleeding edge. Qualcomm may come back to TSM, but why would they, after the problems they had at 20nm and losing Samsung's APU business? There is much more to this than whether it can be done - business economics play a huge role. The only business TSM will have in a few years is the low-margin legacy-node business; they won't be able to command a premium at the bleeding edge because they will no longer produce at the bleeding edge. 16nm FF+ is their last stand against Samsung and Intel.
  • jimjamjamie - Monday, February 23, 2015 - link

    I feel like software still has a lot of catching up to do compared to hardware at this point anyway - see the current state of the GPU industry for an example. All those billions of transistors went to waste until the likes of Mantle/DX12 and multi-threaded drivers showed that there is a huge amount of performance tweaking that can be done at the low level. Obviously this can only go so far, but from what I can see, we're a long way off fully exploiting the existing hardware.
  • Kristian Vättö - Tuesday, February 17, 2015 - link

    The lithography refers to the smallest pitch in the silicon and always has. The actual density doesn't necessarily correlate with that, so Intel's 14nm can't be compared with others' 14nm until the first die shots are out.
  • PC Perv - Monday, February 16, 2015 - link

    "No True Scotsman," I see
  • Yojimbo - Monday, February 16, 2015 - link

    I read an article showing that, by at least one metric, Intel's 14nm process is much denser than either Samsung's 14nm or TSMC's 16nm, whereas previously, at times when Intel was far ahead in the nodes, the opposite was true. So perhaps Intel's lead is pretty much the same as it always was: less than it seemingly was before, and more than it seemingly is now.
