While we've known about the existence of the Exynos 7420 for a while now, we didn't really know what to expect until recently. Today, it seems that Samsung is ready to start disclosing at least a few details about an upcoming Exynos 7 SoC, which is likely to be the Exynos 7420.

At a high level, Exynos 7 will have four Cortex-A57s clocked at 2.1 GHz and four Cortex-A53s, along with an LPDDR4-capable memory interface. According to Samsung Tomorrow, we can expect a 20% increase in device performance, which is likely a reference to clock speed, and 35% lower power consumption. In addition, there is a mention of a 30% productivity gain, which likely refers to performance per watt. Samsung states that these figures come from a comparison to its 20nm HKMG process, which we've examined before with the Exynos 5433 in the Note 4 Exynos review.
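
Taken together, those numbers invite a quick back-of-the-envelope check. The sketch below (in Python; the assumption that all three figures describe the same workload at the same operating point is ours, not Samsung's) shows that 20% more performance at 35% lower power would imply a considerably larger perf/watt gain than the 30% quoted, hinting that the figures come from different comparisons:

    # Back-of-the-envelope check on Samsung's quoted figures.
    # Assumption (ours, not Samsung's): all three figures describe
    # the same workload at the same operating point.
    perf_gain = 1.20  # "20% increase in device performance"
    power = 0.65      # "35% lower power consumption"

    implied_perf_per_watt = perf_gain / power
    print(f"Implied perf/W gain: {implied_perf_per_watt:.2f}x")  # ~1.85x

    # Samsung's separately quoted 30% gain (1.30x) is well below the
    # ~1.85x implied here, which suggests the three numbers were
    # measured at different operating points or on different workloads.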

Although there is no direct statement of which version of 14nm is used for this upcoming Exynos 7 Octa, given that this is the first 14nm IC to come from Samsung, it's likely that this SoC will use 14LPE, which focuses on reducing leakage and power consumption rather than improving switching speed.

Source: Samsung Tomorrow

Comments

  • Kvaern2 - Monday, February 16, 2015 - link

    Should, would, could.
    Over the years I've heard too many overly optimistic node-shrink proclamations from competing foundries that turned out to be vapor or got hit by huge delays, so I'll believe there's parity when I see it in a shipping device.
  • anactoraaron - Monday, February 16, 2015 - link

    Why do folks here keep saying that there's no 14nm Atom yet? It's called Cherry Trail, and it started shipping to OEMs at the beginning of the year... http://www.anandtech.com/show/8831/intel-shipping-...

    This means that there will be Cherry Trail devices soon. So enough with that noise.
  • Speedfriend - Monday, February 16, 2015 - link

    "And another thing. Intel could barely compete with ARM when it had half a node + FinFET ahead of ARM (22nm FinFET vs 28nm)."

    Geekbench
    Atom Z3795 multi-core: 3166
    Galaxy Note with Snapdragon 805 multi-core: 2975
    iPhone 6 multi-core: 2885

    You are right, it could barely compete....
  • Krysto - Monday, February 16, 2015 - link

    Now show me the GPU numbers. I was talking about the SoCs in general. Atom was usually at least a generation behind in GPU performance.
  • patrickjp93 - Tuesday, February 17, 2015 - link

    Who cares about the GPU score? Do you intend to do intense gaming on your phone now?! If the GPU does what it needs to do without causing you lag, it's fine. Intel and PowerVR know this all too well.
  • PC Perv - Monday, February 16, 2015 - link

    Where the hell is the Atom Z3795? Never heard of it. Sounds like an overclocked Z3750/Z3770 for the sake of... you-know-what.
  • IntelUser2000 - Tuesday, February 17, 2015 - link

    Great, compare chips that go into phones and don't need to be heavily subsidized versus one that needs to be subsidized and can't go into a phone. Fair comparison. And the lead for Atom, even accounting for that, is only 9% at best. That is BARELY competing.

    ARM chips on the "behind" process were neck-and-neck with Atom chips, and by the time 20nm ARM chips came along to completely kick Atom chips to the curb, Atom had been on 22nm for quite a while. Now 14nm Atoms are even later than 14nm ARM chips.

    Cherry Trail, as I remember, is a mere 2x in GPU and 5-10% faster in CPU compared to Bay Trail. It will still lose massively.
  • patrickjp93 - Tuesday, February 17, 2015 - link

    No, Intel's are the only 14nm chips out there and shipping in devices. Jeez, you people hate the only company leading the pack...
  • AnakinG - Monday, February 16, 2015 - link

    Will you stop posting if your prediction that Samsung will roll out the 7420 on the 14nm node in March or April turns out to be wrong? I'm wondering how confident you are in your claims.
  • patrickjp93 - Tuesday, February 17, 2015 - link

    You do realize Intel had to go from designing for performance to designing for perf/watt, right? That's a massive paradigm shift, and it took a while. Now ARM is hitting a brick wall in performance, whereas Intel is dropping in power while maintaining the same performance or gaining just a little. ARM has to go a bit more CISC every generation, including out-of-order superscalar processing (very CISC), to keep up the performance. But as it turns out, that performance circuitry is electrically and thermally expensive, and ARM's branch predictors are barely more than 40% accurate vs. Intel's 93%-accurate BP.
