The Cortex-X1 Micro-architecture: Bigger, Fatter, More Performance

While the Cortex-A78 seems relatively tame in its performance goals, today’s biggest announcement is the far more aggressive Cortex-X1. As already noted, Cortex-X1 is a significant departure from Arm's usual "balanced" design philosophy, with Arm designing a core that favors absolute performance, even if it comes at the cost of energy efficiency and space efficiency.

At a high level, the design could be summed up as an ultra-charged A78 – maintaining the same functional principles, but significantly growing the structures of the core in order to maximize performance.

Compared to the A78, it’s a wider core: the decoder goes up from 4- to 5-wide, the rename bandwidth increases to up to 8 Mops/cycle, some of the pipelines and caches are vastly changed up, the NEON units are doubled, and the L2 and L3 caches are doubled as well.

On the front-end (and indeed the rest of the core as well), the Cortex-X1 adopts all the improvements that we’ve already covered on the Cortex-A78, including the new branch units. On top of the changes the A78 introduced, the X1 further grows some of the blocks here. The L0 BTB has been upgraded from 64 entries on the Cortex-A77 and A78 to up to 96 entries on the X1, allowing for more zero-latency taken branches. The branch target buffers still form a two-tier hierarchy with the L0 and L2 BTBs, which Arm in previous disclosures referred to as the nanoBTB and mainBTB. The microBTB/L1 BTB was present in the A76 but has since been discontinued.

The macro-op cache has been outright doubled from 1.5K entries to 3K entries, making it one of the biggest such structures amongst publicly disclosed microarchitectures – bigger than even Sunny Cove’s 2.25K entries, but shy of Zen 2’s 4K-entry structure. We do have to make the disambiguation that Arm talks about macro-ops, while Intel and AMD talk about micro-op caches.

The fetch bandwidth out of the L1I has been bumped up 25% from 4 to 5 instructions per cycle, with a corresponding increase in decoder bandwidth, and the fetch and rename bandwidth out of the Mop cache has seen a 33% increase from 6 to 8 instructions per cycle. In effect, the core can act as an 8-wide machine as long as it’s hitting the Mop cache.
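As a rough illustration of why the Mop-cache hit rate matters so much here, below is a back-of-the-envelope model (my own, not Arm's) blending the 8-wide Mop-cache path with the 5-wide L1I/decode path; the hit rates used are assumed figures for illustration only:

```python
# Hypothetical model: effective front-end width as a weighted average of
# the 8-wide Mop-cache path and the 5-wide L1I/decode path.
def effective_width(mop_hit_rate, mop_width=8, decode_width=5):
    return mop_hit_rate * mop_width + (1.0 - mop_hit_rate) * decode_width

print(effective_width(0.8))   # with an assumed 80% hit rate: ~7.4 slots/cycle
print(effective_width(0.0))   # missing the Mop cache entirely: 5.0
```

The model is simplistic (it ignores taken-branch fragmentation and fetch-queue effects), but it shows how code with good Mop-cache locality sees most of the 8-wide benefit.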

On the mid-core, Arm again quotes the dispatch bandwidth in terms of Mops or instructions per cycle, increasing it by 33% from 6 to 8 when comparing the X1 to the A78. In µop terms, the core can handle up to 16 dispatches per cycle when cracking Mops fully into smaller µops – a 60% increase compared to the 10 µops/cycle the A77 was able to achieve.

The out-of-order window size has been increased from 160 to 224 entries, improving the core's ability to extract ILP. This had always been an aspect Arm was hesitant to upgrade, as they had mentioned that performance doesn’t scale nearly linearly with the increased structure size, and it comes at a cost of power and area. The X1 is able to make those compromises given that it doesn’t have to target as wide a range of vendor implementations.

On the execution side, we don’t see any changes to the integer pipelines compared to the A78. However, the floating point and NEON pipelines diverge more significantly from past microarchitectures, thanks to the doubling of the pipelines. Doubling here can be taken in the literal sense, as the two existing pipelines of the A77 and A78 are essentially copy-pasted, and the two pairs of units are identical in their capabilities. That’s a huge increase in execution resources.

In effect, the Cortex-X1 is now a 4x128b SIMD machine, pretty much equal in vector execution width to desktop cores such as Intel’s Sunny Cove or AMD’s Zen 2. Though unlike those designs, Arm's current ISA doesn't allow individual vectors to be larger than 128b, which is something to be addressed in a next-generation core.

On the memory subsystem side, the Cortex-X1 also sees some significant changes – although the AGU setup is the same as that found on the Cortex-A78.

On the part of the L1D and L2 caches, Arm has created new designs that differ in their access bandwidth. The interfaces to the caches here aren’t wider; rather, what’s changed is the cache designs themselves, which now implement double the memory banks. What this solves is possible bank conflicts when doing multiple concurrent accesses to the caches – something we may have observed as odd “zig-zag” patterns in our memory tests of the Cortex-A76 cores a few years back, and which is still present in some variations of that µarch.
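A toy model (my own illustration, not Arm's actual bank-indexing scheme) shows how doubling the bank count can make strided access patterns that previously collided spread cleanly:

```python
# Toy bank-conflict model: 16-byte-wide banks, with consecutive 16-byte
# chunks interleaved across them. Two same-cycle accesses conflict when
# they land in the same bank. Bank width and addresses are assumed values.
def bank(addr, n_banks, bank_width=16):
    return (addr // bank_width) % n_banks

def conflicts(addrs, n_banks):
    banks = [bank(a, n_banks) for a in addrs]
    return len(banks) - len(set(banks))

# A 64-byte stride that collides with 8 banks but spreads over 16:
addrs = [0, 64, 128, 192]
print(conflicts(addrs, 8))    # banks [0, 4, 0, 4] -> 2 conflicts
print(conflicts(addrs, 16))   # banks [0, 4, 8, 12] -> 0 conflicts
```

The "zig-zag" patterns mentioned above are consistent with exactly this kind of stride-dependent collision behavior.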

The L1I and L1D caches on the X1 are meant to be configured at 64KB. On the L2, because it’s a brand new design, Arm also took the opportunity to increase the maximum size of the cache, which now doubles up to 1MB. Again, this actually isn’t the same 1MB L2 cache design that we first saw on the Neoverse-N1, but a new implementation. The access latency is 1 cycle better than the 11-cycle variant of the N1, achieving 10 cycles on the X1, regardless of the size of the cache.

The memory subsystem also increases the number of in-flight loads and stores it can support, growing the window here by 33% and adding even more to the MLP ability of the core. We have to note that this increase doesn’t merely refer to the store and load buffers, but to the whole system’s capability to track and service requests.

Finally, the L2 TLB has also seen a doubling in size compared to the A78 (66% increase vs A77) with 2K entries coverage, serving up to 8MB of memory at 4K pages, which makes for a good fit for the envisioned 8MB L3 cache for target X1 implementations.
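The 8MB coverage figure follows directly from the entry count and the page size; a quick sanity check (the A78's 1K-entry figure below is implied by the "doubling" rather than explicitly stated):

```python
# TLB reach = number of entries x page size.
def tlb_coverage_bytes(entries, page_bytes=4096):
    return entries * page_bytes

MiB = 1024 * 1024
print(tlb_coverage_bytes(2048) // MiB)  # X1's 2K entries at 4K pages: 8 (MiB)
print(tlb_coverage_bytes(1024) // MiB)  # an implied 1K-entry A78: 4 (MiB)
```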

The doubling of the L3 cache in the DSU doesn’t necessarily mean that it’s going to be a slower implementation, as the latency can remain the same, but depending on partner implementations it can mean a few extra cycles of latency. What this is likely referring to is the option of banking the L3 with separate power management. To date, I haven’t heard of any vendors using this feature of the DSU, as most implementers such as Qualcomm have always had the 4MB L3 fully powered on all the time. It is possible that with an 8MB DSU some vendors might look into power managing this better, for example keeping it only partially powered on while only the little cores are active.

Overall, what’s clear about the Cortex-X1 microarchitecture is that it largely consists of the same fundamental building blocks as the Cortex-A78, only with bigger and more numerous structures. It’s particularly in the front-end and the mid-core where the X1 really supersizes things compared to the A78, being a much wider microarchitecture at heart. The arguments about the low return on investment for some structures just don’t apply to the X1, and Arm went for the biggest configurations that were feasible and reasonable, even if that grows the size of the core and increases power consumption.

I think the only real design constraint the company set for itself here is the frequency capability of the X1. It’s still a very short pipeline design, with a 10-cycle branch mispredict penalty and a 13-stage frequency design, and this remains the same between the A78 and X1, with the latter’s bigger structures and wider design not handicapping the peak frequencies of the core.
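A first-order model (my own sketch; only the 10-cycle penalty comes from the article, while the base IPC and mispredict rate are assumed values) shows why keeping the mispredict penalty short matters so much for a wide core:

```python
# Effective IPC when every branch mispredict stalls the pipeline for the
# full penalty. Base IPC and mispredict rate are hypothetical inputs.
def effective_ipc(base_ipc, mispredicts_per_instr, penalty=10):
    cpi = 1.0 / base_ipc + mispredicts_per_instr * penalty
    return 1.0 / cpi

# At an assumed 5 IPC peak and 1 mispredict per 200 instructions:
# CPI = 0.2 + 0.05 = 0.25 -> 4.0 effective IPC
print(effective_ipc(5.0, 1 / 200))
```

Doubling the penalty to 20 cycles in this model would drop the same workload to about 3.3 IPC, which is why a wider core only pays off if the pipeline stays short.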


  • tkSteveFOX - Wednesday, May 27, 2020 - link

    Would be great if we get a 1 x X1 + 3xA78 and 4xA55 with 4MB L3 shared between the big cores.
    Or just 2 x X1 and 6xA55 cores with 8MB L3 cache for the X1 cores (would be interesting to see the efficiency here compared to the above).
    5nm gives a lot of headroom and even using 1x3GHz A77 and 3x2.7 GHz A77 is possible under this node.
    Reply
  • ReverendDC - Wednesday, May 27, 2020 - link

    I'm excited to see what comes of this for Windows on ARM. I know there are some who will find it pointless, but there are millions of office workers and IT pros that support them who would find an all-day, cheaply replaceable, Office-chewing, LTE/5G always-connected device to be quite useful...

    For years Intel has tried to make an all-day system, and finally straight gave up! Yes, Windows is "heavier" on system calls, but then again, Linux can be as well. Seems to have shoehorned in nicely after 4+ years of trial and error (and Law and Order, but...) with Android. While I wouldn't buy a Surface Pro X, it does do 80% of what you'd expect from a full-day Win10 x86 system. That's progress. Let's see if this makes more!
    Reply
  • serendip - Wednesday, May 27, 2020 - link

    The X1 belongs in a flagship ARM Windows device like the next Surface Pro X. The current model has a Qualcomm SQ1 and it already performs at 8th gen Core i5 levels, with half the power consumption when running ARM code. An X1-based SoC could offer top tier i7 performance at half the power and hopefully a lower price. Competition is good to keep Intel honest. Reply
  • ballsystemlord - Thursday, May 28, 2020 - link

    @Andrei You have a technical error:
    "...all while reducing power by 4% and reducing area by 4%"
    In the picture area reduction is 5, not 4 percent.
    "...all while reducing power by 4% and reducing area by 5%"
    Reply
  • anonomouse - Saturday, May 30, 2020 - link

    So with two tiers of big cores now, and presumably a new small core and supposedly a new middle-ish core to span the ever-increasing gap between big and little... does this mean that in a couple of years Android phones will have to deal with scheduling across 4 different types of cores? bigger.big.middle.little? Reply
  • fozia - Saturday, June 6, 2020 - link

    I agree. But it's not an achievement to be slower than a 1-year-old chip. This creates the problem that you cannot hyper-focus on any one area of the PPA triangle without making compromises in the other two. Reply
  • vladpetric - Friday, June 26, 2020 - link

    Peak performance is not performance.

    "Peak" is really just a value you're guaranteed to never exceed ...
    Reply
  • mi1400 - Tuesday, October 6, 2020 - link

    https://images.anandtech.com/doci/15813/A78-X1-cro...
    Why do the yellow and orange starting points/dots have drift in them? The SPEC performance axis doesn't mandate letting one start ahead of the other. And if this drift is removed, conjoining both starting points, the difference in performance will be so small that both lines will seem overlapping... in fact the curves between the 2nd and 3rd dots of the A77/A78 will make the A78 even slower. The curves between the 3rd and 4th dots of the A77/A78 will give the A78 some benefit, but again the curve between the 4th and 5th dots will make A77 = A78.
    What do u say!?! Thanks!
    Reply
  • ChrisGX - Monday, October 12, 2020 - link

    A lot of people are saying that with Cortex-X1 ARM is bringing the fight to Apple’s powerhouse CPUs, i.e. the potent custom ARM processors that Apple develops for consumer computing products.

    Actually, that isn't exactly what is happening. I had a close look at the performance data (using ARM's own projections) and it looks like it will take until the Makalu generation before a successor to the X1 (very nearly) catches up to the A14 on outright (integer) performance. For some time, Apple has had a 2.5 year lead in the performance stakes over ARM and no change is on the cards in that regard. Cortex X1, contrary to ARM's public remarks, continues the existing strategy of winning on energy efficiency not seeking performance gains at any cost. As a matter of fact, the energy efficiency of the X1 isn't too bad as a starting point. And, when modestly clocked A78 cores are also in the mix energy efficiency improves greatly. With the next generation of SoCs based on A78 and X1 licensed ARM cores manufacturers will have the opportunity to either sharply reduce power consumption or add new and advanced processing capabilities without raising power budgets. And, that can be achieved while offering a good (single threaded) performance boost of 33% (or more) over existing A77 based processors.

    When it comes to outright execution speed, it seems that ARM is pushing harder on floating point performance than on other areas. In that area ARM could conceivably reach performance parity with Apple's SoCs sooner rather than later.
    Reply
  • Salman Ahmed - Tuesday, April 6, 2021 - link

    Can Cortex A75 and Cortex A76 be paired together? Reply
