Intel Disabled AVX-512, but Not Really

One of the more interesting disclosures about Alder Lake earlier this year was that the processor would not have Intel's latest 512-bit vector extensions, AVX-512, despite the company making a big song and dance about how it was working with software developers to optimize for it, why it was in its laptop chips, and how no transistor should be left behind. The catch is that the processor, inside the silicon, actually does have the AVX-512 unit there. We were told as part of the extra Architecture Day Q&A that it would be fused off, and that the plan was for all Alder Lake CPUs to have it fused off.

Part of the issue with AVX-512 support on Alder Lake is that only the P-cores have the feature in the design; the E-cores do not. One of the downsides of most operating system designs is that when a new program starts, there is no way to accurately determine which core it will be placed on, or whether the code will take a path that includes AVX-512. So if AVX-512 code were naively run on a core that did not understand it, such as an E-core, it would raise an illegal-instruction fault, which could crash the program or the system. Experts in the area have pointed out that the chip could technically be designed to catch the fault and hand the thread off to the right core, but Intel hasn't done this here as it adds complexity. Disabling AVX-512 on Alder Lake means that the P-cores and the E-cores share a unified common instruction set, and both can run any software supported on either.
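
To illustrate the hazard, here is a minimal sketch (assuming GCC or Clang on x86) of the common pattern where software checks CPUID once at startup and then commits to an AVX-512 code path. The check itself is standard; the problem on a hybrid chip is that nothing stops the operating system from later moving the thread to a core without the instructions, at which point the first 512-bit instruction faults.

```c
/* Sketch of the common "detect once, dispatch forever" pattern that breaks
 * on a hybrid CPU if the OS later migrates the thread to an E-core.
 * GCC/Clang assumed; build with: gcc -O2 avx512_dispatch.c */
#include <cpuid.h>
#include <immintrin.h>
#include <stdio.h>

static int has_avx512f(void)
{
    unsigned int eax, ebx, ecx, edx;
    /* CPUID leaf 7, sub-leaf 0: EBX bit 16 = AVX512F.
     * A production check would also confirm OSXSAVE/XGETBV ZMM state. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 0;
    return (ebx >> 16) & 1;
}

__attribute__((target("avx512f")))
static double avx512_kernel(void)
{
    /* If this thread is later rescheduled onto a core without AVX-512,
     * the 512-bit instructions below raise #UD (seen as SIGILL). */
    __m512d v = _mm512_set1_pd(1.0);
    v = _mm512_add_pd(v, v);
    double out[8];
    _mm512_storeu_pd(out, v);
    return out[0];
}

int main(void)
{
    if (has_avx512f())
        printf("AVX-512 path: %f\n", avx512_kernel());
    else
        printf("Scalar/AVX2 fallback path\n");
    return 0;
}
```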

There was a thought that if Intel were to release a version of Alder Lake with P-cores only, or if a system had all the E-cores disabled, there might be an option to have AVX-512. Intel shot down that concept almost immediately, saying very succinctly that no Alder Lake CPU would support AVX-512.

Nonetheless, we test to see if it is actually fused off.

On my first system, the MSI motherboard, I could easily disable the E-cores: no problem, just set the BIOS to zero E-cores. However, this wasn't sufficient, as AVX-512 was still not detected.

On a second system, an ASUS motherboard, there was some funny option in the BIOS.

Well I’ll be a monkey’s uncle. There’s an option, right there, front and centre for AVX-512. So we disable the E-cores and enable this option. We have AVX-512 support.

Those that have some insight into AVX-512 might be aware that there are a couple of dozen different versions/add-ons of AVX-512. We confirmed that the P-cores in Alder Lake have the following (a short CPUID sketch for reproducing the check follows the list):

  • AVX512-F / F_X64
  • AVX512-DQ / DQ_X64
  • AVX512-CD
  • AVX512-BW / BW_X64
  • AVX512-VL / VLBW / VLDQ / VL_IFMA / VL_VBMI / VL_VNNI
  • AVX512_VNNI
  • AVX512_VBMI / VBMI2
  • AVX512_IFMA
  • AVX512_BITALG
  • AVX512_VAES
  • AVX512_VPCLMULQDQ
  • AVX512_GFNI
  • AVX512_BF16
  • AVX512_VP2INTERSECT
  • AVX512_FP16
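
For completeness, each of these sub-features is reported as a CPUID flag (leaf 7, sub-leaves 0 and 1), so the list above can be reproduced in a few lines. Below is a small sketch covering a subset of the flags; bit positions are taken from Intel's documentation, and the usual GCC/Clang cpuid.h helper is assumed.

```c
/* Sketch: enumerate a subset of the AVX-512 sub-feature flags via CPUID. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 1;
    printf("AVX512F             : %u\n", (ebx >> 16) & 1);
    printf("AVX512DQ            : %u\n", (ebx >> 17) & 1);
    printf("AVX512_IFMA         : %u\n", (ebx >> 21) & 1);
    printf("AVX512CD            : %u\n", (ebx >> 28) & 1);
    printf("AVX512BW            : %u\n", (ebx >> 30) & 1);
    printf("AVX512VL            : %u\n", (ebx >> 31) & 1);
    printf("AVX512_VBMI         : %u\n", (ecx >>  1) & 1);
    printf("AVX512_VBMI2        : %u\n", (ecx >>  6) & 1);
    printf("AVX512_VNNI         : %u\n", (ecx >> 11) & 1);
    printf("AVX512_BITALG       : %u\n", (ecx >> 12) & 1);
    printf("AVX512_VP2INTERSECT : %u\n", (edx >>  8) & 1);
    printf("AVX512_FP16         : %u\n", (edx >> 23) & 1);

    /* AVX512_BF16 is reported in leaf 7, sub-leaf 1: EAX bit 5. */
    if (__get_cpuid_count(7, 1, &eax, &ebx, &ecx, &edx))
        printf("AVX512_BF16         : %u\n", (eax >> 5) & 1);
    return 0;
}
```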

This is, essentially, the full Sapphire Rapids AVX-512 support. That makes sense, given that this is the same core that’s meant to be in Sapphire Rapids (albeit with cache changes). The core also supports dual AVX-512 ports, as we’re detecting a throughput of 2 per cycle on 512-bit add/subtracts.
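
The two-per-cycle figure can be sanity-checked with a simple microbenchmark: run enough independent 512-bit add chains to cover latency times port count, and count how many adds complete per cycle. The sketch below is one way to do that (GCC/Clang assumed, built with -O2 -mavx512f and pinned to a P-core); note that __rdtsc counts reference cycles rather than core cycles, so either fix the core clock or scale the result by the frequency ratio.

```c
/* Sketch: estimate 512-bit FP add throughput with independent chains.
 * Build: gcc -O2 -mavx512f zmm_add_tput.c ; pin to a P-core, e.g. taskset -c 0 */
#include <immintrin.h>
#include <x86intrin.h>   /* __rdtsc */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t iters = 50000000ULL;
    __m512 inc = _mm512_set1_ps(1e-7f);
    /* Eight independent accumulators: more in-flight adds than
     * latency x port count, so throughput (not latency) is measured. */
    __m512 a0 = _mm512_set1_ps(0.0f), a1 = a0, a2 = a0, a3 = a0;
    __m512 a4 = a0, a5 = a0, a6 = a0, a7 = a0;

    uint64_t t0 = __rdtsc();
    for (uint64_t i = 0; i < iters; i++) {
        a0 = _mm512_add_ps(a0, inc);
        a1 = _mm512_add_ps(a1, inc);
        a2 = _mm512_add_ps(a2, inc);
        a3 = _mm512_add_ps(a3, inc);
        a4 = _mm512_add_ps(a4, inc);
        a5 = _mm512_add_ps(a5, inc);
        a6 = _mm512_add_ps(a6, inc);
        a7 = _mm512_add_ps(a7, inc);
    }
    uint64_t t1 = __rdtsc();

    /* Keep the results live so the loop isn't optimized away. */
    float sink[16];
    __m512 sum = _mm512_add_ps(_mm512_add_ps(a0, a1), _mm512_add_ps(a2, a3));
    sum = _mm512_add_ps(sum, _mm512_add_ps(_mm512_add_ps(a4, a5),
                                           _mm512_add_ps(a6, a7)));
    _mm512_storeu_ps(sink, sum);

    /* __rdtsc counts TSC (reference) cycles, not core cycles; with a fixed
     * core clock the ratio below approaches the adds retired per cycle. */
    printf("512-bit adds per TSC tick: %.2f (sink %f)\n",
           (double)(8 * iters) / (double)(t1 - t0), sink[0]);
    return 0;
}
```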

For performance, I’m using our trusty 3DPMAVX benchmark here, and compared to the previous generation Rocket Lake (which did have AVX-512), the score increases by a few percent in a scenario which isn’t DRAM limited.

(2-2) 3D Particle Movement v2.1 (Peak AVX)

Now, back in that Rocket Lake review, we noted that the highest power consumption observed for the chip was during AVX-512 operation. At that time, our testing showcased a big +50 W jump between AVX2 and AVX-512 workloads. This time around, however, Intel has managed to adjust the power requirements for AVX-512, and in our testing they were very reasonable:

In this graph, we're showing each of the 3DPM algorithms running for 20 seconds, then idling for 10 seconds. Each one has a different intensity of AVX-512, which is why the power goes up and down. In each instance, the CPU used an all-core turbo frequency of 4.9 GHz, in line with non-AVX code, and the peak power we observed was actually 233 W, well below the 241 W rated for processor turbo.

Why?

So the question then refocuses back on Intel. Why was AVX-512 support for Alder Lake dropped, and why were we told that it is fused off, when clearly it isn’t?

Based on a variety of conversations with individuals I won’t name, it appears that the plan to have AVX-512 in Alder Lake was there from the beginning. It was working on early silicon, even as far as ES1/ES2 silicon, and was enabled in the firmware. Then for whatever reason, someone decided to remove that support from Intel’s Plan of Record (POR, the features list of the product).

Removing it from the POR means that the feature did not have to be validated for retail, which partly speeds up the binning and testing/validation process. As far as I understand it, the engineers working on the feature were livid. While all their hard work would be put to use on Sapphire Rapids, it still meant that Alder Lake would drop the feature and that those who wanted to prepare for Alder Lake would have to remain on simulated support. Not only that, but as we've seen since Architecture Day, it has been a bit of a marketing headache. Whoever initiated that dropped support clearly didn't think about how that messaging was going to go down, or how they were going to spin it into a positive. For the record, removing support isn't a positive, especially given how much hullabaloo it seems to have caused.

We've done some extensive research into what Intel has done in order to 'disable' AVX-512. It looks like the base firmware that Intel creates has an option to enable/disable the unit, as it probably does for a lot of other features. Intel then hands this base firmware to the vendors, and they adjust it as they wish. As far as we understand, when the decision to drop AVX-512 from the POR was made, the option to enable/disable AVX-512 was obfuscated in the base firmware. The idea was that the motherboard vendors wouldn't be able to change the option unless they specifically knew how to; the standard hook to change that option was gone.

However, some motherboard vendors have figured it out. In our discoveries, we have learned that this works on ASUS, GIGABYTE, and ASRock motherboards; however, MSI motherboards do not have this option. It's worth noting that the motherboard vendors likely designed all of their boards on the premise that AVX-512 and its high current-draw needs would be there, so when Intel cut it, it perhaps meant that some boards were over-engineered at a higher cost than needed. I bet a few of them weren't happy.

Update: MSI reached out to me and said they will have this feature in BIOS versions 1.11 and above. Some boards already have this BIOS available; the rest will follow shortly.

But AVX-512 is enabled, and we are now in a state of limbo on this. Clearly the unit isn't fused off; it has just been hidden. Some engineers are annoyed, but other smart engineers at the motherboard vendors figured it out. So what does Intel do from here?

First, Intel could put the hammer down and execute a scorched-earth policy: completely strip out the firmware for AVX-512, and dictate that future BIOS/UEFI releases on all motherboards going forward cannot have this option, lest the motherboard manufacturer face some sort of wrath / decrease in marketing discretionary funds / support. Any future CPUs coming out of the factory would actually have the unit fused off, rather than simply turned off.

Second, Intel could lift the lid, acknowledge that someone made an error, and state that they're prepared to support it in future consumer chips with proper validation when in a P-core-only mode. This includes the upcoming P-core-only chips next year.

Third, treat it like overclocking. It is what it is, your mileage may vary, no guarantee of performance consistency, and any errata generated will not be fixed in future revisions.

As I've mentioned, apparently this decision didn't go down too well. I'm still trying to find the name of the person (or people) who made this decision, and to get their side of the story as to why, technically, it was made. We were told 'No Transistor Left Behind', except for these ones in that person's mind, clearly.

 

Comments

  • mode_13h - Saturday, November 6, 2021 - link

    > the only way I think they can remedy this is by designing a new core from scratch

    I'm not sure I buy this narrative. In the interview with AMD's Mike Clark, he said AMD takes a fresh view of each new generation of Zen and then only reuses what old parts still fit. As Intel is much bigger and better-resourced, I don't see why their approach would fundamentally differ.

    > or scaling Gracemont to target Zen 4 or 5.

    I don't understand this. The E-cores are efficiency-oriented (and also minimize area, I'd expect). If you tried to optimize them for performance, they'd just end up looking & behaving like the P-cores.
  • GeoffreyA - Sunday, November 7, 2021 - link

    I stand by my view that designing a CPU from scratch will bring benefit, while setting them back temporarily. Of course, I'm no expert, but it's reasonable to guess that, no matter how much they change things, they're still being restricted by choices made in the Pentium Pro era. In the large, sweeping points of the design, it's similar, and that is exerting an effect. Start from scratch, and when you reach Golden Cove IPC, it'll be at lower power, I think. Had AMD gone on with K10, I do not doubt it would never have achieved Zen's perf/watt. Sometimes it's best to demolish the edifice and raise it again, not going to the opposite extreme of a radical departure.

    As for the E-cores, if I'm not mistaken, they're at greater perf/watt than Skylake, reaching the same IPC more frugally. If that's the case, why not scale it up a bit more, and by the time it reaches GC/Zen 3 IPC, it may well end up doing so with less power. Remember the Pentium M.

    What I'm trying to say is, you've got a destination: IPC. These three architectures are taking different routes of power and area to get there. GC has taken a road with heavy toll fees. Zen 3, much cheaper. Gracemont appears to be on an even more economical road. The toll, even on this path, will go up but it'll still be lower than GC's. Zen, in general, is proof of that, surpassing Intel's IPC at a lower point of power.
  • GeoffreyA - Sunday, November 7, 2021 - link

    Anyhow, this is just a generic comment by a layman who's got a passion for these things, and doesn't mean to talk as if he knows better than the engineers who built it.
  • Wrs - Sunday, November 7, 2021 - link

    It's not trivial to design a core from scratch without defining an instruction set from scratch, i.e., breaking all backward compatibility. x86 has a tremendous amount of legacy. ARM has quite a bit as well, and growing each year.

    Can they redo Golden Cove or Gracemont for more efficiency at same perf/more perf at same efficiency? Absolutely, nothing is perfect and there's no defined tradeoff between performance and efficiency that constitutes perfect. But simply enlarging Gracemont to near Golden Cove IPC (a la Pentium M to Conroe) is not it. By doing so you gradually sacrifice the efficiency advantage in Gracemont, and might get something worse than Golden Cove if not optimized well.

    The big.LITTLE concept has proven advantages in mobile and definitely has merit with tweaks/support on desktop/server. The misconception you may have is that Golden Cove is an inherently inefficient core like Prescott (P4) or Bulldozer; it isn't. It's just sometimes driven at high turbo/high power, making it look inefficient, when that's really more a process capability than a liability.
  • GeoffreyA - Monday, November 8, 2021 - link

    Putting together a new core doesn't necessarily mean a new ISA. It could still be x86.

    Certainly, Golden Cove isn't of Prescott's or Bulldozer's nature and the deplorable efficiency that results from that; but I think it's pretty clear that it's below Zen 3's perf/watt. Now, Gracemont is seemingly of Zen's calibre but at an earlier point of its history. So, if they were to scale this up slowly, while scrupulously maintaining its Atom philosophy, it would reach Zen 3 at similar or less power. (If that statement seems laughable, remember that Skylake > Zen 1, and Gracemont is roughly equal to Skylake.) Zen 3 is right on Golden Cove's tail. So why couldn't Gracemont's descendant reach this class using less power? Its design is sufficiently different from Core to suggest this isn't entirely fantasy.

    And the fashionable big/little does have advantages; but the question is, do those outweigh the added complexity? I would venture to say, no.
  • mode_13h - Monday, November 8, 2021 - link

    > they're still being restricted by choices made in the Pentium Pro era.

    No way. There's no possible way they're still beholden to any decisions made that far back. For one thing, their toolchain has probably changed at least a couple times, since then. But there's also no way they're going to carry baggage that's either not pulling its weight or is otherwise a bottleneck for *that* long. Anything that's an impediment is going to get dropped, sooner or later.

    > As for the E-cores, if I'm not mistaken, they're at greater perf/watt than Skylake

    Gracemont is made on a different node than Skylake. If you backported it to the original 14 nm node that was Skylake's design target, they wouldn't be as fast or efficient.

    > why not scale it up a bit more, and by the time it reaches GC/Zen 3 IPC,
    > it may well end up doing so with less power.

    Okay, so even if you make everything bigger and it can reach Golden Cove's IPC without major parts being redesigned, it's not going to clock as high. Plus, you're going to lose some efficiency, because things like OoO structures scale nonlinearly in perf/W. And once you pipeline it and do the other things needed for it to reach Golden Cove's clock speeds, it's going to lose yet more efficiency, probably converging on Golden Cove's perf/W.

    There are ways you design for power-efficiency that are fundamentally different from designing for outright performance. You don't get a high-performance core by just scaling up an efficiency-optimized core.
  • GeoffreyA - Monday, November 8, 2021 - link

    Well, you've stumped me on most points. Nonetheless, old choices can survive pretty long. I've got two examples. Can't find any more at present. The instruction fetch bandwidth of 16 bytes, finally doubled in Golden Cove, goes all the way back to Pentium Pro. That could've been more related to the limitations of x86 decoding, though. Then, register reads were limited to two or three per clock cycle, going back to Pentium Pro, and only fixed in Sandy Bridge. Those are small ones, but it goes to show.

    I would say, Gracemont is different enough for it to diverge from Golden Cove in terms of perf/watt. One basic difference is that it's using a distributed scheduler design (following in the footsteps of the Athlon, Zen, and I believe the Pentium 4), compared to Pentium Pro-Golden Cove's unified scheduler. Then, it's got 17 execution ports, more than Zen 3's 14 and GC's 12. It's ROB is 256 entries, equal to Zen 3. Instruction boundaries are being marked, etc., etc. It's clock speed is lower? Well, that's all right if its IPC is higher than frequency-obsessed peers. I think descendants of this core could baffle both their elder brothers and the AMD competition.
  • GeoffreyA - Monday, November 8, 2021 - link

    Sorry for all the it's! Curse that SwiftKey!
  • mode_13h - Tuesday, November 9, 2021 - link

    > it's got 17 execution ports

    That's for simplicity, not by necessity. Most CPUs map multiple different sorts of operations per port, but Gracemont is probably designed in some way that made it cheaper for them just to have dedicated ports for each. I believe its issue bandwidth is 5 ops/cycle.

    > It's clock speed is lower? Well, that's all right if its IPC is higher than frequency-obsessed peers.

    It would have to be waaay higher, in order to compensate. It's not clear if that's feasible or the most efficient route to deliver that level of performance.

    > I think descendants of this core could baffle both their elder brothers and the AMD competition.

    In server CPUs? Quite possibly. Performance per Watt and per mm^2 (which directly correlates with perf/$) could be extremely competitive. Just don't expect it to outperform anyone's P-cores.
  • GeoffreyA - Wednesday, November 10, 2021 - link

    I'm out of answers. I suppose we'll have to wait and see how the battle goes. In any case, what is needed is some new paradigm that changes how CPUs operate. Clearly, they're reaching the end of the road. Perhaps the answer will come from new physics. But I wouldn't be surprised if there's some fundamental limit to computation. That's a thought.
