Back in August, the United States Department of Energy and Cray announced plans for a third United States exascale supercomputer, El Capitan. Scheduled to be installed at Lawrence Livermore National Laboratory (LLNL) in early 2023, the system is intended primarily (but not exclusively) for use by the National Nuclear Security Administration (NNSA), which uses supercomputers in its ongoing nuclear weapons modeling. At the time, the DOE and LLNL confirmed that they would be buying a Shasta system from Cray (now part of HPE); however, the announcement didn't go into any detail about what hardware would actually be filling one of Cray's very flexible supercomputers.

But as of today, the wait is over. This afternoon the DOE and HPE are announcing the architectural details of the supercomputer, revealing that AMD will be providing both the CPUs and the accelerators (GPUs), as well as revising the performance estimate for the system. Already expected to be the fastest of the US's exascale systems, El Capitan was originally commissioned as a 1.5 exaflop system seven months ago. However, thanks to some late configuration changes, the DOE now expects the system to reach 2 exaflops once it's fully installed, which would cement its place at the top of the US's supercomputer inventory.

Overall, El Capitan is the second (and apparently final) system being built as part of the US DOE's CORAL-2 program for supercomputers. Like the similar Frontier system, El Capitan comes with a $600 million price tag and is intended to ensure the US's leadership in supercomputers in the exascale era. LLNL will be using the system to replace Sierra, their current IBM Power 9 + NVIDIA Volta supercomputer. All told, El Capitan will be 16 times more powerful than the system it replaces. LLNL will be using it primarily for nuclear weapons modeling – substituting for actual weapons testing – while the system will also see secondary use as a research system in other fields, particularly those where machine learning can be applied.

US Department of Energy Exascale Supercomputers
                      El Capitan                  Frontier                 Aurora
CPU Architecture      AMD EPYC "Genoa" (Zen 4)    AMD EPYC (Future Zen)    Intel Xeon Scalable
GPU Architecture      Radeon Instinct             Radeon Instinct          Intel Xe
Performance (RPEAK)   2.0 EFLOPS                  1.5 EFLOPS               1 EFLOPS
Power Consumption     <40MW                       ~30MW                    N/A
Nodes                 N/A                         100 Cabinets             N/A
Laboratory            Lawrence Livermore          Oak Ridge                Argonne
Vendor                Cray                        Cray                     Intel
Year                  2023                        2021                     2021

El Capitan is the second exascale supercomputer win for AMD, who is also providing the CPUs and GPUs behind the 1.5 exaflops Frontier system for Oak Ridge National Laboratory. And indeed, at a high level El Capitan looks a whole lot like Frontier from a hardware perspective. With Cray serving as the prime contractor on both systems, El Capitan and Frontier are Cray Shasta systems, employing AMD's processors along with Cray's cabinets and their Slingshot interconnect technology. However, in an interesting turn of events, LLNL is being just a bit more forthcoming about what specific hardware will be in their new supercomputer.

On the CPU side of matters, AMD will be supplying a standard version of their Zen 4-based “Genoa” EPYC processor. As it’s still two generations out from AMD’s current wares, the amount of information on Zen 4/Genoa is limited, but AMD is promising support for next-generation memory, Infinity Fabric 3, as well as broad promises of both single and multi-threaded performance leadership. Notably, this is a greater level of detail on the CPU than we currently have for Frontier, which is using an unspecified and customized next-generation EPYC CPU.

Meanwhile on the GPU side of matters, AMD and Cray are continuing to hold their cards rather close. While the companies are confirming that this will use a next-generation AMD GPU using a new architecture, they aren’t naming the architecture or offering too much in the way of details about it. For now, what they are saying is that these GPUs will be using next-generation HBM for their memory, and that they’ll bring support for mixed precision compute for improved deep learning performance.
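To illustrate why mixed precision helps deep learning workloads (this is a generic NumPy sketch of the principle, not AMD's actual hardware implementation): values can be stored in half precision (FP16) to save memory and bandwidth, while sums are accumulated in single precision (FP32) so that rounding error doesn't swamp the result.

```python
import numpy as np

# 10,000 FP16 values of 0.1 (~0.09997559 once rounded to half precision);
# the exact sum should be close to 1000.
x = np.full(10000, 0.1, dtype=np.float16)

# Pure FP16 accumulation: once the running sum grows large, each 0.1
# addend falls below half precision's resolution and the sum stalls.
s_half = np.float16(0.0)
for v in x:
    s_half = np.float16(s_half + v)

# Mixed precision: FP16 storage, FP32 accumulation, as GPU matrix
# engines typically do for deep learning.
s_mixed = np.float32(0.0)
for v in x:
    s_mixed = s_mixed + np.float32(v)

print(float(s_half), float(s_mixed))  # FP16 sum stalls far short of 1000
```

The storage format is the same in both cases; only the accumulator width changes, which is why hardware support for mixed precision can deliver FP16-class throughput without FP16-class error.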

On the whole, these broad specifications are very close to the GPU slated to be used in Frontier, so El Capitan may very well be using the same GPU, or at least a further derivative of it. From the nature of AMD’s comments about the part, it sounds like whatever it is, we should expect to find out more architectural details about it soon.

But perhaps the biggest part of today's reveal is the interconnect. For the first time AMD is naming their 3rd generation Infinity Fabric, which will be used to connect the processors within each blade. Like Frontier, El Capitan will be running in a 4:1 configuration, with four GPUs hooked up to each CPU. For Infinity Fabric 3.0, AMD is promising further improvements to inter-chip bandwidth and latency. However, the most interesting claim is that these IF 3.0 device nodes will support unified memory across the CPU and GPU, which is something AMD doesn't offer today. Indeed, even Frontier is only slated to offer coherency between the processors, which is a step below a true unified memory model. The devil is in the details of course – a unified memory system does not necessarily mean fast access to other devices' memory – but this stands to be a major leap for AMD, as a unified memory system can both make such a system easier to program and improve its performance when running heterogeneous workloads.

Finally, as previously mentioned, tying together the nodes will be Cray’s own Slingshot interconnect. Among other things, Slingshot supports adaptive routing, congestion management, and quality-of-service features. The interconnect is capable of 200Gb/sec per port, with individual blades incorporating a port for each GPU in the blade so that other nodes can directly read and write data to a GPU’s memory.

Unfortunately, the DOE and Cray are not going into quite as much detail on the completed layout of the system. El Capitan is slated to use less than 40MW of power – and we're told it'll be "fairly substantially under that" – however at this time the DOE isn't disclosing the total number of cabinets. But to put things in perspective, Frontier is slated to use 100 Shasta cabinets with a total power budget lower than El Capitan's. So we wouldn't be too surprised to ultimately find out that part of the reason El Capitan is 33% faster than Frontier is the DOE throwing more hardware at it and ordering more cabinets. But whatever the number, it's going to be enough that El Capitan will be using direct liquid cooling.

Meanwhile, it’s interesting to note that in their press conference, LLNL took the time to mention that part of the performance boost for El Capitan over its initial order was due to the group’s procurement plan. LLNL noted that they used a “late-binding” strategy for El Capitan, deciding on the (Shasta) architecture early, and then picking the specific processors at a later point – presumably about as late as they could wait to make the decision. Ultimately LLNL cites this as giving them better results in the end, as they were able to pick the fastest hardware that could be made available. In other words, while the DOE and LLNL announced El Capitan back in August, they only recently decided that it would be AMD filling it.

Overall, El Capitan marks an important second exascale supercomputer win for AMD, while Cray will now be involved in all three US exascale systems. So it’s a big win for both vendors, and a continuation of momentum for AMD, who only just scored its first big supercomputer win in a long while with Frontier last year.

The fact that El Capitan is a derivative of Frontier also means that with all three exascale systems now locked in, it will be NVIDIA who finds themselves on the outside looking in for this generation. As we noted with the Frontier announcement, the Intel Aurora and the AMD Frontier/El Capitan systems are coming from full-service processor vendors that supply both CPUs and GPUs. Current-generation systems like Summit use mixed vendors – e.g. IBM + NVIDIA – so the move to integrated vendors is a big shift for these CPU + accelerator systems. And while it makes a lot of sense for LLNL to order a copy of one of the other exascale systems in the name of efficiency, it should be noted that US DOE supercomputer contracts are as much political as they are technical. The US has a vested interest in supporting a domestic supercomputer industry and ensuring there are viable competitors to help keep costs down (there used to be several), so with three major processor alliances/vendors in the US, someone was bound to end up the odd man out.

At any rate, El Capitan is scheduled for delivery in early 2023. And with AMD’s annual Financial Analyst Day scheduled for tomorrow, hopefully we’ll be getting a better picture of where Genoa fits into AMD’s roadmaps, and perhaps a bit more on what to expect on the hardware that will eventually be powering the world’s fastest supercomputer.

Sources: LLNL, HPE

53 Comments

  • DominionSeraph - Wednesday, March 4, 2020 - link

    Half a billion dollars for a supercomputer with no drivers...
  • extide - Wednesday, March 4, 2020 - link

    Oh my sweet summer child...
  • PeachNCream - Wednesday, March 4, 2020 - link

No drivers should not be a problem since this computing system is not going to operate on public roads or streets. It will likely sit indoors at a fixed physical location, and transport to that location will happen as parts that are assembled on site, so that piece of the puzzle will likely depend on a shipping company that supplies its own vehicle operators.
  • Irata - Thursday, March 5, 2020 - link

    Hehe...great reply
  • eva02langley - Wednesday, March 4, 2020 - link

ROFL... this is the only thing Nvidia fanboys can come up with... drivers... I have zero issues with my 5700XT... same with Steve who tested dozens of them... ZERO ISSUES.

    https://www.youtube.com/watch?v=1uynVO4ZXl0
  • eva02langley - Wednesday, March 4, 2020 - link

    By the way, these are not gaming GPUs on windows, they are used for creating massive simulations and rendering in server environment.
  • 69369369 - Wednesday, March 4, 2020 - link

    Gamer(TM) detected.
  • TeXWiller - Wednesday, March 4, 2020 - link

    Funny, but maybe this quote will alleviate the concerns related to the state of the software environment: “As part of this procurement, the Department of Energy has provided additional funds beyond the purchase of the machine to fund non-recurring engineering efforts and one major piece of that is to work closely with AMD on enhancing the programming environment for their new CPU-GPU architecture.”

    It will run nuclear crisis, I tell you!
  • jerry_watson14 - Wednesday, March 4, 2020 - link

So true, what gets me is that AMD found a customer who will pay for their driver and software improvements on top of the equipment they are buying. We consumers will benefit greatly over the next 3-4 years. Maybe AMD will develop programming tools to accelerate x86 workloads for the Zen arch, along with GPU drivers or MCM for Instinct GPUs before Nvidia! And I thought 2020 was going to be a very good year with Zen 3 and RDNA2. Man, look at what we have coming in 2023! Intel can't even come close to Zen 3! What are they going to do by 2023? I bet they ask congress for a government bailout! OMG! Too Big to Fail all over again! What do you folks think? Should congress give a bailout to Intel in the near future? What about all those jobs worldwide if Intel went bankrupt? I remember when they said the banks were too big to fail. Look at how much that has cost US.
  • Makaveli - Wednesday, March 4, 2020 - link

    Government Bail out for intel by 2023 ?

    Did you look at how much money they have lol.
