AMD’s 2022-2024 Client GPU Roadmap: RDNA 3 This Year, RDNA 4 Lands in 2024
by Ryan Smith on June 9, 2022 4:40 PM EST
Among the slew of announcements from AMD today around their 2022 Financial Analyst Day, the company is offering an update to their client GPU (RDNA) roadmap. Like the company’s Zen CPU architecture roadmap, AMD has been keeping a two-year horizon here, essentially showing what’s out, what’s about to come out, and what’s going to be coming out in a year or two. That means today’s update gives us our first glance at what will follow RDNA 3, which itself was announced back in 2020.
With AMD riding a wave of success with their current RDNA 2 architecture products (the Radeon RX 6000 family), the company is looking to keep up that momentum as they shift towards the launch of products based on their forthcoming RDNA 3 architecture. And while today’s roadmap update from AMD is a high-level one, it nonetheless offers us the most detailed look yet into what AMD has in store for their Radeon products later this year.
RDNA 3: 5nm with Next-Gen Infinity Cache & Chiplets
First and foremost, AMD is targeting a greater-than 50% performance-per-watt uplift versus RDNA 2. This is a similar uplift as they saw moving from RDNA (1) to RDNA 2, and while such a claim from AMD would have seemed ostentatious two years ago, RDNA 2 has given AMD’s GPU teams a significant amount of renewed credibility.
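Those generational claims compound. As a quick arithmetic sketch, treating each “greater-than 50%” uplift as exactly 1.5x, which is an illustrative assumption on our part rather than an AMD figure, shows roughly where RDNA 3 would land relative to the original RDNA:

```python
# Illustrative only: models AMD's ">50% perf-per-watt" claims as
# exactly 1.5x per generation, which is our assumption, not AMD's number.
def compound_uplift(per_gen_gains):
    """Multiply per-generation perf/W gains into one cumulative factor."""
    total = 1.0
    for gain in per_gen_gains:
        total *= gain
    return total

# RDNA 1 -> RDNA 2 (~1.5x) followed by RDNA 2 -> RDNA 3 (claimed >1.5x)
print(f"{compound_uplift([1.5, 1.5]):.2f}x vs. RDNA 1")  # 2.25x vs. RDNA 1
```

In other words, if both claims hold, RDNA 3 would deliver well over twice the performance per watt of the first RDNA parts from 2019.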
Thankfully for AMD, unlike the 1-to-2 transition, they don’t have to find a way to come up with a 50% uplift based on architecture and DVFS optimizations alone. RDNA 3 will be built on a 5nm process (TSMC’s, no doubt), which is a full node improvement from the TSMC N7/N6 based Navi 2x GPU family. As a result, AMD will see a significant efficiency improvement from that alone.
But with that said, these days a single node jump on its own can’t deliver a 50% perf-per-watt improvement (RIP Dennard scaling). So there are several architecture improvements planned for RDNA 3. This includes the next generation of AMD’s on-die Infinity Cache, and what AMD is terming an optimized graphics pipeline. According to the company, the GPU compute unit (CU) is also being rearchitected, though to what degree remains to be seen.
But the biggest news of all on this front is that, confirming a year’s worth of rumors and several patent applications, AMD will be using chiplets with RDNA 3. To what degree, AMD isn’t saying, but the implication is that at least one GPU tier (as we know it) is moving from a monolithic GPU to a chiplet-style design, using multiple smaller chips.
Chiplets are in some respects the holy grail of GPU construction, because they give GPU designers options for scaling up GPUs past today’s die size (reticle) and yield limits. That said, it has remained a holy grail because the immense amount of data that must be passed between different parts of a GPU (on the order of terabytes per second) is very hard to move between chips – and moving it is essential if a multi-chip GPU is to present itself as a single device. We’ve seen Apple tackle the task by essentially bridging two M1 SoCs together, but it’s never been done with a high-performance GPU before.
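To put “terabytes per second” in perspective, here’s a back-of-envelope bandwidth calculation. The link width and clock below are hypothetical values chosen for illustration, not details of AMD’s design:

```python
# Hypothetical numbers for illustration -- not AMD's actual interconnect.
def link_bandwidth_tb_s(width_bytes, clock_ghz):
    """Peak bandwidth of a parallel link: (bytes per clock) x (clocks per second)."""
    gb_per_s = width_bytes * clock_ghz  # bytes/clock * 1e9 clocks/s = GB/s
    return gb_per_s / 1000.0            # GB/s -> TB/s

# e.g. a 1024-byte-wide on-package fabric running at 2.0 GHz
print(f"{link_bandwidth_tb_s(1024, 2.0):.3f} TB/s")  # 2.048 TB/s
```

Even a very wide on-package link only reaches those figures at high clocks, which is why simple organic-substrate routing (as in Zen 2/3) is unlikely to suffice for a split GPU.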
Notably, AMD calls this an “advanced” chiplet design. That moniker tends to get thrown around when a chip is being packaged using some kind of advanced, high-density interconnect such as EMIB, which differentiates it from simpler designs such as Zen 2/3 chiplets, which merely route their signals through the organic packaging without any enhanced technologies. So while we’re eagerly awaiting further details of what AMD is doing here, it wouldn’t at all be surprising to find out that AMD is using a form of Local Si Interconnect (LSI) technology (such as the Elevated Fanout Bridge used for the MI200 family of accelerators) to directly and closely bridge two RDNA 3 chiplets.
RDNA 4: Furthering AMD’s Performance & Efficiency in 2024
And while AMD prepares to bring RDNA 3-based GPUs to the market, the company is already hard at work on its successor.
RDNA 4, as it’s being aptly named, will be AMD’s next-generation GPU architecture for 2024. Unlike today’s Zen 5 reveal, we’re getting almost no details here – though that was the case for the RDNA 3 reveal in 2020 as well. As a result, there’s not a whole lot to dissect at this moment about the architecture other than the name.
The one thing we do know is that RDNA 4 GPUs will be manufactured on what AMD is terming an “advanced node”, which would put it beyond the 5nm node being used for RDNA 3. AMD made a similarly obfuscated disclosure in 2020 for RDNA 3, and as was the case back then, AMD is seemingly keeping the door open to making a final decision later, when the state of fabs for the 2024 timeframe is better established. One of TSMC’s 3nm nodes would be the ideal outcome here; however, a 4nm node is not off the table – especially if AMD has to fight for capacity. (As cool as consumer GPUs are, other types of products tend to be more profitable on a per-mm² basis.)
Finally, like AMD’s Zen 5 architecture, RDNA 4 is expected to land in 2024. With AMD having established a pretty consistent two-year GPU cadence in recent years, a launch in the latter half of 2024 is not an unreasonable guess. Though there’s still a lot of time to go until we reach 2024.
mode_13h - Sunday, June 12, 2022
> M1 Ultra is beating the pants off both Nvidia and AMD in perf/watt.
As a 5 nm chip, it should be compared against other 5 nm chips. So, you'll have to wait for RDNA 3 and Lovelace mobile GPUs, if you want to know which architecture is truly more efficient.
Note: I say *mobile* GPUs, because the Ultra is using what's essentially a laptop chip. Desktop GPUs operate well above their peak efficiency point. So, it's not until we have them in mobile form that we can make meaningful efficiency comparisons.
Kangal - Thursday, June 30, 2022
This is true.
At the end, it'll come down mainly between "Nvidia's mature support" Versus "Apple's vertical optimisations". Apple's solution should win-out in the long run, but it will be very interesting to see how far away that is. Still, we have to compare products that are available in the here-and-now, and plot the performance against the battery life.
So I'm mostly curious about a direct/indirect comparison:
Resident Evil 8, Metal3, macOS 13/Ventura, M2 Max chipset, 32cu iGPU, 32GB uniRAM
Resident Evil 8, Vulkan 1.3, Windows10 Pro, AMD r7-6800H, RX-6800S-8gb, 16GB RAM
Resident Evil 8, DirectX 12.5, Windows11, Intel Core i7-1280P, RTX-3070Q-8gb, 16GB RAM
e.g. 2022 16in MacBook Pro, vs, 15in ASUS Zephyrus G15, vs, 17in Razer Blade
Khanan - Friday, June 10, 2022
Again that annoying “RNDA” typo, this time in an article. :D
brucethemoose - Saturday, June 11, 2022
Would this imply big driver changes too?
I hear tiled rendering GPUs are easier to split up and tend to conserve memory (and therefore interconnect??) bandwidth, but AFAIK that’s not what the AMD drivers do now.
Surely this will manifest in the Linux drivers too, and what they change will be public.
mode_13h - Sunday, June 12, 2022
All I know about that is Vega first added support for tiled rendering, with the DSBR. I’m not sure whether RDNA has some version of that engine, or if it uses a different approach.
However, for Infinity Cache to work as well as it does, one would expect they're doing TBDR.
Khanan - Sunday, June 12, 2022
I don’t think so. I bet they are trying hard to make this GPU work and be seen by the driver as a single chip, so any driver optimizations would be towards the new architecture, not the chiplet design per se.
Victor_Voropaev717 - Tuesday, June 14, 2022
I think it is interesting.