AMD Announces Radeon RX 5700 XT & RX 5700: The Next Gen of AMD Video Cards Starts on July 7th at $449/$379
by Ryan Smith on June 10, 2019 7:20 PM EST
A Quick Note on Architecture & Features
With pages upon pages of architectural documents still to get through in only a few hours, for today’s launch news I’m not going to have the time to go in depth on new features or the architecture. So I want to very briefly hit the high points on what the major features are, and also provide some answers to what are likely to be some common questions.
Starting with the architecture itself, one of the biggest changes for RDNA is the width of a wavefront, the fundamental group of work. GCN in all of its iterations was 64 threads wide, meaning 64 threads were bundled together into a single wavefront for execution. RDNA drops this to a native 32 threads wide. At the same time, AMD has expanded the width of their SIMDs from 16 slots to 32 (aka SIMD32), meaning the size of a wavefront now matches the SIMD size. This is one of AMD’s key architectural efficiency changes, as it helps them keep their SIMD slots occupied more often. It also means that a wavefront can be passed through the SIMDs in a single cycle, instead of over 4 cycles on GCN parts.
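The issue-rate difference described above can be sketched with some trivial arithmetic. This is purely illustrative (real GPU scheduling is far more complex than a single division), but it shows why matching wavefront width to SIMD width matters:

```python
# Illustrative only: how many SIMD cycles it takes to issue one wavefront,
# using the wavefront and SIMD widths described above. Not a hardware model.

def issue_cycles(wavefront_width: int, simd_width: int) -> int:
    """Cycles needed to push one wavefront through a SIMD of the given width."""
    return wavefront_width // simd_width

gcn_cycles = issue_cycles(wavefront_width=64, simd_width=16)   # GCN: wave64 on SIMD16
rdna_cycles = issue_cycles(wavefront_width=32, simd_width=32)  # RDNA: wave32 on SIMD32

print(f"GCN:  {gcn_cycles} cycles per wavefront")   # 4
print(f"RDNA: {rdna_cycles} cycle per wavefront")   # 1
```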
In terms of compute, there are not any notable feature changes here as far as gaming is concerned. How things work under the hood has changed dramatically at points, but from the perspective of a programmer, there aren’t really any new math operations here that are going to turn things on their head. RDNA of course supports Rapid Packed Math (Fast FP16), so programmers who make use of FP16 will get to enjoy those performance benefits.
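As a rough sketch of what Rapid Packed Math buys you: packing two FP16 operations into each FP32 lane doubles peak throughput. The CU count and boost clock below are the RX 5700 XT's announced figures, used here as assumptions for illustration rather than measurements:

```python
# Back-of-the-envelope peak throughput, assuming the RX 5700 XT's announced
# specs: 40 CUs, 64 lanes per CU, ~1.905 GHz boost clock. Illustrative only.

def peak_tflops(compute_units, lanes_per_cu, clock_ghz, ops_per_lane=2):
    # ops_per_lane=2 counts a fused multiply-add as two floating-point ops
    return compute_units * lanes_per_cu * clock_ghz * ops_per_lane / 1000

fp32 = peak_tflops(40, 64, 1.905)      # one FP32 op stream per lane
fp16 = fp32 * 2                        # two packed FP16 ops per FP32 lane

print(f"Peak FP32: {fp32:.2f} TFLOPS")  # ~9.75
print(f"Peak FP16: {fp16:.2f} TFLOPS")  # ~19.51
```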
With a single exception, there also aren’t any new graphics features. Navi does not include any hardware ray tracing support, nor does it support variable rate pixel shading. AMD is aware of the demand for these, and hardware support for ray tracing is on their roadmap for RDNA 2 (the architecture formerly known as “Next Gen”). But none of that is present here.
The one exception to all of this is the primitive shader. Vega’s most infamous feature is back, and better still it’s enabled this time. The primitive shader is compiler controlled, and thanks to some hardware changes to make it more useful, it now makes sense for AMD to turn it on for gaming. Vega’s primitive shader, though fully hardware functional, was difficult to get a real-world performance boost from, and as a result AMD never exposed it on Vega.
Unique among consumer parts, the new 5700 series cards support PCI Express 4.0. Designed to go hand-in-hand with AMD’s Ryzen 3000 series CPUs, which are introducing support for the feature as well, PCIe 4.0 doubles the amount of bus bandwidth available to the card, rising from ~16GB/sec to ~32GB/sec. The real world performance implications of this are limited at this time, especially for a card in the 5700 series’ performance segment. But there are situations where it will be useful, particularly on the content creation side of matters.
Finally, AMD has partially updated their display controller. I say “partially” because while it’s technically an update, they aren’t bringing much new to the table. Notably, HDMI 2.1 support isn’t present – nor is the more limited HDMI 2.1 Variable Refresh Rate feature. Instead, AMD’s display controller is a lot like Vega’s: DisplayPort 1.4 and HDMI 2.0b, including support for AMD’s proprietary Freesync-over-HDMI standard. So AMD does have variable refresh capabilities for TVs, but it isn’t the HDMI standard’s own implementation.
The one notable change here is support for DisplayPort 1.4 Display Stream Compression. DSC, as implied by the name, compresses the image going out to the monitor to reduce the amount of bandwidth needed. This is important going forward for 4K@144Hz displays, as DP1.4 itself doesn’t provide enough bandwidth for them (leading to other workarounds such as NVIDIA’s 4:2:2 chroma subsampling on G-Sync HDR monitors). This is a feature we’ve talked about off and on for a while, and it’s taken some time for the tech to really get standardized and brought to a point where it’s viable in a consumer product.
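A quick sanity check on why 4K@144Hz outruns DP1.4: the active-pixel bandwidth alone (ignoring blanking intervals, which add more on top) already exceeds HBR3's usable payload:

```python
# Uncompressed video bandwidth vs. DisplayPort 1.4 (HBR3) payload capacity.
# Active pixels only; real signaling needs blanking on top of this.

def video_gbps(width, height, refresh_hz, bits_per_pixel=24):
    return width * height * refresh_hz * bits_per_pixel / 1e9

hbr3_payload = 4 * 8.1 * (8 / 10)  # 4 lanes x 8.1 Gb/s, 8b/10b encoding -> 25.92 Gb/s
needed = video_gbps(3840, 2160, 144)  # ~28.7 Gb/s before blanking

print(f"4K@144Hz needs : {needed:.1f} Gb/s")
print(f"DP1.4 provides : {hbr3_payload:.2f} Gb/s")  # short even of the raw pixel data
```

DSC's roughly 3:1 visually lossless compression brings the required rate comfortably under the link's capacity.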
arakan94 - Monday, June 10, 2019 - link
AMD needs to keep reasonable margins - at least 45%. They tried to offer stuff at very low prices for a long time and often had significantly better value than Nvidia, and still didn't gain very much market share (volume would offset the lower margin). So I imagine they thought "fuck it" and went for normal pricing this time - same thing for Ryzen 3000. No more charity prices.
If Nvidia lowers prices, then AMD will lower theirs as well, but there is no reason to aggressively undercut them. I mean, look at average gamers - Nvidia releases overpriced shit with often worse value than Pascal and people still buy it. How do you compete with that mentality?
Also, inflation. $300 now isn't $300 ten years ago.
Meteor2 - Tuesday, June 11, 2019 - link
Yeah, but I think it’s widely understood that in tech, cash prices don’t go up. Maybe they will no longer go down over time as they have in previous decades, but people just don’t accept steep price rises for tech. Apple’s iPhone sales are dropping for a reason, even with the boom of the Chinese middle class.
rarson - Thursday, June 20, 2019 - link
"Yeah, but I think it’s widely understood that in tech, cash prices don’t go up. Maybe they will no longer go down over time as they have in previous decades, but people just don’t accept steep price rises for tech."
That's absurd. Prices do indeed go up, and they have for years. 2 decades ago, a high-end consumer graphics card was $299. Even if we adjust for inflation, that's only $469. That's a far cry from the $1200 that the 2080 Ti commands. Granted, the performance range of graphics hardware is a lot greater now than it was back then, but that doesn't change the fact that people are indeed willing to pay greater prices for better performance.
Qasar - Friday, June 21, 2019 - link
"but that doesn't change the fact that people are indeed willing to pay greater prices for better performance"
Maybe you are willing to pay, but not all of us. At some point there has to be a price that is just too high, and the prices that Nvidia charges for the 2070 and up are at that point; the 2080 and Titan are well past it.
Meteor2 - Sunday, June 30, 2019 - link
I would posit that the FX 5900 and Radeon 9800, which launched at $499, are better reference points. That's when the top of the range was clearly established.
Nvidia have jacked that up to $699 and more; too much.
wumpus - Tuesday, June 11, 2019 - link
Margins don't help if you don't have volume. They still need to cover the NRE (design costs) of the boards, and these prices aren't helping. It's pretty bad when your competitor can slap on 20% more transistors, pass the "raytracing tax" on to consumers, and you can't really compete with those boards.
Gastec - Tuesday, June 11, 2019 - link
How many Watts does your Radeon VII consume?
eva02langley - Thursday, June 13, 2019 - link
Similar to the 2080. You can even achieve better power than the 2080 when undervolting the card.
Basically, Vega at 7nm is roughly on par with Turing in terms of power, which would make RDNA around 25-35% more efficient than Turing.
That's with the numbers provided, of course. We will find out in July.
Meteor2 - Tuesday, June 11, 2019 - link
Let’s be clear.
The 5700XT is $449 vs $499 for the 2070, and is very slightly faster at 1440p. FPS/$ is better.
The 5700 is $379 vs $349 for the 2060, and is slightly faster. FPS/$ about the same.
These two cards are competitive. Nothing more, nothing less.
webdoctors - Tuesday, June 11, 2019 - link
Except they're being released a year later, and all over the web - even on this site - ppl kept saying the AMD cards would be hugely cheaper and ppl were getting ripped off. Now we find the Radeon VII and other new cards are priced about the same as Nvidia's cards.
You mean all the anonymous kids posting on this site comments section were wrong? We're not going to get cards below cost? INCONCEIVABLE!