A Quick Note on Architecture & Features

With pages upon pages of architectural documents still to get through in only a few hours, I'm not going to have time to go in depth on the new features or the architecture for today's launch news. So I want to very briefly hit the high points of the major features, and also answer what are likely to be some common questions.

Starting with the architecture itself, one of the biggest changes for RDNA is the width of a wavefront, the fundamental group of work. GCN in all of its iterations was 64 threads wide, meaning 64 threads were bundled together into a single wavefront for execution. RDNA drops this to a native 32 threads wide. At the same time, AMD has expanded the width of their SIMDs from 16 slots to 32 (aka SIMD32), meaning the size of a wavefront now matches the SIMD size. This is one of AMD’s key architectural efficiency changes, as it helps them keep their SIMD slots occupied more often. It also means that a wavefront can be passed through the SIMDs in a single cycle, instead of over 4 cycles on GCN parts.
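
To put that change in concrete terms, here's a quick back-of-the-envelope sketch (Python, purely illustrative) of how many cycles it takes to issue one wavefront through a single SIMD on each architecture:

```python
# Back-of-the-envelope only: cycles to issue one wavefront through one SIMD
def issue_cycles(wavefront_width, simd_width):
    return wavefront_width // simd_width

print("GCN  (wave64 on SIMD16):", issue_cycles(64, 16), "cycles")  # 4 cycles
print("RDNA (wave32 on SIMD32):", issue_cycles(32, 32), "cycle")   # 1 cycle
```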

In terms of compute, there are not any notable feature changes here as far as gaming is concerned. How things work under the hood has changed dramatically at points, but from the perspective of a programmer, there aren’t really any new math operations here that are going to turn things on their head. RDNA of course supports Rapid Packed Math (Fast FP16), so programmers who make use of FP16 will get to enjoy those performance benefits.
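
For those curious what "packed" means here, the idea is that a 32-bit register is treated as two independent 16-bit lanes, letting two FP16 operations execute per slot. Here's a minimal Python illustration of the packing concept itself (a conceptual sketch only, not AMD's actual instruction set):

```python
import struct
import numpy as np

# Two half-precision (FP16) values...
a, b = np.float16(1.5), np.float16(-2.25)

# ...stored side by side in a single 32-bit word, the same way packed FP16
# math treats one 32-bit register as two independent 16-bit lanes.
packed = struct.unpack("<I", np.array([a, b], dtype="<f2").tobytes())[0]
print(f"packed 32-bit word: 0x{packed:08x}")

# Unpacking recovers both values from the one 32-bit container.
lo, hi = np.frombuffer(struct.pack("<I", packed), dtype="<f2")
print(lo, hi)  # 1.5 -2.25
```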

With a single exception, there also aren’t any new graphics features. Navi does not include any hardware ray tracing support, nor does it support variable rate pixel shading. AMD is aware of the demands for these, and hardware support for ray tracing is in their roadmap for RDNA 2 (the architecture formerly known as “Next Gen”). But none of that is present here.

The one exception to all of this is the primitive shader. Vega’s most infamous feature is back, and better still, it’s enabled this time. Vega’s primitive shader, though functional in hardware, was difficult to extract a real-world performance boost from, and as a result AMD never exposed it on that architecture. On RDNA the primitive shader is compiler-controlled, and thanks to some hardware changes that make it more useful, it now makes sense for AMD to turn it on for gaming.

Unique among consumer cards, the new 5700 series parts support PCI Express 4.0. Designed to go hand-in-hand with AMD’s Ryzen 3000 series CPUs, which are introducing support for the feature as well, PCIe 4.0 doubles the amount of bus bandwidth available to the card, rising from ~16GB/sec to ~32GB/sec. The real-world performance implications of this are limited at this time, especially for a card in the 5700 series’ performance segment. But there are situations where it will be useful, particularly on the content creation side of matters.
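
For reference, those figures fall straight out of the per-lane transfer rates; a quick sanity-check calculation (Python, illustrative only):

```python
# Rough math: usable bandwidth of a x16 link, per PCIe generation
def x16_bandwidth_gb_per_sec(transfer_rate_gt_per_sec):
    encoding = 128 / 130                                 # 128b/130b line-code overhead
    per_lane = transfer_rate_gt_per_sec * encoding / 8   # GT/s -> GB/s per lane
    return per_lane * 16

print(f"PCIe 3.0 x16: ~{x16_bandwidth_gb_per_sec(8):.1f} GB/sec")   # ~15.8
print(f"PCIe 4.0 x16: ~{x16_bandwidth_gb_per_sec(16):.1f} GB/sec")  # ~31.5
```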

Finally, AMD has partially updated their display controller. I say “partially” because while it’s technically an update, they aren’t bringing much new to the table. Notably, HDMI 2.1 support isn’t present, nor is the more limited HDMI 2.1 Variable Refresh Rate feature. Instead, AMD’s display controller is a lot like Vega’s: DisplayPort 1.4 and HDMI 2.0b, including support for AMD’s proprietary FreeSync-over-HDMI standard. So AMD does have variable refresh capabilities for TVs, but it isn’t the HDMI standard’s own implementation.

The one notable change here is support for DisplayPort 1.4 Display Stream Compression. DSC, as implied by the name, compresses the image going out to the monitor to reduce the amount of bandwidth needed. This is important going forward for 4K@144Hz displays, as DP1.4 itself doesn’t provide enough bandwidth for them (leading to other workarounds such as NVIDIA’s 4:2:2 chroma subsampling on G-Sync HDR monitors). This is a feature we’ve talked about off and on for a while, and it’s taken some time for the tech to really get standardized and brought to a point where it’s viable in a consumer product.
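
The bandwidth math behind that is straightforward; here's a rough sketch (Python, ignoring blanking intervals, so the true requirement is a bit higher still) of why uncompressed 4K@144Hz overshoots DP1.4's ~25.9Gbps of effective bandwidth:

```python
# Rough check: uncompressed 4K@144Hz vs. DisplayPort 1.4 (HBR3) bandwidth.
# Blanking intervals are ignored, so the real requirement is slightly higher.
def video_gbit_per_sec(width, height, refresh_hz, bits_per_pixel):
    return width * height * refresh_hz * bits_per_pixel / 1e9

HBR3_EFFECTIVE = 25.92  # Gbit/s usable on DP1.4 after 8b/10b encoding

for bpp, label in [(24, "8-bit RGB"), (30, "10-bit RGB")]:
    need = video_gbit_per_sec(3840, 2160, 144, bpp)
    verdict = "exceeds" if need > HBR3_EFFECTIVE else "fits in"
    print(f"4K@144Hz {label}: ~{need:.1f} Gbit/s ({verdict} ~{HBR3_EFFECTIVE} Gbit/s)")
```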

326 Comments

  • mode_13h - Tuesday, June 11, 2019 - link

    A year is still pretty new for a process node. It probably didn't become economically viable for GPU-sized dies until very recently.
  • CiccioB - Tuesday, June 11, 2019 - link

    Yes, and that's why AMD's balance sheet is so weak at the end of the quarter.
    GPU sales are dragging AMD's quarterly results down, as that division is losing a lot of money relative to the CPU division.
  • evernessince - Wednesday, June 12, 2019 - link

    Lol, we all know Nvidia set the pricing way back when Turing launched. Blaming AMD for pricing set 6 months ago by Nvidia is just asinine.
  • eva02langley - Thursday, June 13, 2019 - link

    And they offer twice the performance... the price/performance ratio is better than the RTX 2060's.
  • xrror - Monday, June 10, 2019 - link

    It's like... serious question here.

    Was/are Polaris and Navi actually that bad power/perf wise?
    Or
    Did nVidia hit it out of the park so hard with Maxwell and Pascal that nobody else can catch up?

    Either way it sucks for those of us who game and don't want to pay >$600 for a tangible upgrade from GTX 1070 level and/or to actually get usable 4K gaming.

    Pity the person who wants a good VR rig.

    (and no, I'm not an nVidia shill - I'd love to grab another AMD card, but whoever gets me a 4K gaming card for $400 first is gonna win it)
  • mode_13h - Monday, June 10, 2019 - link

    I think you're onto something. When Nvidia set about to design the Tegra X1, they had to focus on power-efficiency in a way they never did before. When they scaled up to a desktop GPU, this gave them a perf/W edge that ultimately translated into more perf. Just look at the performance gap between Kepler and Maxwell, even though they shared the same manufacturing node!

    AMD has taken a couple generations to wise up. It seems they are still on the journey.
  • V900 - Monday, June 10, 2019 - link

    Yes, pretty much. Maxwell and Pascal were that great, even with NVIDIA using an older/larger node than AMD.

    We’ll see what Intel brings to the GPU market, though.

    As for a tangible upgrade to the 1070, the RTX 2070 is available for $450-500 right now, so no, you wouldn't have to spend >$600.
  • CiccioB - Tuesday, June 11, 2019 - link

    Anyone can catch up, if they're willing to afford the cost of redoing their inefficient architecture.
    In going from Kepler to Maxwell, Nvidia deeply redesigned the entire architecture (making it a bit fatter, and so a little more expensive), but they knew that was what it took to create a better architecture.

    AMD started with GCN in 2012 and is only now delivering its "Maxwell" in 2019.
    And despite how far the technology has advanced, even beyond the 7nm process there are still things they lack, like all the new features Nvidia put into Maxwell, Pascal, and even more so into Turing.
    They've only just started to understand that memory compression is an advantage rather than wasted transistors. They are about 6 years behind from this point of view.
  • mode_13h - Tuesday, June 11, 2019 - link

    They're definitely not 6 years behind! They introduced tile rendering in Vega, which Nvidia first brought out in Maxwell. So, perhaps more like 2-3 years.
  • CiccioB - Wednesday, June 12, 2019 - link

    On geometry capacity they are 6 years behind.
    The same goes for memory compression, which lets Nvidia get by with about 33% less bandwidth, and which forced AMD to use expensive HBM on high-end cards to avoid putting an enormous, expensive bus on GPUs that are already fatter than the competition for the same performance.
    Not to mention the dual-projection feature and the voxel acceleration for better volumetric lighting and effects (which we only see through GameWorks extensions, as no console engine is designed to support them, because AMD has no dedicated acceleration for them and they would turn into a slideshow).
