A Quick Note on Architecture & Features

With pages upon pages of architectural documents still to get through in only a few hours, for today’s launch news I’m not going to have the time to go in depth on new features or the architecture. So I want to very briefly hit the high points on what the major features are, and also provide some answers to what are likely to be some common questions.

Starting with the architecture itself, one of the biggest changes for RDNA is the width of a wavefront, the fundamental group of work. GCN in all of its iterations was 64 threads wide, meaning 64 threads were bundled together into a single wavefront for execution. RDNA drops this to a native 32 threads wide. At the same time, AMD has expanded the width of their SIMDs from 16 slots to 32 (aka SIMD32), meaning the size of a wavefront now matches the SIMD size. This is one of AMD’s key architectural efficiency changes, as it helps them keep their SIMD slots occupied more often. It also means that a wavefront can be passed through the SIMDs in a single cycle, instead of over 4 cycles on GCN parts.
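The issue-rate change can be illustrated with a toy calculation (a sketch of the scheduling arithmetic described above, not vendor code):

```python
def issue_cycles(wavefront_width, simd_width):
    """Clock cycles to feed one wavefront through a SIMD, assuming one
    SIMD-width slice of the wavefront is issued per clock."""
    return -(-wavefront_width // simd_width)  # ceiling division

gcn = issue_cycles(wavefront_width=64, simd_width=16)   # GCN: wave64 on SIMD16
rdna = issue_cycles(wavefront_width=32, simd_width=32)  # RDNA: wave32 on SIMD32
print(gcn, rdna)  # 4 cycles vs. 1 cycle
```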

In terms of compute, there are not any notable feature changes here as far as gaming is concerned. How things work under the hood has changed dramatically at points, but from the perspective of a programmer, there aren’t really any new math operations here that are going to turn things on their head. RDNA of course supports Rapid Packed Math (Fast FP16), so programmers who make use of FP16 will get to enjoy those performance benefits.
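To put rough numbers on what packed FP16 buys, here is the peak-throughput arithmetic for an assumed RX 5700 XT-like configuration (2560 stream processors at a ~1.9GHz boost clock; these figures are estimates for illustration, not official specifications):

```python
def peak_tflops(stream_processors, clock_ghz, ops_per_lane):
    """Peak throughput in TFLOPS; an FMA counts as 2 floating-point ops."""
    return stream_processors * clock_ghz * 2 * ops_per_lane / 1000

fp32 = peak_tflops(2560, 1.905, ops_per_lane=1)  # ~9.75 TFLOPS
fp16 = peak_tflops(2560, 1.905, ops_per_lane=2)  # ~19.5 TFLOPS with packed math
```

Rapid Packed Math doubles `ops_per_lane` by executing two FP16 operations in each 32-bit lane, which is where the 2x FP16 rate comes from.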

With a single exception, there also aren’t any new graphics features. Navi does not include any hardware ray tracing support, nor does it support variable rate pixel shading. AMD is aware of the demand for these, and hardware ray tracing support is on their roadmap for RDNA 2 (the architecture formerly known as “Next Gen”). But none of that is present here.


The one exception to all of this is the primitive shader. Vega’s most infamous feature is back, and better still it’s enabled this time. The primitive shader is compiler controlled, and thanks to some hardware changes to make it more useful, it now makes sense for AMD to turn it on for gaming. Vega’s primitive shader, though fully hardware functional, was difficult to get a real-world performance boost from, and as a result AMD never exposed it on Vega.

Unique among consumer cards, the new 5700 series supports PCI Express 4.0. Designed to go hand-in-hand with AMD’s Ryzen 3000 series CPUs, which are introducing support for the feature as well, PCIe 4.0 doubles the bus bandwidth available to the card, from ~16GB/sec to ~32GB/sec. The real-world performance implications of this are limited at this time, especially for a card in the 5700 series’ performance segment. But there are situations where it will be useful, particularly on the content creation side of matters.
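The bandwidth arithmetic behind those figures, assuming a standard x16 link and the 128b/130b line coding that both PCIe 3.0 and 4.0 use:

```python
def x16_bandwidth_gbs(gt_per_s, lanes=16):
    """One-directional bandwidth in GB/s for a PCIe 3.0/4.0 x16 link."""
    encoding = 128 / 130  # 128b/130b line-coding efficiency
    return gt_per_s * lanes * encoding / 8  # GT/s -> GB/s

pcie3 = x16_bandwidth_gbs(8.0)   # PCIe 3.0 x16: ~15.8 GB/s
pcie4 = x16_bandwidth_gbs(16.0)  # PCIe 4.0 x16: ~31.5 GB/s
```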

Finally, AMD has partially updated their display controller. I say “partially” because while it’s technically an update, they aren’t bringing much new to the table. Notably, HDMI 2.1 support isn’t present – nor is the more limited HDMI 2.1 Variable Refresh Rate feature. Instead, AMD’s display controller is a lot like Vega’s: DisplayPort 1.4 and HDMI 2.0b, including support for AMD’s proprietary FreeSync-over-HDMI standard. So AMD does have variable refresh capabilities for TVs, but it isn’t the HDMI standard’s own implementation.

The one notable change here is support for DisplayPort 1.4 Display Stream Compression. DSC, as implied by the name, compresses the image going out to the monitor to reduce the amount of bandwidth needed. This is important going forward for 4K@144Hz displays, as DP1.4 itself doesn’t provide enough bandwidth for them (leading to workarounds such as NVIDIA’s 4:2:2 chroma subsampling on G-Sync HDR monitors). This is a feature we’ve talked about off and on for a while, and it’s taken some time for the tech to be standardized and brought to the point where it’s viable in a consumer product.
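A quick sketch of the bandwidth math shows why DSC matters for 4K@144Hz; the ~10% blanking overhead and the 3:1 compression ratio are illustrative assumptions rather than spec values:

```python
def video_gbps(h_pixels, v_pixels, refresh_hz, bits_per_pixel, blanking=1.10):
    """Approximate video bandwidth in Gbps, including blanking overhead."""
    return h_pixels * v_pixels * refresh_hz * bits_per_pixel * blanking / 1e9

DP14_PAYLOAD_GBPS = 25.92  # DP1.4 HBR3: 4 lanes x 8.1 Gbps, less 8b/10b coding

raw = video_gbps(3840, 2160, 144, bits_per_pixel=24)  # ~31.5 Gbps: too much
compressed = raw / 3  # DSC targets visually lossless ratios up to ~3:1
```

Uncompressed 4K@144Hz overshoots the DP1.4 payload rate, while even a modest DSC ratio brings it comfortably under the limit.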

AMD Announces Radeon RX 5700 XT & RX 5700

Comments

  • Korguz - Tuesday, June 11, 2019 - link

    SaberKOG91, yep.. that he is ...
  • Phynaz - Tuesday, June 11, 2019 - link

    When did Nvidia do that to mid-range? You mean the 2xxx cards that have a ton more features than AMD's cards? Guess what, features cost money to implement.

    What AMD gave their fans today was a $50 price reduction and a power usage increase over the GTX 1080.

    Wait for Polaris.
    Wait for Vega
    Wait for Navi

    The correct answer has always been buy Nvidia now and enjoy!
  • SaberKOG91 - Tuesday, June 11, 2019 - link

    You mean the consumer cards that Nvidia designed with datacenter features, then sold to you by inventing ways of using them that no one cares about? DLSS only exists so that Tensor cores aren't worthless to games. RTX only exists to sell high-margin Quadro cards. The 20 series barely improves on the 10 series for all of the other features of the card. It'll be years before any of the extended features of the RTX cards actually go mainstream in games. If Nvidia cared about performance, they would have just scaled up the silicon used in the 16XX cards and gotten a huge boost in gaming performance. Instead they stuck a bunch of RT and Tensor cores onto the die and sold you a workstation card that you can't even take advantage of. So all your old games are barely better, and none of your new games can use the new features for a year after launch. And rather than keep up with inflation, they jack the price up a few hundred dollars over last generation and tell you it's all worth it.

    You're just too stupid to see how badly they screwed you over.
  • CiccioB - Tuesday, June 11, 2019 - link

    You are saying that introducing new features to move the market beyond the now-old classic rasterization rendering method, with the costs that entails, is wrong, and that the best strategy would have been packing in more and more transistors to do the same old things just faster?

    If this Navi finally has that geometry boost, we will finally see games with more polygons. Finally. Since Kepler, Nvidia has been able to support more than twice the number of polygons GCN can, and with mesh shading in Turing they can now support more than 10x. But we are stuck with little more than Wii-level model complexity due to GCN and game engines/assets developed for consoles.

    We are way behind where we could be just because GCN can't keep up with all the new functionality and technology Nvidia has introduced over these years.
  • Spunjji - Tuesday, June 11, 2019 - link

    Pro tip, CiccioB - If you're starting a comment with something like "So you're saying", you're clearly signalling to anyone paying attention that you either:
    1) Didn't understand what the person was saying, or
    2) Are trying to deliberately misrepresent what the person was saying

    In this case, you're implying that the poster said creating new features is bad, while assuming that the cost Nvidia have attached to those features is necessary or inevitable.

    The truth is that while Ray Tracing will be a great addition to gaming when we have cards that can support it at reasonable performance levels, only the 2080Ti really makes the grade. That card costs significantly more than I paid for my entire gaming laptop. That's *not* a good value proposition by any stretch of the imagination.

    Nvidia could have introduced RTX at the ultra-high-end this generation and moved it downwards on the next - at least then we'd still have some good value from their "mid-range" cards. Instead they pushed those features down to a level where they don't make any sense and used that to justify deflating the perf/$ ratio of their products.

    Saber's argument is pretty sound - these features only really make sense for AI right now, but Nvidia made a bet that they could get their gamer fans to subsidize that product development for them. It's good business sense and I don't begrudge them doing it, I just begrudge people for buying into it as if it's somehow The Only Right Thing To Do.
  • CiccioB - Tuesday, June 11, 2019 - link

    I do not understand the "intro" to your worthless comment.
    There have been statements saying that creating new features is not good because it takes a long time for them to be adopted, so it's better to spend transistors accelerating what we have now (and have had since someone else introduced new features).

    Your statement here is worthless (and clearly expressed through red fanboyism):
    Nvidia could have introduced RTX at the ultra-high-end this generation and moved it downwards on the next [..]. Instead they pushed those features down to a level where they don't make any sense and used that to justify deflating the perf/$ ratio of their products.

    They started this generation with big dies to include those new features at whatever level they could, and you already claim they did it to justify something you simply hate, without waiting for the next generation, when the shrink may let those features scale down to the low-to-mid market.
    You are describing something achievable in a generational evolution of the architecture together with a die shrink, but your fanboyism in defense of something AMD could not achieve in 3 years (since the launch of Polaris) makes you claim Nvidia is bad because they haven't brought RTX to the mainstream in 6 months.

    If RTX is going to take a couple of years instead of an entire console cycle to be adopted, it is also because Nvidia wanted to sell expensive cards with those features.
    You are not obliged to buy them; you can just keep buying crappy GPUs on an obsolete architecture that consumes twice the power to get the same work done (minus the new features), if that makes you happy.

    It's a free market, and accusing a company of selling a more expensive product in an attempt to move the market ahead (rather than grinding it to a halt, as AMD has done since it introduced GCN) is clearly stupid, and just shows that you are angry that AMD, even with 7nm and 3 years of development, didn't manage to get where Nvidia is, both in terms of features (which are not only RTX) and in efficiency.
    Because yes, those fat, feature-rich Nvidia dies can do more work (even without using the new features) in the same wattage as AMD's new 7nm GPUs.

    The reality is this:
    AMD, with a process node of advantage, can't keep up with Nvidia's efficiency and feature list, and this is the real reason we have high prices, because 7nm is not cheap. Having shrunk GCN to get this Navi performance is another flop that will be paid for when Nvidia in turn shrinks Turing to new performance (and feature-rich) levels.
  • Korguz - Tuesday, June 11, 2019 - link

    CiccioB
    Your "statement is worthless" comment is also worthless, as Spunjji is correct: Nvidia could have kept RTX to the ultra high-end, say the Titan and 2080/Ti, and then made a card for the 2070/2060 tier that increased performance for everyone else over the 10 series. But they didn't. Instead, they want everyone to pay for the ray tracing development.

    "brought RTX to mainstream in 6 months"? RTX is NOT mainstream, far from it. The cards are priced so that only those with more money than brains can buy them. Which I assume is you, Phynaz, given your constant defense of RTX; you just need something to justify the way too high price you paid for an RTX card...

    "this is the real reason we have high prices" WRONG. Nvidia put the prices where they are because over the last few years they kept charging more and more for their cards when they didn't need to. All they were worried about was their PROFITS!! Look at Nvidia's comments on their earnings calls between 2018 and 2019, now that the crypto mining craze is dead. That alone shows Nvidia is only worried about profits.
  • CiccioB - Wednesday, June 12, 2019 - link

    You have a convoluted mind, surely because you are a red fanboy who cannot see the facts.
    1. There's really no reason to introduce a feature like ray tracing only on high-end cards that will be maybe 5% of the market, when it needs a big enough user base to be supported. It would have been only a way to say "hey, we are here, so AMD, think about it as well and catch up with us next generation".
    2. Nvidia has not put a gun to your head to "make you all pay for the ray tracing development".
    You are free to buy whatever other card without RTX and stay within your cheap budget.
    3. New features have a cost, and it may shock you, but they have to be paid for somehow by the ones who buy those GPUs. But you are a red fanboy used to cheap, crappy architectures which have not brought a single advancement over the last 10 years, so yes, you may be horrified by the idea that technological advancements have a cost that has to be repaid.
    4. At the end of your worthless rant, you have AMD launching a new generation that is a full process node ahead, still can't reach the competition's efficiency, and most importantly is priced at the same level as the competition without introducing a single new feature (besides packed math). So now you have to buy expensive crap with no advanced features to get the same performance in classic game engines while still using more watts (or, if you want to play with voltages and clocks, the same power but with a node of advantage... yes, that's the advancement we were all waiting for!).
    But don't stress. You can still buy the cheap, power-hungry Polaris crap with no new advanced features that AMD has been selling at a discount since the launch of the GTX 680.
    That is going to help AMD improve its balance sheet and have more money to invest in the next generation. So that next generation, when AMD's chips get fatter for RT support, you can still buy cheap GPUs, not pay for the new features, and again help AMD reach, generations later, the features the competition introduced years before.
  • Korguz - Wednesday, June 12, 2019 - link

    And you don't?? Face it: for the most part, Nvidia priced your coveted new feature out of the hands of most people, and even you must admit that ray tracing on anything but a 2080 is almost useless because of the performance hit.

    1: see above
    2: Stay within my cheap budget?? Um, sorry, but maybe you are still living at home with next to no bills to pay, but some of us have better things to spend our money on, like a mortgage, kids, food, etc. None of the people I know have RTX cards, and it's because they can't justify the high prices your beloved Nvidia is charging for them...
    3. I am a red fanboy?? Tell that to the 4 1060s I own, and the 3 Radeon cards I also own, all in working comps.
    4. At least AMD has priced it A LOT more affordably, so more people can buy it, without the (for the time being) useless main feature that you can't really take advantage of...
    But I will guess you are an Nvidia fanboy who loves to pay for the overpriced cards they have made the last few years... who lives at home, and therefore has more money than brains...
  • CiccioB - Friday, June 14, 2019 - link

    You are a clueless AMD fanboy despite owning some Nvidia cards.
    I'm not for Nvidia at all costs, and there's no doubt that Turing cards are expensive.
    But you are just repeating the usual mantra "AMD is better because it has lower prices".
    The reality is that it has lower prices because it has worse products.
    In fact, now that they believe they have something better (and we'll see if that's true), they are raising their prices.

    The fact that Vega (and Polaris too) is sold at a discount price, so that at the end of the quarter AMD has to cover its losses with the money coming from Ryzen, is not a good thing, even though it is good for your low-budget pocket.
    It is simply a sign that the products are so bad that they need very low prices to match their very low value. It's a simple marketing law that AMD fanboys constantly forget. Actually, it is easy to recognize an AMD fanboy (or an ignorant person, which is the same), as they constantly use dumb reasons to justify their preferred company without knowing the real effects of the strategy AMD is using.

    On Turing, the high prices are due to the large dies. You are not forced to buy those large dies; be happy with your obsolete cheap technology. You think ray tracing won't be useful for another generation or two. If we were waiting for AMD, we would not have it in 10 years, as they have not been able to bring a single technological advancement in 13 years (that's the launch of the TeraScale architecture by ATI).
    They just follow, like a dog does with its prey. It is easy not to have to invest in new things and just discount products to make them appear economically better than they technologically are.
    Big dies, more watts, low prices to stay on par with the competition's lower-tier products.

    You may use all the red glasses you want to look at how things stand with this Navi, but the reality is summed up in 2 simple things:
    1. In 2019, on a completely new process node, they matched Pascal's performance/W.
    2. As soon as Nvidia shrinks Turing, they'll be left in the dust they deserve, not having presented one new feature on what is actually a redesign of an obsolete architecture that should have died in 2012, instead of being sold at a discount all these years, making kids like you believe that low price = better products and never looking at the fact that it is a hole in fiscal quarters.

    And then you fanboys constantly talk about a lack of money to do this and that. It all comes back to the same cause: bad products = low prices = low margins = no money.

    They know this, and they are trying to make money before Nvidia does its shrink (which will be done when the new process node is cheaper, because Nvidia wants money, not your charity) and before Intel comes out with 10nm solutions (which is a bit further out, but they will come, and they will regain the market share they had before).
