As AMD was in the process of ramping up for the Polaris launch last year, one of the unexpected but much appreciated measures they took was to release a bare-bones GPU architecture roadmap for the next few years. AMD has traditionally held their cards very close to their proverbial chest on what they’re working on next, typically only announcing a new architecture weeks before it’s set to launch in retail products. Polaris itself was a departure from that, as it was announced a good 5 months in advance, but last year’s roadmap was the first honest-to-goodness look we’ve had at AMD’s long-term plans in a long time.

What did that roadmap show us? After 2016’s Polaris would come Vega, and after that would be Navi. As a high-level roadmap it didn’t show us much – in fact, other than a timeframe, the only detail attached to Vega was “HBM2” – but it was enough to help us understand one of the things AMD would be doing architecturally to set Vega apart from Polaris. As for the timeframe itself, that was ambiguous at best in AMD’s roadmap. But now, as we draw closer to the launch of Vega, the picture has become clearer: AMD will be hitting a yearly cadence with Vega, and the first chip, which taped out last year, will be launching in the first half of this year (H1’17).

To that end, with Vega’s launch not too far over the horizon, AMD is ready to start talking about what will be their next GPU architecture. Last year at this time we got our first real glimpse into Polaris and what would become the Radeon RX 480/470/460, and this year AMD is back again with a teaser of things to come with Vega.

Setting The Stage: Expectations Management; Less How & More Why

Before we dive into any architectural details, perhaps it’s best we first set the stage. This goes both for setting expectations for today’s announcement and for better understanding what AMD is doing and why.

First and foremost, today’s detail release is a teaser, not a deep dive, or even a preview. AMD is only releasing a few details about Vega, and those are being kept at a high level. In fact, it’s fair to say that there’s just enough information to answer a little and raise even more questions; just what a proper teaser should be.

Why? Well, part of the reason is that we’re still months off from the launch of Vega. I believe it’s fair to say that announcing a first-half-of-the-year launch window when we’re already in 2017 is a strong indicator that Vega will not launch until later in that window, likely some time in Q2. So we’re still a good three to five months out from the launch of Vega, which means AMD doesn’t want to (or need to) release too many details this far out. Rather, they can trickle out chosen details for maximum impact.

At the same time, the AMD of 2017 has more to focus on in the high-performance space than just GPUs. Ryzen launches soon, and they also have other products on the horizon, such as the Radeon Instinct accelerators. Polaris received as much detail as it did because it was all AMD really had to talk about, and because they needed to recover from a rough 2015 in which their power efficiency woes were brought into full focus. But now Vega can share the stage with Ryzen and other products, and that lets AMD be more selective about what they say.

All of which is something I would argue is a good thing. At the end of the day, Polaris was an optimized version of the GCN 1.2 (aka GCN 3) architecture for GlobalFoundries’ 14nm FinFET process. The resulting GPUs were solid competitors in the mainstream and value markets, improving on AMD’s power efficiency in a way they badly needed. But they weren’t high-end parts; they didn’t excite the way high-end parts do, and for technology enthusiasts they didn’t significantly change the architecture itself (in fact, GCN 4 was ISA-compatible with GCN 3, something that doesn’t happen often in the GPU space). AMD talked big about Polaris – perhaps too big – and I do think it hurt them in some circles once it became clearer that this was AMD catching up. Which is not to say that AMD’s marketing arm won’t talk big about Vega as well, but they need not ride the technology angle so hard. Vega is a launch that can be more natural and more subdued, especially as at this point we know AMD is aiming big with a much-needed new generation of high-end parts.

In any case, as AMD isn’t riding the technology angle quite as hard in this year’s teaser, they are spending a bit more time explaining the market and some of the logic behind Vega’s design. For its teasing debut, Vega is a little less discussion of “how” and a little more conversation of “why”.

So what is AMD looking to do with Vega? Besides aiming for the high-end of the market, AMD is looking at how the market for GPUs has changed in the last half-decade, and what they need to do to address it. Machine learning is one part of that, being a market that has practically sprung up overnight to become a big source of revenue for GPUs. This is where the previously announced Radeon Instinct will fit in.

But more than that, it’s about fundamental shifts in how workloads are structured. GPU performance growth has far outpaced growth in GPU memory capacity. Scene geometry complexity has continued to grow. And newer rendering methods have significantly changed GPU memory access patterns.

To that end, AMD is looking to address all of these factors with Vega. Which is not to say that this is everything – this is a teaser, after all – but this is where AMD is starting: where they are going with their next-generation architecture, and how they believe it will address the changes in the market. So without further ado, let’s take a teasing look at what the future has in store for AMD’s GPUs.

Vega’s NCU: Packed Math, Higher IPC, & Higher Clocks
155 Comments

  • Michael Bay - Thursday, January 5, 2017

    It's awesome when your GPU is not burning the whole computer up, true.
  • TheinsanegamerN - Thursday, January 5, 2017

    Hence why you do not buy GTX 480s.
  • lobz - Thursday, January 5, 2017

    u got burned by your fermis, didn't you? =}
  • LordanSS - Saturday, January 7, 2017

    My old FX 5800 was the loudest, hottest card I've ever owned. I didn't own a Fermi but I've read those weren't very nice either.
  • silverblue - Thursday, January 5, 2017

    I'm not sure I follow. Are you saying that David Kanter's discovery forced AMD to bake TBR into Vega?
  • wumpus - Thursday, January 5, 2017

    Looks that way, but the timeline is impossible. What drove both Nvidia and AMD to tiling was that HBM memory wasn't working and they needed serious bandwidth to the framebuffer. Tiling is an obvious way around that issue.

    AMD just found out a generation late that HBM wasn't going to work (because the Fury X never took off), and they'll probably *still* need to tile if they use HBM2.
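For anyone unfamiliar with the idea being referenced above, the bandwidth argument for tiling works roughly like this: triangles are binned into small screen-space tiles, and each tile is then rasterized and shaded out of a small on-chip buffer, so color and depth traffic to external memory happens roughly once per tile rather than once per overlapping triangle. Below is a minimal, purely illustrative sketch of that binning step in Python (hypothetical names and tile size, not any vendor's actual implementation).

TILE = 32  # tile edge length in pixels; purely illustrative, real hardware varies

def bin_triangles(triangles, width, height):
    """Assign each triangle to every screen-space tile its bounding box touches."""
    tiles_x = (width + TILE - 1) // TILE
    tiles_y = (height + TILE - 1) // TILE
    bins = {(tx, ty): [] for ty in range(tiles_y) for tx in range(tiles_x)}
    for tri in triangles:  # tri is a list of three (x, y) vertices
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        x_min, x_max = max(min(xs), 0), min(max(xs), width - 1)
        y_min, y_max = max(min(ys), 0), min(max(ys), height - 1)
        for ty in range(int(y_min) // TILE, int(y_max) // TILE + 1):
            for tx in range(int(x_min) // TILE, int(x_max) // TILE + 1):
                bins[(tx, ty)].append(tri)
    return bins

def render_tiled(triangles, width, height):
    """Shade one tile at a time so pixel data stays 'on chip' until the tile is finished."""
    framebuffer = {}
    for (tx, ty), tris in bin_triangles(triangles, width, height).items():
        tile_buffer = [[(0, 0, 0)] * TILE for _ in range(TILE)]  # stand-in for on-chip tile memory
        for tri in tris:
            pass  # rasterize and shade tri into tile_buffer only (details omitted)
        framebuffer[(tx, ty)] = tile_buffer  # one write of the finished tile to external memory
    return framebuffer

# Example: one triangle touching a handful of tiles on a 256x256 render target.
tiles = render_tiled([[(10, 10), (100, 40), (60, 120)]], 256, 256)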
  • xenol - Thursday, January 5, 2017

    And congrats to NVIDIA for finally using a 12-year-old design.

    Tiled rendering is ancient in computing terms. It's at least as old as the Dreamcast, which had that PowerVR GPU that did tiled rendering.
  • BrokenCrayons - Thursday, January 5, 2017

    Pre-Dreamcast, actually. PowerVR's second-generation video processor was in the Dreamcast, but the first generation was released to market on a 32-bit PCI card, with chips out on the market in 1996, making the concept of tiling about 20 years old.
  • TesseractOrion - Thursday, January 5, 2017

    I had a first-generation PowerVR add-in card (supporting a generic 2D card) and later a PowerVR Kyro (I think it was called). Was it Imagination Technologies? Can't be bothered to look it up LOL
  • BrokenCrayons - Thursday, January 5, 2017

    Yup, the first PowerVR card didn't work as a standalone graphics adapter. I had one too, from Matrox, IIRC. It worked pretty well in the original Unreal (non-Tournament version). I had a lot of fun on Deck 16 with that thing until I replaced it and the S3 ViRGE DX with a Diamond Viper V550 and a Voodoo 2. It was good for 640x480.

    You're right that their later iterations were sold under the Kyro branding. I sold one of them from my computer shop inside a custom-built desktop... well actually, it was a Kyro II, not the original. The techs tinkered with it a little during the build, but I didn't get a chance to mess with it. The other partner and I were pulling several miles of wire in one of our business client's new offices, so it was out the door before I got more than a glance in passing at what it could do. It was fairly competitive with a GeForce 2 if you put enough CPU power behind it (I think it lacked hardware T&L, so it needed the processor power).
