We’ve just returned from sunny Bellevue, Washington, where AMD held their first Fusion Developer Summit (AFDS). As with other technical conferences of this nature, such as NVIDIA’s GTC and Intel’s IDF, AFDS is a chance for AMD to reach out to developers to prepare them for future products and to receive feedback in turn. While AMD can make powerful hardware, it’s ultimately the software that runs on it that drives sales, so it’s important for them to reach out to developers to ensure that such software is being made.

AFDS 2011 served as a focal point for several different things going on at AMD. At its broadest, it was a launch event for Llano, AMD’s first mainstream Fusion APU that launched at the start of the week. AMD has invested the future of the company into APUs, and not just for graphical purposes but for compute purposes too. So Llano is a big deal for the company even though it’s only a taste of what’s to come.

The second purpose of course was to provide sessions for developers to learn more about how to utilize AMD’s GPUs for compute and graphics tasks. Microsoft, Acceleware, Adobe, academic researchers, and others were on hand to provide talks on how they’re using GPUs in current and future projects.

The final purpose – and what is going to be most interesting to most outside observers – was to prepare developers for what’s coming down the pipe. AMD has big plans for the future and it’s important to get developers involved as soon as is reasonably possible so that they’re ready to use AMD’s future technologies when they launch. Over the next few days we’ll talk about a couple of different things AMD is working on, and today we’ll start with the first and most exciting project: AMD Graphics Core Next.

Graphics Core Next (GCN) is the architectural basis for AMD’s future GPUs, both for discrete products and for GPUs integrated with CPUs as part of AMD’s APU products. AMD will be instituting a major overhaul of its traditional GPU architecture for future generation products in order to meet the direction of the market and where they want to go with their GPUs in the future.

While graphics performance and features have been and will continue to be important aspects of a GPU’s design, AMD and the rest of the market have been moving towards further exploiting the compute capabilities of GPUs, which in the right circumstances can be utilized as massively parallel processors that complete a number of tasks in a fraction of the time a highly generalized CPU would need. Since the introduction of shader-capable GPUs in 2002, GPUs have slowly evolved to become more generalized so that their resources can be used for more than just graphics. AMD’s most recent shift was to their VLIW4 architecture with Cayman late last year; now they’re looking to make their biggest leap yet with GCN.
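To make that contrast concrete, below is a minimal data-parallel sketch written in OpenCL C, the C-derived kernel language AMD was promoting for GPU computing at the time; the kernel and its names are purely illustrative and not taken from AMD’s materials. Where a CPU would step through all N elements of a SAXPY operation (y = a*x + y) in a serial loop, a GPU launches one lightweight work-item per element and keeps thousands of them in flight at once.

    /* Illustrative OpenCL C kernel (an assumption for this example, not AMD code):
       each work-item handles exactly one element of y = a*x + y. */
    __kernel void saxpy(const float a,
                        __global const float *x,
                        __global float *y)
    {
        size_t i = get_global_id(0);  /* this work-item's position in the 1-D range */
        y[i] = a * x[i] + y[i];       /* one element per work-item, all in parallel */
    }

The host program still has to compile the kernel and queue it over a range of N work-items, but the key point is the shape of the work: many tiny independent operations rather than one long serial loop, which is exactly what a GPU’s wide execution hardware is built for.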

GCN at its core is the basis of a GPU that performs well at both graphical and computing tasks. AMD has stretched their traditional VLIW architecture as far as they reasonably can for computing purposes, and as more developers get on board for GPU computing a clean break is needed in order to build a better performing GPU to meet their needs. This is in essence AMD’s Fermi: a new architecture and a radical overhaul to make a GPU that is as monstrous at computing as it is at graphics. And this is the story of the architecture that AMD will be building to make it happen.

Finally, it should be noted that the theme of AFDS 2011 was heterogeneous computing, as it has become AMD’s focus to get developers to develop heterogeneous applications that effectively utilize both AMD’s CPUs and AMD’s GPUs. Ostensibly AFDS is a conference about GPU computing, but AMD’s true strength is not their CPU side or their GPU side, it’s the combination of the two. Bulldozer will be the first half of AMD’s future APUs, while GCN will be the other half.

Comments (83)

  • Targon - Sunday, June 19, 2011 - link

    AMD wants to put an end to the GPU in the chipset, but no one expects dedicated CPU and GPU to go away. Now, the code that would take advantage of the APU would probably work with a full AMD CPU/AMD GPU combination, so the software side of things would not need a lot of change to support both configurations.
  • khimera2000 - Sunday, June 19, 2011 - link

    Agreed, dedicated cards will not go away; however, integrated graphics as we've known it will.

    I think we see eye to eye on this. AMD wants to take full advantage of all its hardware, and it looks like the way they're trying to do it is by combining the CPU and integrated GPU into one package, then setting things up so information that goes into that package doesn't have to leave it to be processed, like being sent out to RAM by the CPU only to be read back by the GPU.

    Still want to see how this will work across PCI-E. I can already see future reviews and comparisons on how effective GPU acceleration is with their integrated approach vs. discrete cards. And buying those discrete cards :D

    By the time these parts come out my desktop will be right in the middle of its upgrade cycle :D
  • Targon - Monday, June 20, 2011 - link

    AMD needs to push for the HTX slot again for discrete video, where there is a direct HyperTransport link between the CPU and whatever is plugged into that slot. PCI-Express is decent, but HTX would and should blow the doors off PCI-Express.
  • rnssr71 - Friday, June 17, 2011 - link

    I wish this were coming next year, especially in Trinity, but at least they are heading in the right direction :) Also, to those wondering about improvements in gaming ability, look at what AMD did with Cayman vs. Cypress: improved efficiency and noticeably improved performance on the same manufacturing process. http://www.anandtech.com/bench/Product/294?vs=331
    GCN is going to improve efficiency even further, and they are cutting the transistor size roughly in half.
  • nlr_2000 - Saturday, June 18, 2011 - link

    "Unfortunately, those of you expecting any additional graphics information will have to sight tight for the time being." sight = sit
  • EnerJi - Saturday, June 18, 2011 - link

    I wonder if this architecture would be a particularly good fit for a next-generation Xbox (due around 2013)? Any thoughts on this?
  • GaMEChld - Saturday, June 18, 2011 - link

    2013? I heard 2015, unless they recently changed dates to counter Nintendo. Anyway, I'm not so sure what benefits a console would realize from this, since full-blown PCs barely get to utilize much of the technology we currently have access to. Multi-threading, 64-bit support, and advanced CPU instructions are all available yet barely utilized features.

    Also, consoles are designed to be cost-effective and relatively cheap, so a modified older-generation architecture is usually used. For example, the new Wii uses Radeon 4700-class graphics, which sounds old but is roughly twice as powerful as the graphics in the X360 (Radeon X1900 class) or PS3 (GF7000 class).
  • DanNeely - Saturday, June 18, 2011 - link

    That's true of the Wii because Nintendo doesn't subsidize the console, but MS and Sony went after higher-end GPUs for their last launches. The Xbox 360 launched using a GPU similar to the ATI X1900, a bare month and a half after the card hit the market. The PS3 used a GF7800 derivative and launched roughly a year after the GF7800 did. The GF7900 was NVIDIA's top-of-the-line card at the time, but it was only a marginal improvement over the 7800.
  • swaaye - Saturday, June 18, 2011 - link

    PS3 actually launched about when G80 came out, which obviously made RSX look awfully retro when you saw 7900GTX SLI being beaten in reviews by a single board. ;) But G80 surely was never an option for a console due to size and power.

    Xenos has less than half the pixel fillrate of the X1900. The X1900 also has 48 pixel shader units plus 8 vertex shaders, so it might have an advantage over Xenos' 48 unified units, especially when clock speed and access to a large RAM pool over a 256-bit bus are taken into account.
  • GaMEChld - Sunday, June 19, 2011 - link

    But we must also bear in mind that the X360 and PS3 may have aimed high because of the concurrent shift to 720p/1080p resolution from the old 480p standard. At this point in time, 1080p is the standard resolution, so greatly escalating GPU horsepower will show diminishing gains, since people aren't really going to be gaming at resolutions higher than the new standard TV resolution.

    What I mean is, if a Radeon 5000 Series could maximize all graphics quality at 1080p, why would a console manufacturer bother with more power?

    For example, you wouldn't buy a GTX590 or Radeon 6990 just to game on a 1080p monitor, would you?

    The only exception I can think of for this TV resolution argument is 3DTV gaming, in which case I am not well versed in the added GPU overhead required to render a 3D game.
