AMD Launches Radeon HD 7700M, 7800M, and 7900M GPUs

Late last year, AMD pushed out the first of their Southern Islands GPUs for the desktop, the HD 7970. Around the same time, AMD also announced their first 7000M mobile GPUs. Since then, AMD has gone on to launch the HD 7950, the HD 7750/7770, and the HD 7850/7870. Meanwhile, on the mobile front we’ve had to sit back and wait…and wait. Today, the waiting ends, at least for one of the parts: the HD 7970M is now shipping in select notebooks, and the remaining GPUs should start showing up in additional laptops and notebooks over the coming weeks.

As is customary for AMD and NVIDIA, new GPUs debut on the desktop, and after a while they trickle down into the mobile world. NVIDIA actually pulled a fast one with GK107, which came out ahead of the desktop GK104 and may be a sign of changing times, but AMD’s GCN is sticking with the traditional route of using lower clocked, power optimized versions of already launched desktop chips for its mobile parts. Not that there’s anything wrong with that from a business standpoint, but it does make laptop users feel like second class citizens. Before we continue the discussion, let’s list the specs.

AMD Radeon HD 7900M, 7800M, and 7700M

                          Radeon HD 7900M   Radeon HD 7800M   Radeon HD 7700M
Core Name                 Wimbledon         Heathrow          Chelsea
Stream Processors         1280              640               512
Texture Units             80                40                32
ROPs                      32                16                16
Z/Stencil                 128               64                64
L2 Cache                  512KB             512KB             512KB
Core Clock                850MHz            800MHz            675MHz
Memory Clock (effective)  4.8GHz            4.0GHz            4.0GHz
Memory                    2GB GDDR5         2GB GDDR5         2GB GDDR5
Memory Bus Width          256-bit           128-bit           128-bit
Memory Bandwidth          153.6GB/s         64GB/s            64GB/s
PCI Express               3.0               3.0               2.1

Starting at the high end, AMD will have the 7900M series. Note that there will be more than one part in each family, but AMD is currently providing configurations only for the highest performance part in each category. At the top, the HD 7970M uses a fully enabled Pitcairn core. The GPU clock is 850MHz compared to 1000MHz (stock) on the desktop HD 7870, but surprisingly AMD is going whole hog on the RAM and featuring 2GB of 4.8GHz GDDR5. That makes the 96GB/s of bandwidth on NVIDIA’s GTX 580M/675M positively pale in comparison, and we’d wager the HD 7970M will easily reclaim the mobile performance crown, at least until NVIDIA launches the inevitable GTX 680M; we don’t know when that will be. Going by the core clocks, the HD 7970M should be about 15% slower than the HD 7870, and given that no consumer laptops have yet shipped with an LCD resolution above 1920x1200, you can look at our HD 7870 benchmarks and subtract 15% to get a pretty good idea of how the HD 7970M will perform.
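For readers who want to check the math, here’s a quick back-of-the-envelope sketch in Python showing where the 153.6GB/s figure comes from and what naive clock scaling from an HD 7870 result would look like. The linear-scaling assumption and the sample frame rate are ours, not AMD’s, and real games rarely scale perfectly with core clock.

```python
# Back-of-the-envelope estimates only; linear clock scaling is an assumption, not a measurement.

def gddr5_bandwidth_gb_s(bus_width_bits: int, effective_clock_ghz: float) -> float:
    """Theoretical bandwidth in GB/s: bus width in bytes times effective data rate."""
    return (bus_width_bits / 8) * effective_clock_ghz

print(gddr5_bandwidth_gb_s(256, 4.8))  # HD 7970M: 153.6 GB/s
print(gddr5_bandwidth_gb_s(256, 3.0))  # GTX 580M/675M: 96.0 GB/s

# Naive performance estimate: scale a desktop HD 7870 result by the core clock ratio.
hd7870_clock_mhz, hd7970m_clock_mhz = 1000, 850
scaling = hd7970m_clock_mhz / hd7870_clock_mhz  # 0.85, i.e. roughly 15% slower
hd7870_fps = 60.0  # hypothetical HD 7870 result at 1920x1080; substitute a real benchmark
print(hd7870_fps * scaling)  # ~51 FPS estimate for the HD 7970M
```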

AMD was also "kind" enough to provide a comparison slide with their own benchmarks, showing performance relative to the HD 6990M. As always, take these graphs for what they're worth:

The HD 6990M was certainly no slouch as far as mobile gaming is concerned; you can see how it stacked up against the GTX 580M in our Alienware M18x head-to-head in both single- and dual-GPU configurations. The quick summary is that across the eight games we tested last year, SLI GTX 580M averaged out to approximately 8% faster than CrossFire HD 6990M; for single GPUs, the results were more in NVIDIA’s favor: the GTX 580M was 8% faster at our Ultra settings, and 17% faster at our High settings. Assuming AMD’s numbers are correct (and given the amount of memory bandwidth and GPU cores we’re looking at, we see no reason why they wouldn’t be), it looks like the HD 7970M will be around 45% faster than the HD 6990M on average, or about 25% faster than a single GTX 580M.
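For a quick sanity check on those percentages, here’s the arithmetic as a short sketch, using the single-GPU High-settings delta for the GTX 580M; the exact ratios will of course vary by game and settings.

```python
# Relative performance normalized to the HD 6990M = 1.00 (figures from the text above)
hd6990m = 1.00
gtx580m = 1.17  # GTX 580M ~17% faster than HD 6990M at our High settings
hd7970m = 1.45  # AMD's slides suggest ~45% faster than the HD 6990M on average

# Implied advantage of the HD 7970M over a single GTX 580M
advantage = (hd7970m / gtx580m - 1) * 100
print(f"{advantage:.0f}%")  # ~24%, i.e. roughly 25% faster
```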

AMD also presented some data showing their estimated performance for the HD 7970M vs. the GTX 675M, which you can see below. It appears the numbers were simulated using desktop hardware (a Core i7-2600K), and while they’re likely accurate, the selection of games and the chosen settings could be debated; AMD obviously isn’t an unbiased review source. Now we can wait for NVIDIA’s inevitable response with a high-end mobile Kepler.

The other two GPUs are an interesting pair. Both use the Cape Verde core, but where the 7800M is a fully enabled 640 core part, the 7700M disables a couple of compute units and ends up with 512 cores and 32 texture units. AMD also clocks the 7700M lower, most likely to hit lower TDP targets for laptops rather than because of inherent limitations with the chips. Compared to the desktop parts, the (presumed) HD 7870M will run the core at 800MHz vs. 1000MHz on the HD 7770 GHz Edition, and memory is at 4GHz effective compared to 4.5GHz on the desktop part. For the (again presumed) HD 7770M, the core will run at 675MHz compared to 800MHz on the desktop HD 7750.
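To see how the 512 core / 32 texture unit configuration falls out of disabling compute units, here’s a quick sketch; the per-CU figures (64 stream processors and 4 texture units) are standard GCN building blocks rather than anything AMD spelled out in this announcement.

```python
# GCN compute unit (CU) building blocks
SPS_PER_CU = 64   # stream processors per CU
TMUS_PER_CU = 4   # texture units per CU

cape_verde_cus = 10               # fully enabled Cape Verde (HD 7800M): 640 SPs, 40 TMUs
hd7700m_cus = cape_verde_cus - 2  # two CUs disabled for the HD 7700M

print(hd7700m_cus * SPS_PER_CU)   # 512 stream processors
print(hd7700m_cus * TMUS_PER_CU)  # 32 texture units
```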

There’s one other big difference between the HD 7700M and the HD 7800M: PCI Express 3.0 support will not be present on the 7700M. Before anyone gets too upset, we need to put things in perspective. First, while PCIe 3.0 has improved performance with the HD 7970 on the desktop, the HD 7700M is a part with only one fourth the compute power of that chip. Second, remember what we just said about the HD 7700M being clocked lower most likely in order to hit TDP targets? We asked AMD about the removal of PCIe 3.0 support (given that both families use Cape Verde, the potential is certainly there), and their response confirmed our suspicions: “The Cape Verde die itself supports PCIe 3; the reason we chose not to include it in our 7700M is because it mostly targets platforms where power saving is king, and the sacrifice (though not huge) in that regard would not have been justified by the small performance gain going from gen 2 to gen 3.”
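For context on how small the gen 2 to gen 3 jump actually is next to the GPU’s local memory bandwidth, here’s a rough comparison of x16 link throughput. The per-lane figures (roughly 500MB/s for PCIe 2.x after 8b/10b overhead and roughly 985MB/s for PCIe 3.0 with 128b/130b encoding) are standard numbers, not anything AMD provided.

```python
# Approximate usable throughput of an x16 link, per direction, in GB/s
PCIE2_PER_LANE = 0.5    # PCIe 2.x: 5GT/s with 8b/10b encoding -> ~500MB/s per lane
PCIE3_PER_LANE = 0.985  # PCIe 3.0: 8GT/s with 128b/130b encoding -> ~985MB/s per lane
LANES = 16

print(LANES * PCIE2_PER_LANE)  # ~8.0 GB/s: what the HD 7700M is limited to
print(LANES * PCIE3_PER_LANE)  # ~15.8 GB/s: what PCIe 3.0 would have offered
# Both are well below the HD 7700M's 64GB/s of local GDDR5 bandwidth,
# so the real-world gain from gen 3 would have been small.
```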

And for the curious, once again AMD provided an estimate of performance for the 7870M vs. the GTX 560M (simulated using desktop hardware). Results are at 1920x1080/1920x1200 with a variety of quality settings, so take the following with a grain of salt.

Comments

  • Brandenburgh_Man - Tuesday, April 24, 2012 - link

    When you first look at AMD's Radeon HD 7000 Performance graph, the vertical length of the Red bars makes the HD 7970M look like it's, on average, 3 times faster than the Yellow bars for the HD 6990M. Wow, what a BEAST!

    Then you look at the scale on the left and realize that, at best, it's only 60% faster and, on average, 40%. Such cheap tricks make me lose all respect for the company.
  • A5 - Tuesday, April 24, 2012 - link

    Pretty much every company, ever, has done something like this. Intel, Nvidia, Apple, Qualcomm, ARM, etc.
  • Gc - Tuesday, April 24, 2012 - link

    That doesn't mean AnandTech has to repeat their misleading graphs.
    AnandTech can strive to be a more accurate, less misleading source of information, not just a press release repeater.
  • JarredWalton - Tuesday, April 24, 2012 - link

    Our text dissects the information, and right now this is the only indication of performance that we have. I just assume everyone who comes here is smart enough to read the charts and understand what they mean. It's a press release for the most part, and we're not performing the tests. I certainly don't want to make an AnandTech style graph that people might actually interpret as being our own independent test results!
  • 6kle - Friday, April 27, 2012 - link

    "I just assume everyone who comes here is smart enough to read the charts and understand what they mean."

    I don't think this is a case of people being too dumb for your article. Why do you think there are so many complaints about the misleading graphs? It's because when there are tons of readers there will always be some who forgot to "check", even if they are not dumb. It's called being human. Why put such "traps" in an article for people to look out for? You are basically saying people can't skim your articles quickly and need to read all the disclaimers you have placed somewhere in the text? Is that good journalism for a site that strives to provide accurate information?

    The only reason to use visual graphs is to give a visual representation of the difference by the means of comparing their size. Not making them start from "zero" completely destroys the very purpose of using them in the first place.

    If you need to start reading the numbers at the side of the graphs you might as well just drop the bars entirely and stick to using numbers only.

    I find the criticism very valid in this case and would hope for you to learn from it. I am sad to see that you are instead calling some people (indirectly) stupid. I think it's more stupid in this case to use those bars when it can be avoided.

    I would suggest you either correct the bars to start from zero or use numbers only (you can put a big clear label above them saying something like "Source: AMD Marketing" so people won't think they are test results from this site).
  • seapeople - Friday, April 27, 2012 - link

    Wow, so now in your mind anyone who skims an AnandTech article should be able to quickly glance at the relative height of bars in a graph and presume that AnandTech has conducted in-house tests and validated that card A is 4x better than card B, despite the article and graph both clearly being labeled as based on OEM-provided results only?

    I'm glad I don't work for a place like AnandTech, the posters would mentally destabilize me.
  • 6kle - Saturday, April 28, 2012 - link

    I don't know what you are talking about. You seem to be quoting me but I never said anything like that.
  • seapeople - Friday, April 27, 2012 - link

    I think Jarred should remake the graph with the minimum value at 0 and then have the maximum value be at 1 million. Would you be happy then?
  • seapeople - Friday, April 27, 2012 - link

    You must have trouble buying gas, then, because I'm sure someone like you would lose respect for someone who charges $3.99 and 99/100 for a gallon of gas, which makes people look and think, "Wow, I'm only paying three dollars a gallon!"
  • Tujan - Tuesday, April 24, 2012 - link

    I'm confused. You say 'GPU' for a notebook. There is a photo of a 'pin' type processor, and this then is not for a PCIe slot, etc. Notebooks do not have dual sockets, and I thought that ATI was running APUs within most of their new product lines, even in notebooks.
    So what is the situation here? Exactly how does a 'GPU' run in a notebook that has an APU? Or is this something that runs in a notebook's PCIe slot, etc.? Somebody tell me where this 'fits' in, and what notebook platform/series they could be used in/for.

    Note: I don't have any notebooks, and haven't taken a look at any circuit board layouts for any.
