dac7nco - Friday, January 27, 2012 - link
I was wondering when we'd start seeing bandwidth restrictions from 2.0 x8; looks like Ivy Bridge will be a better-than-anticipated upgrade for those Z68 boards with 3.0 slots.
Daimon
OblivionLord - Friday, January 27, 2012 - link
You'll see the limitation on the current 5970, 590, 6990, and Mars II when used on 8x and 16x 2.0. You won't see any limitation on 3.0 8x and 16x with current cards.
If I had to guess, I'd say that in 2 years the high-end video cards of that time will be powerful enough to finally show a limitation on 16x 2.0 and 8x 3.0, but not 16x 3.0.
dragonsqrrl - Saturday, January 28, 2012 - link
So you'll be limited by 16x 2.0, but not 8x 3.0? How does that work exactly?
Revdarian - Saturday, January 28, 2012 - link
I think, and might be mistaken, that he refers to using those dual-GPU cards in multiple-card solutions.
In that case, well yeah, 8x and 16x 2.0 would be halve the bandwidth of the same setup with 3.0 (this is for quadruple-GPU solutions, a niche market).
Revdarian - Saturday, January 28, 2012 - link
*half even... meh grammar nazi-ing my own post hehehe
Termie - Friday, January 27, 2012 - link
Just as a counterpoint, Techspot just did an article on overclocking and found that several mid-range cards hit around a 15-17% overclock (and I believe this is on stock voltage). Link: http://www.techspot.com/review/486-graphics-card-o...
You may want to note that what's unique about the 7970 is not that it can get up to an 18% overclock on stock volts, but that it is a top-end card that has 18% headroom. The 6970 had nothing close to that, as Techspot found, for instance, so with the more expensive 7970 the headroom should be factored into the cost equation - the price premium for top-end cards rarely comes with this bonus.
As an example, the HD5850, which was introduced as a high mid-range card, typically could reach a 17% overclock at stock volts (both of mine do), and the GTX460 was similar in this regard. That's why they were such value cards. But there's nothing entirely new about this kind of overclocking headroom at stock volts - it's not reserved only for CPUs, as you suggested.
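As a quick sketch of the cost-equation point above, here is a small Python calculation; the prices, relative performance figures, and headroom percentages below are hypothetical placeholders for illustration, not numbers taken from the review or from Techspot.

```python
# Hypothetical prices and stock-volt overclock headroom, purely to illustrate
# how "free" headroom shifts a performance-per-dollar comparison.
cards = {
    "top-end card":     {"price": 549.0, "stock_perf": 1.00, "headroom": 0.18},
    "last-gen top-end": {"price": 369.0, "stock_perf": 0.80, "headroom": 0.05},
    "mid-range card":   {"price": 249.0, "stock_perf": 0.65, "headroom": 0.16},
}

for name, c in cards.items():
    oc_perf = c["stock_perf"] * (1.0 + c["headroom"])
    print(f"{name:17s} perf per $100 at stock: {c['stock_perf'] / c['price'] * 100:.3f}"
          f", overclocked: {oc_perf / c['price'] * 100:.3f}")
```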
Ryan Smith - Friday, January 27, 2012 - link
To clarify things, the point I was attempting to make was in reference to high-end cards - the 580, 6970, 5870, and the like. Mid-range cards have traditionally overclocked better because there's plenty of thermal and power headroom to work with, which is consistent with Techspot's findings. In any case I've slightly edited the article to clarify this point.
darkswordsman17 - Friday, January 27, 2012 - link
I think people will be disappointed in the overclocking part of this article, namely that you didn't do any voltage adjustments. I think people were wanting to see where the sweet spot for voltage is (the best overclock without going too high, and how increased voltage affects heat and power), like you often do with CPUs.
On the flip side, I would have liked to see something about undervolting. I saw someone mention that they had dropped the voltage and were able to maintain clocks, which cut the power consumption by a fair margin with no loss in performance.
Ryan Smith - Friday, January 27, 2012 - link
Considering that this is a reference card, I consider overclocking without voltage adjustment to be far more important. The 7970 is not an overengineered card like the 6990/5970 that was specifically built to be overvolted. It should be possible to give it some more voltage, but given the lack of design headroom in the power circuitry and the cooler, what you can achieve on stock voltage is much more important since it's all "free" performance.
Termie - Saturday, January 28, 2012 - link
Ryan - as usual, thanks so much for being responsive to feedback. And thanks for putting this article together - very informative. That PCIe scaling analysis will be referenced for years to come, in my opinion.
By the way, I agree that stock voltage overclocking is something worthy of being explored. It is a totally separate beast from overvolted overclocking, which not everyone has the skill or knowledge to do. The promise of higher performance and essentially no risk of hardware damage is truly a freebie, as you noted.
CeriseCogburn - Saturday, June 23, 2012 - link
Yep Termie, now the hyper enthusiast experts with their 7970's are noobs unable to be skilled enough to overclock...
Can you amd fans get together sometime and agree on your massive fudges once and for all - we just heard no one but the highest of all gamers and end user experts buys these cards - with the intention of overclocking the 7970 to the hilt, as the expert in them demands the most performance for the price...
We just heard MONTHS of that crap - now it's the opposite....
Suddenly, the $579.00 amd fanboy buyers can't overclock...
How about this one- add this one to the arsenal of hogwash...
" Don't void your warranty !" by overclocking even the tiniest bit..
( We know every amd fanboy will blow the crap out of their card screwing around and every tip given around the forums is how to fake out the vendor, lie, and get a free replacement after doing so )
darkswordsman17 - Tuesday, January 31, 2012 - link
First, sorry for this response being several days later.
Fair enough. I didn't mean it as a real criticism, just more of a nitpick. I realize the state of voltage control on video cards isn't exactly stellar, and I'm sure AMD/nVidia aren't keen on you doing it.
It's certainly not as robust as CPU voltage adjustment is today, which I didn't mean to confuse - I understand there's a pretty significant disparity.
I should have expanded on my comment a bit more. I have a hunch AMD is being pretty conservative on voltage with these (in both directions: it's higher than it needs to be, but it's not as high as it could fairly safely be either). Firstly, probably to play it safe with chips from the new process, but I also think they're giving themselves some breathing room for improvement. After 40nm, they probably didn't want to go for broke right out of the gate, and left some extra that they could push as needed (they have space to release a 7980, something in line with the 4890). Considering the results, it's not like they really need to, especially coupled with the rumored 28nm issues.
Oh, and likewise to Termie, I do still appreciate the work and realize you can't please everyone. I liked the update and actually I think you did enough to touch on the subject in the 7950 review (namely addressing the lack of quality software management for GPUs currently).
mczak - Friday, January 27, 2012 - link
The Leo demo as mentioned in the article has been released (no idea about version):
http://developer.amd.com/samples/demos/pages/AMDRa...
Requires 7970 to run (not sure why exactly if it's just DirectX11/DirectCompute?).
mczak - Friday, January 27, 2012 - link
Actually Dave Baumann clarified it should run on other hw as well.
ltcommanderdata - Friday, January 27, 2012 - link
It seems like we've just finished seeing most major engines like Unreal Engine 3, Frostbite 2.0, and CryEngine 3 transition to a deferred rendering model. Is it very difficult for developers to modify their existing/previous forward renderers to incorporate the new lighting technique used in the Leo demo? Otherwise, given the investment developers have put into deferred rendering, I'm guessing they're not looking to transition back to an improved forward renderer anytime soon.
On a related note, you mentioned the lack of MSAA is a common problem with DX10+. Given this improved lighting technique requires compute shaders, is it actually DX11 GPU only, i.e. does it require CS5.0, or can it be implemented in CS4.x to support DX10 GPUs? According to the latest Steam survey, the majority of GPUs are still DX10 by a wide margin, so game developers won't be dropping support for them for a few years. Some games do support DX11-only features like tessellation, but I presume that having to implement two different rendering/lighting models is a lot more work, which could hinder adoption if the technique isn't compatible with DX10 GPUs.
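For readers wondering roughly what compute-assisted forward lighting of this kind looks like, here is a toy CPU-side sketch in Python of the tile-based light-culling idea. It is only an illustration of the concept, not AMD's Leo demo code, and the tile size, light count, and attenuation model are invented for the example; a real implementation would run the culling pass in a compute shader.

```python
import math, random

# Toy illustration of tile-based light culling for a forward renderer:
# bin lights into screen tiles first, then shade each pixel using only
# the lights assigned to its tile. All values here are made up.
WIDTH, HEIGHT, TILE = 320, 180, 16
random.seed(1)
lights = [  # (x, y, radius, intensity) in screen space
    (random.uniform(0, WIDTH), random.uniform(0, HEIGHT), 40.0, 1.0)
    for _ in range(64)
]

tiles_x = (WIDTH + TILE - 1) // TILE
tiles_y = (HEIGHT + TILE - 1) // TILE

# "Compute pass": build a per-tile light list by testing each light's
# circle of influence against each tile's bounding box.
tile_lights = [[] for _ in range(tiles_x * tiles_y)]
for li, (lx, ly, r, _) in enumerate(lights):
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            cx = min(max(lx, tx * TILE), (tx + 1) * TILE)
            cy = min(max(ly, ty * TILE), (ty + 1) * TILE)
            if (lx - cx) ** 2 + (ly - cy) ** 2 <= r * r:
                tile_lights[ty * tiles_x + tx].append(li)

# "Forward pass": each pixel loops only over its tile's light list,
# so ordinary per-pixel material shading (and MSAA) still apply.
def shade(px, py):
    tile = (py // TILE) * tiles_x + (px // TILE)
    total = 0.0
    for li in tile_lights[tile]:
        lx, ly, r, intensity = lights[li]
        d = math.hypot(px - lx, py - ly)
        if d < r:
            total += intensity * (1.0 - d / r)
    return total

print("brightness at (100, 90):", round(shade(100, 90), 3))
avg = sum(len(t) for t in tile_lights) / len(tile_lights)
print("average lights per tile:", round(avg, 1), "of", len(lights))
```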
Logsdonb - Friday, January 27, 2012 - link
No one has tested the 7970 in a CrossFire configuration under PCIe 3.0. I would expect increased bandwidth to benefit the most in that environment. I realize the 7800 series will be a better candidate for CrossFire given price, heat, and power consumption, but a test with the 7900 series would show the potential.
piroroadkill - Friday, January 27, 2012 - link
I'm sorry, I might be pretty drunk, but I'm falling at the first page.
"PCIe Bandwidth..."
There's a clear difference between 8x and 16x PCIe 3.0
Even if it is small, it is there, showing some bottlenecking. If it was inside the margin of error, you'd expect they'd switch places. They didn't. There is clear bottlenecking.
Concillian - Friday, January 27, 2012 - link
I saw some stuff flying around about SMAA a month or two ago... seemed promising and a better alternative to FXAA, but I haven't seen much in the "official" media outlets about it.
It'd be nice to see some analysis on SMAA vs. FXAA vs. Morphological AA in an article covering the current state of AA.
Ryan Smith - Friday, January 27, 2012 - link
As I understand it, SMAA is still a work in progress. It would be premature to comment on it at this time.
tipoo - Friday, January 27, 2012 - link
If I remember correctly, TB provides the bandwidth of a PCIe 4x connection. So if a high end card like this isn't bottlenecked with that much constraint, it sure looks good for external graphics! You'd need a separate power plug of course, but it now looks feasible.
evilspoons - Friday, January 27, 2012 - link
MSI is working on that already :)
http://www.anandtech.com/show/5352/msis-gus-ii-ext...
tipoo - Friday, January 27, 2012 - link
Yeah, that looks sweet... Now for non-Mac laptops to get Thunderbolt. I think some Sonys already have it, but Ivy Bridge laptops for sure.
repoman27 - Saturday, January 28, 2012 - link
TB controllers have a PCIe 2.0 x4 back end, but the protocol adapter can only pump 10Gbps, so Thunderbolt devices essentially share the equivalent of 2.5 lanes of PCIe 2.0. I was hoping that PCIe 3.0 x1 performance would be tested as well, since that would show bottlenecking very similar to what could be expected from a Thunderbolt-connected GPU.
Torrijos - Sunday, January 29, 2012 - link
I was wondering this too... Is there any word on Thunderbolt adapting to newer PCIe versions?
The first release uses PCIe 2; are we going to see (with Ivy Bridge) TB using PCIe 3, with more than an effective doubling of bandwidth (since they reduced overhead with PCIe 3)?
All of a sudden we would end up closer to external graphics in docking stations (or directly with large high-res displays) for ultra-light laptops.
DanNeely - Sunday, January 29, 2012 - link
We'd see less than a doubling of bandwidth if TB 2.0 just went from PCIe 2.0 to 3.0 clocks, because TB already incorporates a high-efficiency encoding like 3.0 does. That's why a TB 1.0 connection can carry 2.5 PCIe 2.0 lanes of data over a channel whose raw capacity is only 2 lanes wide.
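To put rough numbers on that, here is a back-of-the-envelope calculation using the commonly published line rates and encoding overheads; treat the figures as approximate rather than spec quotations.

```python
# Approximate per-lane payload rates from line rate * encoding efficiency.
pcie2_clock, pcie3_clock = 5.0, 8.0          # GT/s per lane
pcie2_eff, pcie3_eff = 8 / 10, 128 / 130     # 8b/10b vs 128b/130b encoding

pcie2_lane = pcie2_clock * pcie2_eff         # 4.0 Gb/s of payload per lane
pcie3_lane = pcie3_clock * pcie3_eff         # ~7.88 Gb/s of payload per lane

tb1_channel = 10.0                           # TB 1.0 payload rate, already low-overhead
print(f"TB 1.0 carries {tb1_channel / pcie2_lane:.2f} PCIe 2.0 lanes of payload "
      f"over a link clocked like {tb1_channel / pcie2_clock:.1f} raw lanes")

# PCIe 3.0's ~2x per-lane gain splits into a clock part and an encoding part:
print(f"clock gain {pcie3_clock / pcie2_clock:.2f}x * encoding gain "
      f"{pcie3_eff / pcie2_eff:.2f}x = {pcie3_lane / pcie2_lane:.2f}x")

# A hypothetical TB revision that only adopted the faster clock would see just
# the 1.6x clock part, since its encoding is already efficient.
print(f"hypothetical TB at PCIe 3.0 clocks: {tb1_channel * pcie3_clock / pcie2_clock:.0f} Gb/s")
```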
tynopik - Friday, January 27, 2012 - link
If your main conclusion is that x8 3.0 is plenty for CrossFire, shouldn't you, you know, ACTUALLY TEST CrossFire at x8 3.0?
bumble12 - Friday, January 27, 2012 - link
First sentence of the second paragraph of the first page: "Next week we’ll be taking a look at CrossFire performance and the performance of AMD’s first driver update."
Guspaz - Friday, January 27, 2012 - link
I don't really understand why dumb SSAA would be so hard to implement in a game-independent, API-independent, renderer-independent fashion. The driver can simply present a larger framebuffer to the game (say, 3840x2160 for a 1080p game) and, as a final step before swapping the buffer, average the pixel values in 2x2 blocks, supersampling down to the target resolution.
I mean, this is how antialiasing used to work in the days before MSAA, and while there's a big performance penalty there, it has the virtue of working in any scenario, on any content or geometry.
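As a sketch of the 2x2 box-filter downsample step being described (a NumPy illustration of the idea, not any vendor's actual driver code):

```python
import numpy as np

def ssaa_downsample_2x(frame: np.ndarray) -> np.ndarray:
    """Average 2x2 pixel blocks of an oversized framebuffer.

    `frame` is an (H, W, C) array rendered at twice the target resolution
    in each dimension (e.g. 3840x2160 for a 1080p output); the result is
    the (H/2, W/2, C) supersampled image described in the comment above.
    """
    h, w, c = frame.shape
    blocks = frame.reshape(h // 2, 2, w // 2, 2, c).astype(np.float32)
    return blocks.mean(axis=(1, 3)).astype(frame.dtype)

# Toy usage: a random "3840x2160" frame downsampled to "1920x1080".
hi_res = np.random.randint(0, 256, size=(2160, 3840, 3), dtype=np.uint8)
lo_res = ssaa_downsample_2x(hi_res)
print(lo_res.shape)  # (1080, 1920, 3)
```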
ItsDerekDude - Friday, January 27, 2012 - link
Here it is!
http://demo.ovh.com/download/37a53453c137425e584a1...
chizow - Friday, January 27, 2012 - link
So PCIe 4GB/s (2.0 x8 or 3.0 x4) is where high-end cards start dropping off and showing noticeable differences in performance. That is definitely going to be the big advantage IVB brings to the mainstream, as you'll be able to get 8GB/s in an x8/x8 config with PCIe 3.0 cards.
It'd be interesting if you could do a comparison at some point on the impact of VRAM, bandwidth, and PCIe bus speeds. An ideal candidate would be a card that has 2x VRAM variants, like a GTX 580 or 6970, that's still fast enough to make things interesting.
Also interesting discussion on the MSAA situation. That helps explain why enabling MSAA has caused VRAM amounts to balloon incredibly in recent games, like BF3, Skyrim, Crysis etc. That extra G-buffer with all that geometry data. Is this what Nvidia was doing in the past with their AA override compatibility bits? Telling their driver to store intermediate buffers for MSAA? Also, wasn't DX10.1/11 supposed to help with this with the ability to read back the multisample depth buffer?
In any case, I for one welcome FXAA. While it does have a blurring effect, the AA it provides with virtually no loss in performance is amazing. It allows me to run much lower levels of AA (4xMSAA + 4xTSAA max, or even 2x+2x) in conjunction with FXAA to achieve better overall AA at the expense of slight blurring. MSAA+TSAA+FXAA provides similar full-scene AA results as the much more performance expensive SGSSAA for me.
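To illustrate the G-buffer point a couple of paragraphs up, here is a rough VRAM estimate in Python; the render-target count and formats are invented assumptions for the example, not the layout of any particular engine.

```python
# Rough estimate of why MSAA balloons VRAM with a deferred G-buffer.
# The render targets and byte sizes below are illustrative assumptions.
width, height = 1920, 1080
msaa_samples = 4
gbuffer_targets = [
    ("albedo",   4),   # bytes per sample, e.g. RGBA8
    ("normals",  8),   # e.g. RGBA16F
    ("material", 4),
    ("depth",    4),
]

bytes_per_sample = sum(size for _, size in gbuffer_targets)
no_msaa_mb = width * height * bytes_per_sample / 2**20
msaa_mb = no_msaa_mb * msaa_samples
print(f"G-buffer without MSAA: {no_msaa_mb:.0f} MiB")
print(f"G-buffer with {msaa_samples}x MSAA: {msaa_mb:.0f} MiB")
```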
shin0bi272 - Friday, January 27, 2012 - link
LOL yeah, cause with Nvidia's 780 coming out in a month I'm gonna go blow a load of cash on a card that's only marginally faster than the 580... riiiight. Nvidia released a performance slide for their 780 vs the 580... the 780 was more than twice as fast as the 580 in all the games they tested... some almost 2.5x as fast. If the rumored specs are true, it will have almost identical specs to the 590, only on a 28nm die in a single chip. This is why you never jump at the first offerings of a new generation of cards. Especially when, if you've been doing your research, you know both chips are being made at the same foundry, both taped out about the same time, and that AMD went with the lower-power chips first instead of the high-k metal gates like Nvidia did. Now Nvidia is doing a hard launch, not a paper launch, at the end of March. Way to jump the gun, dude.
mak360 - Saturday, January 28, 2012 - link
And then you woke up!!
DanNeely - Sunday, January 29, 2012 - link
Rumors are nVidia's new flagship GPU will have 1024 cores; so a slightly more than 2x speedup seems reasonable once you factor the architectural tweaks in.
Sabresiberian - Saturday, January 28, 2012 - link
Heh, yeah, riiiight!
SAAB_340 - Saturday, January 28, 2012 - link
Microsoft Flight Simulator X is a "game" that is limited by a 16x PCIe 2.0 bus (when not using the buggy DX10 preview). You would easily find this when you fly low over a forest with loads of autogenerated trees.
marc1000 - Saturday, January 28, 2012 - link
What about the 7950? And the 7870? AMD will lose momentum if it takes too long to launch new cards.
Sabresiberian - Saturday, January 28, 2012 - link
"For any given game the amount of data sent per frame is largely constant regardless of resolution, so we’ve opted to test everything at 1680x1050."What causes the higher resolutions to require significantly more GPU power then?
;)
Ryan Smith - Monday, January 30, 2012 - link
The additional pixels that must be rendered. The setup data (triangles, textures, etc.) is constant, but the actual rendering workload scales with the resolution of the image to be rendered.
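A quick illustration of that scaling, comparing the test resolution against a common 30-inch panel resolution (simple pixel arithmetic, nothing more):

```python
# Setup data per frame is constant; per-pixel shading work is not.
test_res, high_res = (1680, 1050), (2560, 1600)
test_pixels = test_res[0] * test_res[1]
high_pixels = high_res[0] * high_res[1]
print(f"{test_pixels:,} vs {high_pixels:,} pixels "
      f"-> {high_pixels / test_pixels:.2f}x the per-frame pixel work")
```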
AstroGuardian - Sunday, January 29, 2012 - link
Yesterday you scared the crap out of me when Anandtech was unavailable. I thought it was another victim of the damn SOPA stuff... Damn glad to see you guys again :)
fausto412 - Sunday, January 29, 2012 - link
We need focus back on driver support, bug fixing and quality. Shouldn't have to wait 3 months for proper drivers for a card you bought or for a hot new game like BF3.
annnonymouscoward - Sunday, January 29, 2012 - link
Thanks for measuring the PCIe bandwidth results--this is good data. If I may nitpick: please follow SI, and abbreviate seconds with "s" and volts with "V".
MGSsancho - Monday, January 30, 2012 - link
This information also makes me dream of the unannounced Xbox. Is this compatible with the current eDRAM-based hardware AA on the Xbox 360? Another question: could this possibly help a mythical next-gen gaming console do more in hardware?
Th-z - Monday, January 30, 2012 - link
I like the visuals of the demo; surface materials, lighting, and shadows all look natural and refreshing. It has the quality of an offline renderer. It's much better than most games out there today, in which all surfaces look alike - overly shiny surfaces, unnatural glow, and general blurriness. I know a lot of that has to do with the hardware limitations of consoles, and developers like to use excessive post-processing to hide it, but the look is getting old.
The demo shows how software (AMD's new forward rendering technique) targeting specific hardware can produce stunning results.
MySchizoBuddy - Monday, March 5, 2012 - link
How come you guys don't do stability testing? Run the cards for 48 hours straight using Folding@home and see what happens. Also do testing of ECC memory and how error-prone the card is.
I understand the authors at AnandTech hardly ever read the comments section, so if anyone knows how I can get my wish list to AnandTech, please let me know.
N1bble - Tuesday, March 13, 2012 - link
I've got an Asus P5K Pro motherboard with PCIe 1.0 or 1.1 (GPU-Z says 1.1, the manual simply says PCI-E x16).
Does this test prove that the cards will work on PCIe 1.x, or could there be other issues, since PCIe 3.0 isn't fully compatible with 1.x?
mudy - Monday, April 23, 2012 - link
... I see that you have stated the consequences for the PCIe lanes if you use a non-IVB processor, but not what happens if you have a PCIe 2.0-compliant GPU, and that too in SLI! Also, an article on how the lanes would be distributed under all scenarios would be welcome: PCIe 2.0 with a single GPU or dual/triple GPUs, and PCIe 3.0 with single/dual/triple GPUs!!
I am very excited about this new technique for calculating lighting in a forward renderer. Deferred MSAA is a disaster, and postAA gives mediocre results, so I really hope we are going to see a move back to forward rendering in the next iteration of engines, in 2-3 years.