AMD's Radeon HD 5670: Sub-$100 DirectX 11 Starts Today
by Ryan Smith on January 14, 2010 12:00 AM EST - Posted in GPUs
Conclusion
With the 5670's combination of performance and price, AMD has put itself in an interesting position, one with both upsides and downsides.
From a product perspective, AMD has positioned the 5670 against NVIDIA's GT 240, and it completely dominates that card on every last performance metric. Although the 8800 GT had already done a good job of nullifying the GT 240, the 5670 finishes the job. In a head-to-head comparison it's faster, cooler, and more future-proof since it supports DX11. NVIDIA can't maintain the $99 price point with the GT 240, and in fact isn't going to: as of this writing the average GT 240 price is closer to $80, effectively relegating it to another price bracket altogether. Ultimately this can't be good for NVIDIA, since the Redwood GPU is smaller (and hence cheaper to produce) than the GT215 GPU at the heart of the GT 240.
Meanwhile, compared to the 4670, AMD has priced the 5670 appropriately ahead of a card that has slipped to $70 and below. As the 4670's successor, the 5670 is much faster, runs cooler, and sports a much better feature set, including audio bitstreaming. You're going to have to pay for all of that, however, so the 4670 still has a purpose in life, at least until the 5500 series gets here.
Then we have the well-established cards: NVIDIA's 9800 GT and AMD's Radeon HD 4850. The 9800 GT can commonly be found for $99 or less, while the 4850 comes in and out of stock around that price point. AMD is continuing to manufacture the 4850 (in spite of earlier reports that it was EOL'd), so while it's hard to get, it's not discontinued like the 4770 was. Considering its availability and the fact that it hasn't been EOL'd like we previously believed, I'm not going to write it off.
So where does that leave the 5670? It does surprisingly well against the 9800 GT. It wins in some cases, trails very slightly in a few more, and outright loses only in games where the 5670 is already playable up through 1920x1200. From a performance standpoint I think the 9800 GT is ahead, but not by enough to matter; meanwhile the "green" 9800 GT shortens the gap even more, yet it still draws over 10W more than the 5670. In that respect the 5670 is a good enough replacement for the 9800 GT, and on top of that it adds support for DX11, Eyefinity, and 3D Blu-ray when that launches later this year.
Then we have the 4850. The 4850 won’t last forever (at some point AMD will EOL it), but we can currently find a pair of them on Newegg for $99 each. In our existing games, the 4850 wins and it wins by a lot. While the 5670 clearly beats a GT 240 and is a good enough alternative to a 9800 GT, I can’t make a performance case against the 4850. The 4850 has more of everything, and that means it’s a much more capable card with today’s games.
AMD’s argument for this matter is that the 4850 is an older card and doesn't support everything the 5670 does. This is true – forgoing the 5670 means you lose DX11, bitstreaming audio, and Eyefinity among other things. But while this and the much lower power draw make the 5670 a better HTPC card, I'm not sure this is a convincing argument for it as a pure gaming card.
To prove a point, we benchmarked the 5670 on some DX11 games at what we'd consider to be reasonable "medium" settings. For Battleforge we used the default Medium settings with SSAO set to Very High (to bring its Compute Shader 5.0 path into play), and for the STALKER benchmark we also used Medium settings, with Tessellation and Contact Shadows enabled. These are settings we believe a $99 card should be able to play at with DX11's big features in use.
Radeon HD 5670 DirectX 11 Performance
                  | Battleforge DX11 | STALKER DX11
Frames Per Second |       19.4       |     27.2
The fact of the matter is that neither game is playable at those settings; the 5670 is simply too slow. This test would be better served with more DX11 benchmarks, but based on our limited sample we have to question whether the 5670 is fast enough for DX11 games. If it's not (and these results suggest as much), then being future-proof can't justify the lower performance. Until AMD retires the 4850, it's going to be the better gaming card, so long as you can live with its greater power draw and larger physical size.
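For reference, the playability call above boils down to averaging results against a rough 30fps floor. Here is a minimal sketch of that kind of check; the per-game numbers come from the table above, while the threshold and helper are illustrative rules of thumb, not the review's formal methodology:

```python
# Sanity-check benchmark results against a playability floor.
# 30 fps is a common rule of thumb; the figures below are from the table above.

PLAYABLE_FPS = 30.0

results = {
    "Battleforge DX11 (Medium, SSAO Very High)": [19.4],
    "STALKER DX11 (Medium, Tessellation + Contact Shadows)": [27.2],
}

for game, runs in results.items():
    avg = sum(runs) / len(runs)  # average across however many runs were logged
    verdict = "playable" if avg >= PLAYABLE_FPS else "too slow"
    print(f"{game}: {avg:.1f} fps -> {verdict}")
```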
There’s really no way around the fact that, in the short term, performance at the $99 price point is going to go down, so we won't try to reconcile it. In an ideal world we'd go from the 4850 to a 5670 that has similar performance plus all of the 5670's other advantages, but that isn't going to happen until 5750 cards fall by about $30. On the flip side, at least the 5670 is significantly better than the GT 240.
Ultimately, AMD has produced a solid card. It’s not the 5850 or the 5750 – cards which immediately turned their price brackets upside down – but it’s fast enough to avoid the fate of the GT 240 and has enough features to stand apart. It’s a good HTPC card, and by pushing a DX11 card out at $99, buyers can at least get a taste of what DX11 can do even if it’s not quite fast enough to run it full-time (not to mention it further propagates DX11, an incentive for developers). Pure gamers can do better for now, but in the end it’s a good enough card.
Stay tuned, as next month we’ll have a look at the 5500 series and the 5450, finishing off AMD’s Evergreen chip stack.
Comments
JarredWalton - Sunday, January 17, 2010 - link
As far as the CPU multiplier goes, if you have the i7-720QM the normal multiplier is 12X (133MHz bus * 12 = 1.6GHz). For the i7-820QM the stock multiplier is 13X (1.73GHz). Maximum Turbo mode on the 720QM is 2.80GHz, so you could potentially see a 21X multiplier, while on the 820QM the maximum Turbo is 3.066GHz, so you'd see up to a 23X multiplier. I don't know if ThrottleStop tells you max and min multipliers or not, but you could even run CPU-Z and just watch to see if the multiplier is changing a lot.
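The arithmetic here is simply the ~133MHz base clock times the current multiplier. A minimal sketch using the figures from the comment above; the script itself is purely illustrative:

```python
# Core i7 mobile clocks = base clock (~133MHz) * current multiplier.
# Multiplier figures are from the comment above; the code is just a worked example.

BCLK_MHZ = 133.33  # nominal base clock

def core_clock_ghz(multiplier: int) -> float:
    """Effective core clock in GHz for a given multiplier."""
    return BCLK_MHZ * multiplier / 1000.0

for name, stock, max_turbo in [("i7-720QM", 12, 21), ("i7-820QM", 13, 23)]:
    print(f"{name}: stock {core_clock_ghz(stock):.2f} GHz (x{stock}), "
          f"max Turbo {core_clock_ghz(max_turbo):.2f} GHz (x{max_turbo})")
```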
SlyNine - Tuesday, January 19, 2010 - link
Yeah, I have been watching a few programs, including ThrottleStop, RealTemp and RealTemp GT, and i7 Turbo. They all show the max multiplier at 7-9 when gaming under load; even with an external monitor hooked up and the laptop's screen off it doesn't go past 10. It's worth noting that with the screen brightness turned down and a CPU-only load they stay at 12, but turn the brightness up and the multiplier falls to 8.
The biggest problem is the clock modulation, which I'm trying to test, but it definitely correlates with real-world performance. While Task Manager may show the CPU at 100%, ThrottleStop reports a 75% reduction in CPU usage. This also matches the delta between the CPU usage Task Manager indicates and the C0 state percentage that programs like i7 Turbo and RealTemp show: Task Manager will show 100% while the C0% sits at 25%, indicating a 75% reduction while under load.
Perhaps ThrottleStop just measures the difference between the C0% and what the OS reports.
I've custom-set all the settings in the advanced power options to be the same on and off battery. When you unplug it, the system runs a great deal faster, albeit at the risk of harming the battery. I've disabled SpeedStep as well, with no difference.
Excel isn't my strong suit (basically I'm going to have to relearn how to use it), but I'm trying to correlate frame rate with the indicated clock modulation. I'm just unsure how to record a timeline of FPS. It does appear, though, that the FPS numbers accurately reflect when the clock modulation kicks in.
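One low-tech way to get that FPS timeline is to bucket per-frame timestamps into frames-per-second counts that can then be lined up against a multiplier/C0% log. A minimal sketch, assuming a log with one frame timestamp in milliseconds per line (something like a FRAPS "frametimes" export); the file name and column layout are assumptions, not anything these tools guarantee:

```python
# Bucket per-frame timestamps (ms, one per line) into a per-second FPS series,
# so it can be correlated against a ThrottleStop/RealTemp multiplier log.
from collections import Counter

def fps_timeline(path: str) -> list[tuple[int, int]]:
    """Return (second, frames rendered during that second) pairs."""
    frames_per_second = Counter()
    with open(path) as f:
        for line in f:
            try:
                ms = float(line.strip().split(",")[-1])  # last column = timestamp in ms
            except ValueError:
                continue  # skip headers and blank lines
            frames_per_second[int(ms // 1000)] += 1
    return sorted(frames_per_second.items())

for second, fps in fps_timeline("frametimes.csv"):  # assumed file name
    print(f"t={second:4d}s  {fps:3d} fps")
```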
satish2685 - Monday, April 1, 2013 - link
Hi, I would like to purchase an entry-level 1GB DDR3 Asus HD 5450 graphics card, but considering the power requirements, I only have a 250W PSU. Is it OK to buy a graphics card that lists a minimum 400W PSU and connect it to my existing motherboard, or do I need to upgrade my PSU? If so, are there any consequences I could face in the future? Advice required.
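For what it's worth, vendor "minimum PSU" figures describe total system draw, padded generously for low-quality units. A rough back-of-the-envelope budget is sketched below; the HD 5450's ~20W board power is widely cited, but every other number is an assumed example, not a measurement of any particular system:

```python
# Back-of-envelope PSU budget check. HD 5450 board power (~20W) is widely
# cited; all other component figures are assumed examples, not measured values.

psu_watts = 250

system_draw = {
    "CPU": 95,              # assumed mainstream desktop CPU TDP
    "Motherboard/RAM": 40,  # assumed
    "HDD/optical": 25,      # assumed
    "HD 5450": 20,          # approximate board power; needs no PCIe power connector
}

total = sum(system_draw.values())
headroom = psu_watts - total
print(f"Estimated draw: {total}W of {psu_watts}W ({headroom}W headroom)")
print("OK" if headroom > 0.2 * psu_watts else "Consider a bigger PSU")
```

The takeaway under those assumptions is that a low-power card like the HD 5450 can fit within a decent 250W unit with headroom to spare, but that is a judgment call that depends on the rest of the system and the quality of the PSU, not a guarantee.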