The staggered birth of Kaveri has been an interesting story to cover, though keeping all the pieces at the forefront of memory has been difficult. The initial launch in January 2014 saw a small number of SKUs, such as the A10-7850K and the A8-7600, and since then new models have trickled onto the shelves at a rate of one or two a quarter. We have also seen 65W SKUs, such as the A10-7800, which offer 45W modes as well. Today we are reviewing the most recent Kaveri processor to hit the market: the A8-7650K, rated at 95W and officially priced at $105.

AMD's APU Strategy

Integrated graphics is one of the cornerstones of both the mobile and the desktop space. Despite the love we might harbor for a fully discrete graphics solution, the truth of the matter is that most systems, in both consumer and business environments, still run on an integrated platform. Whenever I meet with AMD, their question is always simple: when you build a system, what would you get from AMD or Intel at a similar price point? The APU series tackles the sub-$200 price bracket from top to bottom:

CPU/APU Comparison (Amazon pricing on 5/12)

AMD Kaveri                                    Price   Intel Haswell
--                                            $236    i5-4690K (4C/4T, 88W), 3.5-3.9 GHz, HD 4600
--                                            $199    i5-4590 (4C/4T, 84W), 3.3-3.7 GHz, HD 4600
--                                            $189    i5-4460 (4C/4T, 84W), 3.2-3.4 GHz, HD 4600
A10-7850K (2M/4T, 95W), 3.7-4.0 GHz, 512 SPs  $140    i3-4330 (2C/4T, 54W), 3.5 GHz, HD 4600
A10-7800 (2M/4T, 65W), 3.5-3.9 GHz, 512 SPs   $135    --
A10-7700K (2M/4T, 95W), 3.4-3.8 GHz, 384 SPs  $120    i3-4130 (2C/4T, 54W), 3.4 GHz, HD 4400
A8-7650K (2M/4T, 95W), 3.3-3.8 GHz, 384 SPs   $104    --
A8-7600 (2M/4T, 65W), 3.1-3.8 GHz, 384 SPs    $96     Pentium G3430 (2C/2T, 53W), 3.3 GHz, HD (Haswell)
X4 860K (2M/4T, 95W), 3.7-4.0 GHz, no IGP     $83     --
--                                            $70     Pentium G3258 (2C/2T, 53W), 3.2 GHz, HD (Haswell)
A6-7400K (1M/2T, 65W), 3.5-3.9 GHz, 256 SPs   $64     Celeron G1830 (2C/2T, 53W), 2.8 GHz, HD (Haswell)

I first created this table with launch pricing, and several of the APUs/CPUs sat in different positions. But since the release dates of these processors vary on both sides, the prices of individual SKUs have been adjusted to compete. Perhaps appropriately, we get a number of direct matchups, including the A10-7700K and the Core i3-4130 at $120 right now. This table is by no means complete, given Intel's 20+ other SKUs that fight around the same price points but vary slightly in frequency, but it says a lot about each side's attack on the market. Some of AMD's recently announced price cuts are reflected here, but for consistency our results tables will list the launch pricing, as we have no mechanism for dynamic pricing.

Testing AMD's APUs over the years has shown that they are not necessarily targeted at the high end, such as multi-GPU systems totaling $2000+, although AMD wouldn't mind if you built a high-end system with one. The key element of the APU has always been the integrated graphics, and the ability to devote more performance (or a larger percentage of transistors) to graphics than the competition does at various price points, irrespective of TDP. Ultimately AMD likes to promote that against a similarly priced Intel+NVIDIA solution, a user can enable Dual Graphics with an APU plus an R7 discrete card for better performance. That being said, the high-end APUs have also historically been considered for single discrete GPU gaming, where the most expensive thing in the system is the GPU, as we showed in our last gaming CPU roundup; we need to put together a new one of those soon.

Part of the new set of tests for this review is to highlight the usefulness of Dual Graphics, as well as to compare both AMD and NVIDIA graphics cards in low-end, mid-range, and high-end gaming arrangements.

The A8-7650K

The new APU slots into the stack between the 65W A8-7600 and the A10 models, the first of which is the A10-7700K. It offers slightly lower clock speeds than the A10 parts, but with the K moniker it is built (in part) for overclocking. The integrated graphics provide 384 SPs at 720 MHz as part of AMD's '4+6 compute core' strategy: four CPU cores plus six GPU compute units. The A8-7650K is designed to fill out the processor stack to that end.
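
As a quick sanity check on what a '4+6' configuration means for raw throughput, the sketch below converts GPU compute units into streaming processors and peak FP32 GFLOPS. This is a back-of-the-envelope calculation, assuming GCN's 64 SPs per compute unit and two floating-point operations per SP per cycle (one fused multiply-add); these are generic GCN figures rather than numbers AMD quotes for these specific SKUs.

    SPS_PER_CU = 64             # streaming processors per GCN compute unit
    FLOPS_PER_SP_PER_CYCLE = 2  # one FMA counts as two floating-point ops

    def igp_peak_gflops(gpu_compute_units, igp_mhz):
        """Peak single-precision GFLOPS of the integrated GPU."""
        sps = gpu_compute_units * SPS_PER_CU
        return sps * FLOPS_PER_SP_PER_CYCLE * igp_mhz / 1000.0

    print(igp_peak_gflops(6, 720))  # A8-7650K (4+6): 384 SPs -> ~553 GFLOPS
    print(igp_peak_gflops(8, 720))  # A10-7850K (4+8): 512 SPs -> ~737 GFLOPS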

AMD Kaveri Lineup

                       A10-7850K  A10-7800  A10-7700K  A8-7650K  A8-7600  X4 860K  A6-7400K
Price                  $140       $135      $120       $104      $96      $83      $64
Modules                2          2         2          2         2        2        1
Threads                4          4         4          4         4        4        2
Core Freq. (GHz)       3.7-4.0    3.5-3.9   3.4-3.8    3.3-3.8   3.1-3.8  3.7-4.0  3.5-3.9
Compute Units          4+8        4+8       4+6        4+6       4+6      4+0      2+4
Streaming Processors   512        512       384        384       384      N/A      256
IGP Freq. (MHz)        720        720       720        720       720      N/A      756
TDP                    95W        65W       95W        95W       65W      95W      65W
DRAM Freq. (MHz)       2133       2133      2133       2133      2133     1866     1866
L2 Cache               2x2MB      2x2MB     2x2MB      2x2MB     2x2MB    2x2MB    1MB
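
Those DRAM frequency figures matter because the integrated GPU shares system memory with the CPU. As a rough illustration (my calculation, not an official AMD figure), peak theoretical bandwidth for dual-channel DDR3 with a 64-bit bus per channel works out as follows:

    def peak_bandwidth_gbps(mt_per_s, channels=2, bus_bytes=8):
        """Peak theoretical DRAM bandwidth in GB/s (decimal units)."""
        return mt_per_s * 1e6 * bus_bytes * channels / 1e9

    print(peak_bandwidth_gbps(2133))  # DDR3-2133: ~34.1 GB/s
    print(peak_bandwidth_gbps(1866))  # DDR3-1866: ~29.9 GB/s

This shared, comparatively modest bandwidth is one reason APU graphics performance tends to scale with memory speed.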

At a list price of $105 (currently $104), we were in a quandary over what to test against it from team blue. The Pentium G3258 sits at $72 with two cores at 3.2 GHz and HD (Haswell) GT1 graphics. The next one up the stack is the i3-4130, a dual core with HyperThreading and HD 4400 graphics, but it sits at $120. Ultimately there is no direct price competitor, but AMD assured us they were confident in the positioning of the SKUs, particularly where gaming is concerned. Based on what I have in my testing lab, the nearest competitors are the i3-4330, a model with a larger L3 cache that carries a list price of $138, and the i3-4130T, a low-power SKU.

New Testing Methodology

Comments

  • Gigaplex - Tuesday, May 12, 2015

    Mantle for AMD discrete GPUs runs on Intel CPUs, so it is a completely valid test for CPU gaming performance.
  • CPUGPUGURU - Tuesday, May 12, 2015

    Mantle was developed as an AMD GCN API, so don't go telling us it's optimized for Intel or Nvidia, because it's NOT! Mantle is DOA, dead and buried; stop pumping a zombie API.
  • silverblue - Wednesday, May 13, 2015

    You've misread Gigaplex's comment, which was stating that you can run an AMD dGPU on any CPU and still use Mantle. It wasn't about using Mantle on Intel iGPUs or NVIDIA dGPUs, because we know that functionality was never enabled.

    Mantle isn't "dead and buried"; sure, it may not appear in many more games, but considering it's at the very core of Vulkan... though that could be just splitting hairs.
  • TheJian - Friday, May 15, 2015

    Incorrect. The core of Mantle's sales pitches was HLSL. You only think Mantle is Vulkan because you read Mantle/Vulkan articles on Anandtech...LOL. Read PCPER's take on it, and understand how VASTLY different Vulkan (headed by Nvidia's Neil Trevett, who also came up with OpenGL ES, BTW) is from Mantle. At best AMD ends up equal here, and at worst Nvidia always has an inside track, with the president of Khronos being the head of Nvidia's mobile team too. That's pretty much like Bapco being written by Intel software engineers and living on Intel land across the street from Intel itself...ROFL. See Van Smith's articles on Bapco/SYSmark etc., and why Tom's Hardware SHAMEFULLY dismissed him and removed his name from his articles ages ago.

    Anandtech seems to follow this same path of favoritism for AMD these days since the 660 Ti article - having an AMD portal but no Nvidia portal, Mantle lovefest articles, etc. - the same reason I left Tom's years ago, circa 2001 or so. It's not the same team at Tom's Hardware now, but the damage done then is still in many minds today (and shown at times in forum posts etc.). Anandtech would be wise to change course, but Anand isn't running things now, and doesn't even own them today. I'd guess stock investors in the company that bought Anandtech probably hold massive shares in sinking AMD ;) But that's just a guess.

    http://www.pcper.com/reviews/General-Tech/GDC-15-W...
    Real scoop on Vulkan. A few bits of code don't make Vulkan Mantle...LOL. If it was based on HLSL completely you might be able to have a valid argument but that is far from the case here. It MIGHT be splitting hairs if this was IN, but it's NOT.

    http://www.pcper.com/category/tags/glnext
    The articles on glNext:
    "Vulkan is obviously different than Mantle in significant ways now, such as its use of SPIR-V for its shading language (rather than HLSL)."
    CORE? LOL. Core of Vulkan would be HLSL and not all the major changes due to the GROUP effort now.

    Trevett:
    "Being able to start with the Mantle design definitely helped us get rolling quickly – but there has been a lot of design iteration, not the least making sure that Vulkan can run across many different GPU architectures. Vulkan is definitely a working group design now."

    Everything that was AMD-specific is basically gone, as is the case with DX12 (Mantle ideas, but not direct usage). Hence NV showing victories in AMD's own Mantle showcase now (Star Swarm)...ROFL. How bad is that? Worse, NV was chosen for the DX12 Forza demo, which is an AMD console game. Why didn't MS choose AMD?

    They should have spent the time they wasted on Mantle making DX12/Vulkan driver advances, not to mention DX11 driver improvements, which affect everything on the market now and probably for a while into the future (until Win10 takes over, at least, if ever, and if Vulkan is on billions of everything else first), rather than a few Mantle games. Nvidia addressed the entire market with their R&D while AMD wasted it on Mantle, consoles & APUs. The downfall of AMD started with a really bad ATI purchase price and it has been killing them ever since.
  • TheJian - Friday, May 15, 2015

    Mantle is almost useless for FAST cpus and is dead now (wasted R&D). It was meant to help AMD's weak cpus, which only needed helping because they let guys like Dirk Meyer (who in 2011 said it was a mistake to spend on anything but CORE cpu/gpu, NOT APU) & Keller go ages ago. Adding Papermaster might make up for missing Meyer though. IF they had NOT made these mistakes, we wouldn't even have needed Mantle, because they'd still be in the cpu race with much higher IPC, as we see with ZEN. You have no pricing power in APUs, as they feed poor people and are being crushed by ARM coming up and Intel going down to stop them. GAMERS (and power users) will PAY a premium for stuff like Intel and Nvidia, & AMD ignored engineers who tried to explain this to management. It is sad they're now hiring them back to create again what they never should have left to begin with. The last time they made money for the year was with Athlons and high IPC. Going into consoles instead of spending on CORE products was a mistake too, which is why Nvidia said they ignored it. We see they were 100% correct, as consoles have made AMD nothing, and they lost the CPU & GPU race while dropping R&D on both, screwing the future too. The years spent on this crap caused AMD's current problems: 3yrs on cpu/gpu with zero pricing power, selling off fabs and land, laying off 1/3 of employees, etc. You can't make a profit on low-margin junk without having massive share. Now, if AMD had negotiated 20%+ margins from the get-go on consoles, maybe they'd have made money over the long haul. But as it stands now they may not even recover the R&D and time wasted, as mobile is killing consoles halfway through their life with die shrinks and yearly revisions, far cheaper games, and massive numbers sold yearly, all of which is drawing devs away from consoles.

    Even now with 300's coming (and only top few cards are NOT rebadges which will just confuse users and piss them off probably), Nvidia just releases a faster rehash of tech waiting to answer and again keep a great product down in pricing. AMD will make nothing from 300's. IF they had ignored consoles/apus they would have ZEN out already (2yrs ago? maybe 3?) and 300's would have been made on 28nm optimized possibly like maxwell squeezed out more perf on the same process 6 months ago. Instead NV has had nearly a year to just pile up profits on an old process and have an answer waiting in the wings (980ti) to make sure AMD's new gpu has no pricing power.

    Going HBM when it isn't bandwidth-starved is another snafu that will keep costs higher, especially with low yields on that and the new process. But again, because of a lack of R&D (after blowing it on consoles/APUs), they needed HBM to help drop the wattage instead of having a great 28nm low-watt alternative like Maxwell that can still milk very cheap old GDDR5, which has more than enough bandwidth as speeds keep increasing. HBM is needed at some point, just not today for a company needing profits that has no cash to burn on low yields etc. They keep making mistakes and then having to make bad decisions to compensate, which stifles much-needed profits. They also need to follow Nvidia in splitting fp32 from fp64, as that will further cement NV gpus if they don't. When you are a professional at both things instead of a jack-of-all-trades loser in both, you win in perf and can price accordingly while keeping die size appropriate for both.

    Intel hopefully will be forced back to this due to ZEN also on the cpu side. Zen will cause Intel to have to respond because they won't be able to shrink their way to keeping the gpu (not with fabs catching Intel fabs) and beat AMD with a die fully dedicated to CPU and IPC. Thank god too, I've been saying AMD needed to do this for ages and without doing it would never put out another athlon that would win for 2-3yrs. I'm not even sure Zen can do this but at least it's a step in the right direction for profits. Fortunately for AMD an opening has been created by Intel massively chasing ARM and ignoring cpu enthusiasts and desktop pros. We have been getting crap on cpu side since AMD exited, while Intel just piled on gpu side which again hurt any shot of AMD making profits here...LOL. They don't seem to understand they make moves that screw themselves longer term. Short term thinking kills you.
  • ToTTenTranz - Wednesday, May 13, 2015

    Yes, and the APU being reviewed, the A8-7650K, also happens to be "AMD ONLY", so why not test Mantle? There's a reasonable number of high-profile games that support it:

    - Battlefield 4 and Hardline
    - Dragon Age: Inquisition
    - Civilization: Beyond Earth
    - Sniper Elite III

    Plus another bunch coming up, like Star Wars Battlefront and Mirror's Edge.

    So why would it hurt so much to show at least one of these games running Mantle with a low-specced CPU like this?

    What is anandtech so afraid to show, by refusing to test Mantle comparisons with anything other than >$400 CPUs?
  • V900 - Thursday, May 14, 2015

    There isn't anything to be scared of, but Mantle is only available in a handful of games, and beyond those it's dead and buried.

    Anandtech doesn't run Mantle benchmarks for the same reason they don't review AGP graphics cards: It's a dead technology aside from the few people who currently use it...
  • chizow - Tuesday, May 12, 2015

    I seriously considered an A10-7850K Kaveri build last year around this time for a small power-efficient HTPC to stream DVR'd shows from my NAS, but in the end a number of issues steered me away:

    1) Need for chassis, PSU, cooler.
    2) Lack of good mini-ITX options at launch.
    3) Not good enough graphics for gaming (not a primary consideration anyways, but something fast enough might've changed my usage patterns and expectations).

    Sadly, this was the closest I've gotten to buying an AMD CPU product in a long, long time but ultimately I went with an Intel NUC that was cheaper to build, smaller form factor, and much less power usage. And all I gave up was GPU performance that wasn't realistically good enough to change my usage patterns or expectations anyways.

    This is the problem AMD's APUs face in the marketplace today though. That's why I think AMD made a big mistake in betting their future on Fusion; people just aren't willing to trade fast, efficient, or top-of-the-line CPUs for a mediocre CPU/GPU combo.

    Today, there are even bigger challenges out there for AMD. You have Alienware offering the Alpha with an i3 and a GTX 860M that absolutely destroys these APUs in every metric for $500 ($400 on sale), and it takes care of everything from chassis, PSU, and cooling to Windows licensing. That's what AMD is facing now in the low-end PC market, and I just can't see them competing with that kind of performance and value.
  • silverblue - Tuesday, May 12, 2015

    I would have opted for the A8-7600 instead of the 7850K, though I do admit it was very difficult to source back then. 65W mode doesn't perform much faster than 45W mode. I suppose it's all about what you want from a machine in the end, and AMD don't make a faster CPU with weaker iGPU which might make more sense.

    The one thing stopping AMD from releasing a far superior product, in my eyes, was the requirement to at least try to extract as much performance from a flawed architecture so they could say it wasn't a complete waste of time.
  • galta - Tuesday, May 12, 2015

    +1
    Fusion was not only poor strategy, it was poor implementation.
    Leaving aside the discussion of the merits of integrated GPUs, if AMD had done it right we would have seen Apple adopting their processors in the MacBook series, given Apple's obsession with slim hardware with no discrete graphics.
    Have we seen that? No.
    You see, even though Intel has never said that integrated GPUs were the future, they claimed the single most important customer in that market segment.
