NVIDIA GeForce GTX 680M: Kepler GK104 Goes Mobile
by Jarred Walton on June 4, 2012 9:12 PM EST
Origin PC spoiled the GTX 680M launch party a bit with their announcement of their new EON15-S and EON17-S notebooks this morning, but NVIDIA asked us to avoid discussing the particulars of the new mobile GPU juggernaut until the official NDA time. As you’ve probably guessed, that time is now (or 6PM PDT June 4, 2012 if you’re reading this later). NVIDIA also shared some information on upcoming Ultrabooks, which we’ll get to at the end.
NVIDIA has had their fair share of success with Kepler so far, and the GTX 680 desktop cards continue to sell out. Newegg for example lists 18 GTX 680 cards, but only one is currently in stock: the EVGA GTX 680 FTW comes with a decent overclock and a starting price $70 higher than the standard GTX 680. On the laptop side, we’ve already had a couple of Kepler-based GK107 laptops in for review, and graphics performance has shown a large improvement relative to the previous midrange Fermi parts.
For high-end notebooks, so far the only Kepler GPU has been a higher clocked GK107, the GTX 660M, but increasing core clocks will only take you so far. NVIDIA has continued to sell their previous generation GTX 570M and 580M as the GTX 670M and 675M (with a slight increase in core clocks), but clearly there was a hole at the top just waiting for the GTX 680M, and it’s now time to plug it. Below is a rundown of three of NVIDIA’s fastest mobile GPUs to help put the GTX 680M in perspective.
NVIDIA High-End Mobile GPU Specifications

| | GeForce GTX 680M | GeForce GTX 675M | GeForce GTX 660M |
|---|---|---|---|
| GPU and Process | 28nm GK104 | 40nm GF114 | 28nm GK107 |
| CUDA Cores | 1344 | 384 | Up to 384 |
| Memory Eff. Clock | 3.6GHz | 3GHz | 4GHz |
| Memory | Up to 4GB GDDR5 | Up to 2GB GDDR5 | Up to 2GB GDDR5 |
Just running the raw numbers, the GTX 680M has up to 20% more memory bandwidth than the GTX 675M/580M, thanks to the improved memory controller and higher RAM clocks available with Kepler. The bigger improvement, however, comes in the computational area: even factoring in Fermi’s double-speed shader clocks, the GTX 680M has potentially 103% more shader performance than its predecessor. NVIDIA gives an estimated performance improvement of up to 80% over the GTX 580M, which is a huge generational jump. And while Fermi on the desktop still offers potentially better performance in several compute workloads, there’s a reasonable chance that the gap won’t be quite as large on notebooks—not to mention compute generally isn’t as big of a factor for most notebook users. (And for those that need notebooks with more compute performance, there’s always the Quadro 5010M—likely to be supplemented by a new Quadro in the near future.)
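As a sanity check on those percentages, here’s a quick back-of-the-envelope calculation. Note that the core clocks and bus widths used below (720MHz for GTX 680M, 620MHz core/1240MHz shader for GTX 675M/580M, 256-bit memory buses on both) come from NVIDIA’s published specifications rather than the table above:

```python
# Rough theoretical comparison: GTX 680M vs. GTX 675M/580M.
# Clocks and bus widths are NVIDIA's published specs, not from the table above.

def bandwidth_gbps(bus_bits, eff_clock_ghz):
    """Memory bandwidth in GB/s: bus width in bytes x effective data rate."""
    return bus_bits / 8 * eff_clock_ghz

def gflops(cores, shader_clock_mhz):
    """Peak single-precision GFLOPS: 2 FLOPs (one FMA) per core per clock."""
    return 2 * cores * shader_clock_mhz / 1000

bw_680m = bandwidth_gbps(256, 3.6)   # 115.2 GB/s
bw_580m = bandwidth_gbps(256, 3.0)   # 96.0 GB/s
print(f"Bandwidth gain: {bw_680m / bw_580m - 1:.0%}")   # 20%

# Kepler drops the hot clock, so GTX 680M shaders run at the 720MHz core
# clock; Fermi's 384 cores run at double the 620MHz core clock (1240MHz).
fl_680m = gflops(1344, 720)    # ~1935 GFLOPS
fl_580m = gflops(384, 1240)    # ~952 GFLOPS
print(f"Shader gain: {fl_680m / fl_580m - 1:.0%}")      # 103%
```

Both results line up with the figures quoted above: roughly 20% more memory bandwidth and roughly double the theoretical shader throughput.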
Unfortunately, we’ll have to wait a bit longer to do our own in-house investigation of GeForce GTX 680M performance, as we don’t have any hardware in hand. NVIDIA did provide some performance benchmarks with a variety of games, though, and we’re going to pass along that information in the interim. As always, take such information with a grain of salt, as NVIDIA may be picking games/settings that are particularly well suited to the GTX 680M, but for many of the titles there’s a canned benchmark that should allow for “fair” comparisons.
Assuming the above chart uses the built-in benchmarks in the games that support it, we do have a few points of comparison with the Alienware M18x in GTX 580M and HD 6990M configurations. We’ll skip those, however, as the only game where we appear to run at identical settings is DiRT 3 (43.8FPS if you’re wondering). Luckily, NVIDIA has included similar performance tables in previous launches, so we do have some overlap with their GTX 580M information. First, here’s their full benchmarking page from 580M, and then we’ll summarize the points of comparison.
Tentative Gaming Performance Comparison
(Using NVIDIA GTX 580M/680M Results)

| Game | GTX 680M (FPS) | GTX 580M (FPS) | Increase |
|---|---|---|---|
| Aliens vs. Predator | 59.7 | 39 | 53% |
| Far Cry 2 | 115.6 | 79 | 46% |
| Lost Planet 2 | 57.9 | 33 | 75% |
| Stalker: Call of Pripyat | 96.4 | 50 | 93% |
| StoneGiant (DoF Off) | 67 | 46 | 46% |
| StoneGiant (DoF On) | 36 | 25 | 44% |
| Street Fighter IV | 165.5 | 138 | 20% |
| Total War: Shogun 2 | 97.8 | 59 | 66% |
| Witcher 2 High | 43.7 | 26 | 68% |
| Witcher 2 Ultra | 20.1 | 10 | 101% |
Even given the discrepancies between test notebooks (Clevo’s X7200 with an i7-980X compared to the i7-3720QM), both CPUs have the same maximum Turbo Boost clock (3.6GHz), and we should be GPU limited at these settings, so the above scores look pretty reasonable. The only games that don’t see a >40% increase are Civilization V (which has proven to be CPU limited in the past) and Street Fighter IV (which is already running at >120FPS on both GPUs). There are a few titles where we even see nearly a doubling of performance. We don’t have raw numbers, but NVIDIA is also claiming around a 15-20% average performance advantage over AMD’s Radeon 7970M—hopefully we’ll be able to do our own head-to-head in the near future.
Overall, using NVIDIA’s own numbers it looks like GTX 680M ought to be around 50% faster than GTX 580M. If that doesn’t seem like much, consider that the difference between GTX 480M and GTX 580M was only around 20% (according to NVIDIA and using 3DMark11). A 50% increase in mobile graphics performance within the same power envelope is a huge step; if Kepler manages to reduce power use at all then it will be an even bigger jump. Put another way, a single GTX 680M in the above games using NVIDIA’s own results ends up offering 86% of the performance of GTX 580M SLI, and it will definitely use a lot less power and have fewer headaches than mobile SLI.
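For reference, a simple unweighted mean of the ten increases NVIDIA listed comes out a bit above that ~50% summary figure (this is my arithmetic, not NVIDIA’s methodology, and the list omits CPU-limited titles like Civilization V, which would pull the average down):

```python
# Percentage increases from NVIDIA's GTX 680M vs. GTX 580M results above.
increases = [53, 46, 75, 93, 46, 44, 20, 66, 68, 101]
mean = sum(increases) / len(increases)
print(f"Average increase across listed titles: {mean:.0f}%")  # 61%
```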
As usual, NVIDIA had a wealth of other information to share about their product and software features, and with their latest drivers NVIDIA is adding a few new items. No, we’re not even talking about CUDA or PhysX here (though NVIDIA does at least list those as important features). Optimus also gets a plug, and just as with the 400M and 500M series, all 600M GPUs support Optimus. The difference is that this time around, instead of just Alienware supporting Optimus with their M17x R3, NVIDIA also has MSI and Clevo on board for GTX 680M Optimus.
Briefly covering the other features, Kepler adds support for TXAA, a temporal anti-aliasing algorithm that NVIDIA touts as providing quality near the level of 8xMSAA but with a performance hit similar to that of 2xMSAA—or alternately, even better quality for a performance hit similar to 4xMSAA. It sounds like TXAA will for now require application support, and NVIDIA provided the above slide showing some of the upcoming titles that will have native TXAA built into the game. NVIDIA also made mention of FXAA (Fast Approximate Anti-Aliasing), a full scene shader technique that can help remove jaggies with a very minor performance hit (around 4%). New with their latest drivers is the ability to force-enable FXAA on all games.
Another newer addition is Adaptive V-Sync, which sounds similar in some ways to Lucid’s Virtu MVP solution. In practice, however, it sounds like NVIDIA is simply enabling/disabling V-Sync based on the current frame rate. If a game is running at more than 60FPS, V-Sync will turn on to prevent tearing, while at <60FPS V-Sync will turn off to improve performance and reduce stuttering.
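In pseudocode terms, the behavior NVIDIA describes boils down to a simple per-frame toggle. This is purely a conceptual sketch of the description above; the actual driver logic is not public:

```python
def adaptive_vsync(fps, refresh_rate=60):
    """Conceptual sketch of Adaptive V-Sync as described above:
    sync when the GPU can exceed the refresh rate, tear otherwise."""
    if fps >= refresh_rate:
        return "vsync on"   # cap at the refresh rate to prevent tearing
    return "vsync off"      # avoid the 60->30 FPS stutter of forced V-Sync

print(adaptive_vsync(75))  # vsync on
print(adaptive_vsync(42))  # vsync off
```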
Besides GTX 680M, there should be quite a few Ultrabook announcements coming out at Computex with support for NVIDIA GPUs. We’ve already looked at Acer’s TimelineU M3, and we mentioned ASUS’ UX32A/UX32VD and Lenovo’s new U410. Ultrabooks are quickly reaching the point where they’re “fast enough” for the vast majority of users; the one area where they appear deficient is in graphics performance. Ivy Bridge and HD 4000 in a ULV chip simply aren’t able to provide the same sort of performance we find in the higher TDP chips.
That’s where NVIDIA plans on getting a lot of wins with their GT 610M (48 core Fermi) and their GT 620M (96 core Fermi); GT 620M will initially be available as a 40nm and 28nm part, but we're still trying to find out if GT 610M will also have a 28nm variant. For larger laptops, GT 610M wouldn’t make much sense, but in an Ultrabook it may be just what you need. If so, keep your eyes on our Computex 2012 and Ultrabook coverage, as there’s surely more to come.
MrSpadge - Tuesday, June 5, 2012 - I don't think the GT 610M is worth it for Ultrabooks. Bypassing the 17W TDP limit for the CPU this way may sound nice - but the GPU will also consume power. I'd rather have an Ultrabook which lets me up the CPU to a 25W TDP - the cooling system would need to be able to handle that anyway if it supports a 17W CPU and a GT 610M.
That way I could occasionally get higher pure CPU performance (higher Turbo modes), and HD 4000 should be significantly faster than the GT 610M anyway, even with a hit due to TDP-constrained lower clocks.
JarredWalton - Tuesday, June 5, 2012 - The GPU only consumes power when in use, thanks to Optimus, and presumably it will be on a separate cooling apparatus. I do however agree that in general, if you're at the point where you're looking to add a discrete GPU, you should consider laptops that are a bit larger with SV CPUs.
bennyg - Thursday, June 14, 2012 - Don't buy something tagged "ultrabook" then, instead look at the almost-an-ultrabook group of notebooks. Clevo 11" comes to mind if you want performance.
The real points why you'd bother with a low power dGPU that's barely faster than the HD4000
colonelclaw - Tuesday, June 5, 2012 - What would the equivalent desktop GPU be to the 680M when it comes to FPS? I know there's a lot of architectural differences between laptops and desktops, but I'd be interested to know which GPUs have the same real-world performance.
JarredWalton - Tuesday, June 5, 2012 - Right now, the closest you can get is the GTX 670, which is clocked at 915MHz stock on the core and has 6GHz GDDR5. So the mobile variant is clocked about 21% lower on the core (720MHz vs. 915MHz) and 40% lower on the RAM.
Riek - Tuesday, June 5, 2012 - Assuming turbo boost is similar on both models, of course.
colonelclaw - Thursday, June 7, 2012 - Thanks for the reply, Jarred
Batmeat - Tuesday, June 5, 2012 - So can I pull my 570M out of my MSI gaming notebook and swap in a 680M?
JarredWalton - Tuesday, June 5, 2012 - Possibly.
bobburn - Tuesday, June 5, 2012 - That's pure hogwash based on the reviews I've seen of the 7970M. For Crysis 2, Diablo III, Arkham Asylum, Skyrim, and a plethora of other titles, the 7970M either beats the 680M (by as much as 18%) or matches it, and at worst was just 10% slower. Yes, in some games the 680M beats the 7970M; however, it certainly isn't worth paying more than double what you'd pay for the 7970M.