In 2014/2015, it took NVIDIA 6 months from the launch of the Maxwell 2 architecture to get GTX Titan X out the door. All things considered, that was a fast turnaround for a new architecture. However, now that we're in the Pascal generation, it turns out NVIDIA is in the mood to set a speed record, and in more ways than one.

Announced this evening by Jen-Hsun Huang at an engagement at Stanford University is the NVIDIA Titan X, NVIDIA’s new flagship video card. Based on the company’s new GP102 GPU, it’s launching in less than two weeks, on August 2nd.

NVIDIA GPU Specification Comparison

                        NVIDIA Titan X   GTX 1080        GTX Titan X   GTX Titan
CUDA Cores              3584             2560            3072          2688
Texture Units           224?             160             192           224
ROPs                    96?              64              96            48
Core Clock              1417MHz          1607MHz         1000MHz       837MHz
Boost Clock             1531MHz          1733MHz         1075MHz       876MHz
Memory Clock            10Gbps GDDR5X    10Gbps GDDR5X   7Gbps GDDR5   6Gbps GDDR5
Memory Bus Width        384-bit          256-bit         384-bit       384-bit
FP64                    1/32             1/32            1/32          1/3
FP16 (Native)           1/64             1/64            N/A           N/A
INT8                    4:1              ?               ?             ?
TDP                     250W             180W            250W          250W
GPU                     GP102            GP104           GM200         GK110
Transistor Count        12B              7.2B            8B            7.1B
Die Size                471mm2           314mm2          601mm2        551mm2
Manufacturing Process   TSMC 16nm        TSMC 16nm       TSMC 28nm     TSMC 28nm
Launch Date             08/02/2016       05/27/2016      03/17/2015    02/21/2013
Launch Price            $1200            MSRP: $599      $999          $999
                                         Founders: $699

Let’s dive right into the numbers, shall we? The NVIDIA Titan X will be shipping with 3584 CUDA cores. Assuming that NVIDIA retains their GP104-style consumer architecture here – and there’s every reason to expect they will – then we’re looking at 28 SMs, or 40% more than GP104 and the GTX 1080.
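As a quick sanity check on that SM count, and assuming GP102 keeps the 128 FP32 CUDA cores per SM of NVIDIA's other consumer Pascal parts (an assumption, not a confirmed figure), the math works out like so:

```python
# SM count sanity check, assuming GP102 uses the same 128 CUDA
# cores per SM as consumer Pascal (GP104/GP106).
cores_per_sm = 128
titan_x_sms = 3584 // cores_per_sm   # GP102-based Titan X
gtx_1080_sms = 2560 // cores_per_sm  # GP104-based GTX 1080

print(titan_x_sms)                              # 28 SMs
print(f"{titan_x_sms / gtx_1080_sms - 1:.0%}")  # 40% more than GTX 1080
```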

It’s interesting to note here that 3584 CUDA cores happens to be the exact same number of CUDA cores found in the Tesla P100 accelerator. These products are based on very different GPUs, but I bring this up because Tesla P100 did not use a fully enabled GP100 GPU; that GPU features 3840 CUDA cores in total. NVIDIA is not confirming the total number of CUDA cores in GP102 at this time, but if GP102 is meant to be a lightweight version of GP100, then this may not be a fully enabled card either. This would also maintain the 3:2:1 ratio between GP102/104/106, as we saw with GM200/204/206.

On the clockspeed front, Titan X will be clocked at 1417MHz base and 1531MHz boost. This puts the total FP32 throughput at 11 TFLOPs (well, 10.97…), 24% higher than GTX 1080. In terms of expected performance, NVIDIA isn’t offering any comparisons to GTX 1080 at this time, but relative to the Maxwell 2 based GTX Titan X, they are talking about an up to 60% performance boost.
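For those who want to check the math, FP32 throughput here is simply cores × boost clock × 2 FLOPs per clock (one fused multiply-add per core per cycle). A quick sketch:

```python
# FP32 throughput = CUDA cores * boost clock * 2 FLOPs/clock (FMA).
cuda_cores = 3584
boost_clock_hz = 1531e6  # 1531MHz

tflops = cuda_cores * boost_clock_hz * 2 / 1e12
print(f"{tflops:.2f} TFLOPs")  # ~10.97, i.e. NVIDIA's "11 TFLOPs"

# Same math for GTX 1080 (2560 cores @ 1733MHz boost)
gtx_1080_tflops = 2560 * 1733e6 * 2 / 1e12
print(f"{tflops / gtx_1080_tflops - 1:.0%} over GTX 1080")  # 24%
```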

Feeding the beast that is GP102 is a 384-bit GDDR5X memory bus. NVIDIA will be running Titan X’s GDDR5X at the same 10Gbps as on GTX 1080, so we’re looking at a straight-up 50% increase in memory bus size and resulting memory bandwidth, bringing Titan X to 480GB/sec.
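The bandwidth figure follows directly from the bus width and per-pin data rate; a quick sketch of the arithmetic:

```python
# Memory bandwidth = (bus width in bits / 8 bits per byte) * data rate.
bus_width_bits = 384
data_rate_gbps = 10  # GDDR5X at 10Gbps per pin

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"{bandwidth_gbs:.0f} GB/sec")  # 480, vs. GTX 1080's 320 (256-bit)
```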

At this point in time there are a few unknowns about the card's other specifications. ROP count and texture unit count have not been disclosed (and this is something NVIDIA rarely posts on their site anyhow), but based on GP104 and GP106, I believe it’s safe to assume that we’re looking at 224 texture units and 96 ROPs. To put this into numbers then, theoretical performance versus a GTX 1080 would be 24% more shading/texturing/geometry/compute performance, 50% more memory bandwidth, and 33% more ROP throughput. Or relative to GTX Titan X (Maxwell 2), 56% more shading/texturing/geometry/compute performance, 43% more memory bandwidth, and 42% more ROP throughput. Of course, none of this takes into account any of Pascal’s architectural advantages, such as its new delta color compression system.

Meanwhile, like the past Titans, the new Titan X is a 250W card, putting it 70W (39%) above GTX 1080. In pictures released by NVIDIA and confirmed by their spec sheet, the card will be powered by the typical 8-pin + 6-pin power connector setup. And speaking of pictures, the handful of pictures released so far confirm that the card will be following NVIDIA’s previous reference design, in the new GTX 1000 series triangular style. This means we’re looking at a blower-based card – now clad in black for Titan X – using a vapor chamber setup like the GTX 1080 and past Titan cards.

The TDP difference between Titan X and GTX 1080 may also explain some of the rationale behind the performance estimates above. In the Maxwell 2 generation, GTX Titan X (250W) consumed 85W more than GTX 980 (165W); but for the Pascal generation, NVIDIA only gets another 70W. As power is the ultimate factor limiting performance, it stands to reason that NVIDIA can't increase performance over GTX 1080 (in the form of CUDA cores and clockspeeds) by as much as they could over GTX 980. There is always the option to go above 250W - Tesla P100 in mezzanine form goes to 300W - but for a PCIe form factor, 250W seems to be the sweet spot for NVIDIA.

Moving on, display I/O is listed as DisplayPort 1.4, HDMI 2.0b, and DL-DVI; NVIDIA doesn’t list the number of ports (and they aren’t visible in product photos), but I’d expect that it’s 3x DP, 1x HDMI, and 1x DL-DVI, just as with the past Titan X and GTX 1080.

From a marketing standpoint, it goes without saying that NVIDIA is pitching the Titan X as their new flagship card. What is interesting, however, is that it’s not being classified as a GeForce card; rather, it’s the amorphous “NVIDIA Titan X”, being neither Quadro, Tesla, nor GeForce. Since the first card’s introduction in 2013, the GTX Titan series has always walked a fine line as a prosumer card, balanced between a relatively cheap compute card for workstations and an uber gaming card for gaming PCs.

That NVIDIA has removed this card from the GeForce family would seem to further cement its place as a prosumer card. On the compute front the company is separately advertising the card's 44 TOPs of INT8 compute performance - INT8 being frequently used for neural network inference - which is something they haven't done before for GeForce or Titan cards. Though make no mistake: the company’s GeForce division is marketing the card and it’s listed on, so it is still very much a gaming card as well.
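That 44 TOPs figure lines up with the 4:1 INT8 rate in the specification table above - four 8-bit operations per FP32 FLOP at boost clock. A quick sketch of the arithmetic:

```python
# NVIDIA's 44 TOPs INT8 figure follows from the 4:1 INT8 rate:
# four 8-bit operations per FP32 FLOP at boost clock.
fp32_tflops = 3584 * 1531e6 * 2 / 1e12  # ~10.97 TFLOPs FP32
int8_tops = fp32_tflops * 4

print(f"{int8_tops:.0f} TOPs INT8")  # ~44
```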

As for pricing and availability, NVIDIA’s flagships have always been expensive, and NVIDIA Titan X even more so. The card will retail for $1200, $200 more than the previous GTX Titan X (Maxwell 2), and $500 more than the NVIDIA-built GTX 1080 Founders Edition. Given the overall higher prices for the GTX 1000 series, this isn’t something that surprises me, but nonetheless it means buying NVIDIA’s best card just got a bit more expensive. Meanwhile for distribution, in a departure from previous generations, the card will only be sold directly by NVIDIA through their website. The company’s board partners will not be distributing it, though system builders will still be able to include it.

Overall the announcement of this new Titan card, its specifications, and its timing raises a lot of questions. Does GP102 have fast FP64/FP16 hardware, or is it purely a larger GP104, finally formalizing the long-anticipated divide between HPC and consumer GPUs? Just how much smaller is GP102 versus GP100? How has NVIDIA been able to contract their launch window by so much for the Pascal generation, launching 3 GPUs in the span of 3 months? These are all good questions I hope we’ll get answers to, and with an August 2nd launch it looks like we won’t be waiting too long.

Update 07/25: NVIDIA has given us a few answers to the questions above. We have confirmation that the FP64 and FP16 rates are identical to GP104's, which is to say very slow, and primarily there for compatibility/debug purposes. With the exception of INT8 support, this is a bigger GP104 throughout.

Meanwhile we have a die size for GP102: 471mm2, which is 139mm2 smaller than GP100. Given that both (presumably) have the same number of FP32 cores, the die space savings and their implications are significant. This is as good an example as we're ever going to get of the die space cost of the HPC features limited to GP100: NVLink, fast FP64/FP16 support, larger register files, etc. By splitting HPC and graphics/inference into two GPUs, NVIDIA can produce GP102 at what should be a significantly lower price (and higher yield), something they couldn't do until the market for compute products based on GP100 was self-sustaining.

Finally, NVIDIA has clarified the branding a bit. Despite labeling it "the world’s ultimate graphics card," NVIDIA this morning has stated that the primary market is FP32 and INT8 compute, not gaming. Though gaming is certainly possible - and I fully expect they'll be happy to sell you $1200 gaming cards - the tables have essentially been flipped from the past Titan cards, where they were treated as gaming first and compute second. This of course opens the door to a proper GeForce branded GP102 card later on, possibly with neutered INT8 support to enforce the market segmentation.

Comments

  • Impulses - Friday, July 22, 2016 - link

    Will it come out for half the price? I'm guessing it wouldn't happen this year but early next year at best; but considering the price, along with current 10x0 pricing, and AMD's schedule (or lack of competition at the high end)... It seems likely NV could get away with an $800 GTX 1080 Ti.

    I'd love to be wrong... I'm sitting on a pair of R9 290 right now (2x 6950 before that, largely NV cards before that). I feel like we're finally at a point where a single card would satisfy me for 4K/Surround resolutions, and this would be it, but I'm not feeling like paying over $750 for the privilege.

    Otherwise I might as well just stick to SLI/CF and go 2x GTX 1070.
  • nathanddrews - Friday, July 22, 2016 - link

    The pricing of the 1080Ti will completely depend upon the price and performance of Big Vega, I'm just making assumptions. Time will reveal all.
  • JeffFlanagan - Monday, July 25, 2016 - link

    For the price you could go with 2x 1080, but SLI comes with a lot of headaches
  • ptown16 - Friday, July 22, 2016 - link

I've been drawing this comparison for a while as well. Nvidia dominates the high end performance area, is massively popular, highly priced, and uses closed-end technology. AMD is much like the Android platform... more open-source, does not compete at the high end in terms of raw performance, and is not viewed as favorably by most people. I'm just holding off on a 1070 for now and hoping to see AMD offer something in the $300-$400 range to replace my 770, which has not aged well at all ever since Maxwell landed.
  • Lolimaster - Friday, July 22, 2016 - link

    1080 still struggles with many games at 4k even to sustain 30fps avrg.

    30% more performance is not going to change it.

    We need Vega10 and VULKAN.
  • otherwise - Monday, July 25, 2016 - link

    I don't think the markup is that obscene. This is a very expensive GPU, but a very cheap compute card. It's a hard line to walk in terms of pricing and marketing.
  • damianrobertjones - Monday, July 25, 2016 - link

    I bet that nVidia could have done 60+ fps last gen but... that wouldn't have brought in the $$$$$
  • TheJian - Tuesday, July 26, 2016 - link

Except it's only AMD who has poor support for previous gen gpus (no money for dx11 etc) :(
    And I seem to remember radeon halo products going for $1500 (and an even steeper NV $3k, though that didn't last long...LOL - it was a price the market wouldn't bare). So both sides do this. As long as they are leaving the shelves FASTER than they can be made why should you set a price lower than whatever you can get? Business is not in business to be nice and make everyone happy (AMD should learn more from nvidia here, it's about making money fools), but rather to make a profit. This is simply supply and demand at work, and a company who is pretty good at figuring out what will help their bottom line. You seem to not understand that an M6000 goes for $5k at launch. The people who are unable to buy that but want to do content creation (games, 4k video etc) will see this as a massive discounted card. If you're struggling with the price of this card, you're not the target audience...LOL. These will fly off the shelves to the people who can't swallow a $5000 quadro and/or don't need 24GB. Many times before people have been seen buying 2-4...ROFL. To a prosumer, your version of obscene markup is downright obscene markdown. Your comment only makes sense if you're a PURE gamer with no other intentions, and even then it's still #1 and we all know what you pay on the top end. HEDT Intel chips for $1730 for instance.

    But yes, if you're gaming without needing the pro side perks, by all means wait for the card with GTX in the name (1080ti) and save $500, not 600. 1080ti will not be $600. It will be $650-750 depending on Vega most likely. No need to push down 1080 with no competition. They will cherry pick the crap out of this gpu for the next 4-5 months and launch 1080ti with faster clocks and 8-10GB and a stack of cards on the shelf on day 1. It would be plum crazy to put out a $600 1080ti if AMD takes until Dec to put out vega and HBM2 will drive up their price vs. a GDDR5x card for no gain in perf unfortunately (NV doesn't even need their current bandwidth).

    I really wish AMD had chose to go with GDDR5x for Vega. They got screwed by HBM1, and looks like they're going to do it again. IE too small of a market to lower cost, 4GB limit on HBM1, production more difficult than other mem leading to shortage etc. The only thing I see fixed this time is the 4GB limit. It doesn't even matter if your card is fast if you can't produce near enough to meet the demand. You should be limited by YOUR production of your product, not some part ON your product on top of your own issues.

    I still can't wait for vega vs. Titan X/1080ti but it sucks to see AMD might be set up to fail profit wise yet again with their halo product. Samsung HBM2 will likely go to HPC first, and Sk-hynix is mass production in Q3, so AMD will be lucky to get an xmas card out, let alone ramping it for then. Nvidia meanwhile will be using GDDR5x that has been produced for a year by the time 1080ti ships and was already cheaper to move to from GDDR5.

    As far as the closed garden goes, no different than AMD with mantle which never was designed for others, they just failed to push their own API due to small market share etc. How would NV make gamestream (of games)/gsync work with gpus from AMD? That would require a lot of effort on timings etc to help your competition (uh, silly). Shield can be bought by anyone, it just gets better if you have an NV gpu, which inspires sales. The only reason AMD is even sort of friendly is they are always behind and have no share. When they were ahead, they had $1000 cpus also and all of their chips were more expensive vs. today across the product line and I'd know as a reseller for 8yrs. Today they're cheap because nobody wants them.
    You can however get NV stuff to stream to other devices not owned by NV. It's a work in progress, but still...Again, why would NV want to do that and how much work when vid is encoded via NV's gpu? It's not as easy to control another company's gpu in this case IMHO.

    There is also few other solutions for AMD such as remotr or splashtop. Again, it's AMD (or someone else) who needs to do their homework here, not NV. The lack of an AMD based solution is an AMD funding problem. Nvidia is doing you a favor by offering this feature and since it's on their gpus it will be the best experience they can offer. AMD should be doing the same (adding features!). Adding value to your product is a GOOD thing for customers. It's just one more reason I might buy their gpu vs. AMD unless Vega is VERY good on power, perf and price. The only one I'd waver on there for AMD is price. I'll pay more as long as you win on perf and watts/heat. I won't pay more if you lose either of these as your product is inferior IMHO if that is the case so I'd expect at least some price discount. Though IMHO AMD gives far too much and kills their bottom line repeatedly. Their cpus suck, but their gpus are usually close enough to charge more.

    It's taken NV almost 10yrs to get back to 2007 profits. AMD should quit the price war as it's only hurting them and R&D for cpu/gpu, driver support for dx11, gamestream competitor, etc etc. AMD hasn't made 2006 profits since, well 2006 and they had 3Q's of 58% margins then too! I can't believe management went 480 instead of VEGA first. Vega is 58% margins or more, while 480 is ~30% probably. Since NV still can't keep 1080 in stock there was plenty of room for AMD to be in there making a mint from Vega. Now it will be facing a 1080ti (serving gamers) and titan x (serving rich gamers+prosumers) and likely miss most of xmas sales with a small supply if it even hits before xmas. I believe AMD has good products in the pipeline (ZEN/Vega) but they are useless if they're late and pushed off for low margin stuff instead.

    Vega should be out now (GDDR5x!) and ZEN should be out for back to school and actually ZEN should have been out last year. But instead...We got new consoles (and old xbox1/ps4 started the delays of zen etc) and radeon 480. ~10-15% margins on the consoles (last ones started in single digits, moved to almost 15% according to amd), and probably 25-30% on 480 chips. Both bad decisions. AMD's next 12 months could have been a billion dollar year like NV (maybe even better as Zen has potential to pull down HEDT prices on the to end), but not now.

    Note for AMD fanboys, I'm telling AMD to make more money! I'm telling them to choose products that have HIGHER margins FIRST, and low margin stuff second! AMD can't afford to keep losing money year after year. I own a 5850...LOL. Still waiting for Vega to show me something, then I'll jump. But I have no qualms about saying I'm leaning NV. The days of me "donating" to AMD with an inferior product are over. I still have to rebuild my dad's PC (giving him my Devils Canyon chip and likely the gpu too), so AMD has still has a shot at that cpu too. I won't take "close enough". They have to win or I buy Intel again and Nvidia. The only thing I'll fudge on is price, not perf, watts or heat. You have to win at least one of those and tie the others basically or bust. I have no trouble paying AMD for a WINNER. Other than a few driver issues (not a problem for me as long as it's fixed quickly), I got exactly what I expected from my gpu (low heat/noise with perf).
  • Gastec - Tuesday, September 6, 2016 - link

    Half the price of Titan X? Sorry but if any 1080Ti comes out it will be $800 :)
  • Beerfloat - Friday, July 22, 2016 - link

    We look forward to the Anandtech review sometime in October, then.. 2018.

    I kid, I kid. Nice job on the GP104 Ryan.
