damianrobertjones - Tuesday, October 18, 2016 - link
I'm sure that you're all aware of this but, before you buy, at least consider buying a second-hand card from eBay after checking out reviews. You might get a faster card, cheaper, than these new ones. It'll also help with nvidia lowering prices.
prisonerX - Tuesday, October 18, 2016 - link
Or, to save even more money, you could draw your own 3D images with paper and pencil.
Morawka - Tuesday, October 18, 2016 - link
We'll show nvidia!!!! You think the Founders Edition price gouging rants were bad, just wait until we are all gaming on paper, and with a pencil controller!
ant6n - Wednesday, October 19, 2016 - link
Performance will be very poor, on the order of 3e-8 MFLOPs. Even if the paper and the pencil only cost a cent, price/performance will be very poor as well.
Sivar - Wednesday, October 19, 2016 - link
Finally, a serious analysis! :)
lolipopman - Tuesday, October 18, 2016 - link
Do you really think the people that buy these cards would be willing to spend on new power supplies and additional power connectors?
peterfares - Tuesday, October 18, 2016 - link
People who are trying to maximize performance out of a 75W PCI-E slot without external power will need to buy these, but there's bound to be plenty of people who do have the connector and are just looking at ~$120 cards.
Siana - Monday, November 28, 2016 - link
I'm running a 700W power supply left over from when I had a GTX 295, which I just hated; I couldn't get rid of the heat it generated. Incidentally, the power supply is a bit odd in that it reaches surprisingly high efficiency within the first 20% of load, so it isn't exactly asking to be loaded up all the way. I've been using a 750 Ti OC on it, which has a power connector and is unthrottled, and I'll be considering a midrange GPU for my next upgrade too. As a matter of fact, most midrange GPU users who built their PCs before this generation will likely have power supplies specced accordingly, because until recently you needed around 100W for even a barely reasonable card. Besides, there's nothing terribly wrong with feeding a mere 5A through a double Molex adapter, and it's cheap too.
Samus - Wednesday, October 19, 2016 - link
I don't know, the vanilla 1050 for $109 is going to be a monster card for the price as an entry-level gaming card.
But for people thinking about 1060s and up, yeah, consider a used GTX 980 on eBay for ~$200, or if you need more than 4GB VRAM, a used 980 Ti 6GB for ~$250, which is pretty much on par with a 1070 at 4K while the 1070 sells for nearly double.
firerod1 - Wednesday, October 19, 2016 - link
Seriously, it's faster than the 950 and that card launched at $159. So the 1050 Ti is a great value at $139 for faster-than-960 performance and one more GB of VRAM than the 1060 3GB lulz.
Nagorak - Sunday, October 23, 2016 - link
It's a couple years later. More performance for a lower price should be taken for granted.
webdoctors - Sunday, October 23, 2016 - link
Nagorak, I don't think Intel's CPU division got the memo...
BurntMyBacon - Wednesday, October 19, 2016 - link
@damianrobertjones
I'm not shy about calling nVidia (or anyone else for that matter) out on price gouging, but that's not really apparent here. Your advice is well stated for the current upper-level nVidia cards, but these cards actually seem to be priced relatively well (assuming performance lands where I hope it lands). Perhaps competition is helping here. Contrary to the advice on the upper-end cards, I'd actually recommend people buy these cards (if they fit their needs) rather than second-hand cards, to show nVidia that consumers are perfectly willing to pay a reasonable price.
The_Assimilator - Tuesday, October 18, 2016 - link
Gonna be interesting to see how close to the RX 470 the 1050 Ti gets.
psychobriggsy - Tuesday, October 18, 2016 - link
Well, consider that the RX 470 competes with the 3GB 1060 fairly well - give or take depending on game, engine and API used - and the 1050 Ti is 54% ((768 * 2 * 1392) / (1152 * 2 * 1709)) of that 1060 ...
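(For readers following the parenthetical arithmetic: it's the ratio of theoretical FP32 throughput, shader count * 2 FLOPs per clock * boost clock. A minimal sketch using the 1050 Ti and 1060 3GB figures quoted above, with the clocks taken as the quoted boost clocks in MHz:)

```python
def fp32_gflops(shaders: int, boost_mhz: float) -> float:
    """Theoretical FP32 rate: each shader does 2 FLOPs (one FMA) per clock."""
    return shaders * 2 * boost_mhz / 1000.0  # GFLOPS

gtx_1050_ti = fp32_gflops(768, 1392)    # ~2138 GFLOPS
gtx_1060_3g = fp32_gflops(1152, 1709)   # ~3938 GFLOPS
print(f"1050 Ti / 1060 3GB = {gtx_1050_ti / gtx_1060_3g:.0%}")  # ~54%
```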
Samus - Wednesday, October 19, 2016 - link
As usual, Radeon cards will outperform NVidia cards dollar for dollar in many games, but those quirky ass drivers... I recently ditched my R9 380 because twice a week my PC wouldn't resume the display, requiring me to blindly shut down or remote desktop in to reboot. Just got tired of the BS. Had a problem years ago with my Radeon 7870 causing BSODs.
I've never had issues on that level with NVidia drivers. And all those people who lost entire motherboards/power supplies from the RX 480s... I mean, that is a fail of Samsung Note 7 proportions.
vladx - Wednesday, October 19, 2016 - link
If only Polaris cards were sold at their MSRP price or close to that, then I'd agree with you. Sadly, reality is a whole other bargain.
lolipopman - Tuesday, October 18, 2016 - link
I really can't see the Ti being better than the 470.
CiccioB - Tuesday, October 18, 2016 - link
It won't. That is why it costs so much less.
BrokenCrayons - Tuesday, October 18, 2016 - link
We won't know until the RX 470 and 460 reviews are posted so we have something to compare the 1050 with when it launches. :D
firerod1 - Wednesday, October 19, 2016 - link
It's going to be faster than a 960, which is about 25-30% slower than an RX 470.
nathanddrews - Tuesday, October 18, 2016 - link
Broken record:
I hope we get a few low-profile/SFF versions of the 1050...
yhselp - Tuesday, October 18, 2016 - link
Broken record indeed, but here's hoping they actually do it.
phoenix_rizzen - Tuesday, October 18, 2016 - link
Yes! The latest Nvidia GPU I could find in a LP format was a GT730 (and there are 4 different generations of chipsets with that model number!!). Having something more current would be nice.
DanNeely - Tuesday, October 18, 2016 - link
Does it also need to be single slot? There're half-height 2-slot 750s available. A single-slot half-height card's going to have to wait for a lower-power model. I suspect a 40W TDP is feasible in that form factor, since that level of GPU can be fit into laptops with coolers similar in size to what can fit on that small of a card.
Ej24 - Thursday, October 20, 2016 - link
55W is the typical cutoff for that form factor. You can find 750 Tis and R7 250E/X (HD 7750) cards in that size. They're unbelievably tiny.
Samus - Wednesday, October 19, 2016 - link
Haha yeah, I'm pretty sure the 730 is identical to the 430, just slightly higher clocked and equipped with a UEFI BIOS (EVGA has BIOS updates available for 6xx series owners to add UEFI, basically the 7xx-equivalent BIOS).
bill.rookard - Tuesday, October 18, 2016 - link
What they need to do is get over this dual-width, full-height card format for these entry-level cards. For those with HTPCs, we need half-height dual-width cards, full-height single-slot cards, or (ideally, if it can be done!) single-slot half-height cards.
I had an older Quadro card which was full-height single-slot and had a six-pin, which meant it was rated for 150-ish watts. If they can do that, and these cards draw half the wattage (or less, considering the 750 Ti), why can't they give us a good (even if downclocked slightly) single-slot half-height card?
BrokenCrayons - Tuesday, October 18, 2016 - link
Yup, I agree completely. I want a single slot card (full height is fine, but half height and single slot is ideal). Knocking the GPU clock back to around 1GHz would be perfectly acceptable to achieve that goal, but I would like 4GB of GDDR5 on a 128-bit interface and 768 cores.
At first brush, it doesn't seem like it's much quicker than a GTX 950. Oh I get the idea that the target is people with 6/750s, but the wattage is too high to be a direct replacement and comparing to last generation is still relevant even if it doesn't paint the 1050 in a very good light.
DanNeely - Tuesday, October 18, 2016 - link
Did you get your wattage comment backwards? The 75W 1050 is too low in power vs the 90W 950 to be a direct replacement (unless it scales really well with more power anyway). And most GPU comparisons for replacement are done against 2-generation-old cards, because single-generation gains are generally not large enough to be worth it unless you're pushing the limits of the possible at the very top.
BrokenCrayons - Tuesday, October 18, 2016 - link
Sorry that I wasn't more clear. NV is suggesting the 1050 is a replacement for the 750. The problem is the 1050 is 75W whereas the 750 was either 60 or 55W. My argument is the increase in TDP doesn't make it a direct replacement. Discussing the 950 relative to the 1050 was more about performance than TDP. I should probably have transitioned better between the two topics.
LukaP - Tuesday, October 18, 2016 - link
Because the R&D vs. profit math is not favorable. Smaller form factors are a niche within a niche. (Let's face it, HTPC building isn't very common outside enthusiast circles.)
nathanddrews - Tuesday, October 18, 2016 - link
HTPC is niche, but SFF is huge in enterprise.
bigboxes - Tuesday, October 18, 2016 - link
I built a dedicated HTPC this January. I bought a full-sized ATX case from Lian-Li so I could do whatever I want. In my case, I went with a GTX 950. Not sure why you insist on your chosen form factor when it doesn't fit your desired needs.
dragonsqrrl - Tuesday, October 18, 2016 - link
A card that requires additional power from a six-pin connector doesn't necessarily draw the maximum rated W from that configuration. Absent any further information, the presence of a six-pin simply indicates that the official TDP probably falls somewhere between 75 and 150W.
I don't know exactly why Quadros consistently offer higher-TDP single-slot cards, but I think it's important to keep in mind that Nvidia has complete control over their Quadro lineup. The entire stack ships with reference coolers designed by Nvidia. In contrast, that responsibility usually falls on Nvidia's AIB partners for GeForce cards, particularly lower-end (lower TDP) cards.
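(A rough sketch of the power-budget arithmetic behind that 75-150W range, assuming the standard PCIe figures of 75W from the x16 slot, 75W per 6-pin and 150W per 8-pin auxiliary connector:)

```python
# Rough PCIe board-power budget: 75W from the x16 slot itself,
# plus 75W per 6-pin and 150W per 8-pin auxiliary connector.
def max_board_power(six_pin: int = 0, eight_pin: int = 0) -> int:
    return 75 + six_pin * 75 + eight_pin * 150

print(max_board_power())                        # 75W  -> GTX 1050 / 1050 Ti class
print(max_board_power(six_pin=1))               # 150W -> why a 6-pin implies a 75-150W TDP
print(max_board_power(six_pin=1, eight_pin=1))  # 225W
```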
MrSpadge - Tuesday, October 18, 2016 - link
I suspect it's because they assume it's noisy in the office anyway, so a loud card doesn't matter much. And that they won't be constantly under as hard a load as in games.
MrSpadge - Tuesday, October 18, 2016 - link
Just because they're not high-end cards doesn't mean they're easy to cool. 75 W with a single-slot cooler is going to be quite loud under sustained load - which doesn't sell well today.
psychobriggsy - Tuesday, October 18, 2016 - link
I expect the 2GB 1050 to compete well with the 2GB 460, often beating it.
I expect the 4GB 1050 Ti to beat the 4GB 460 in most things.
I think AMD needs to drop 460 prices by $10, and strongly consider a fully enabled Polaris 11 SKU (RX 465) with 1024 shaders, hopefully clocked over 1300MHz, to compete with the 1050 Ti.
xprojected - Tuesday, October 18, 2016 - link
Any particular reason the GTX 950 isn't mentioned at all? It's still widely available, and the article reads like it never existed.
lolipopman - Tuesday, October 18, 2016 - link
It's not popular at all.
Ryan Smith - Tuesday, October 18, 2016 - link
Because it is a cut-down GM206. Despite the naming convention of the retail parts, the underlying GP107 GPU has more to do with GM107.
jordanclock - Tuesday, October 18, 2016 - link
The 950 had a launch MSRP of $160 and a TDP of 90W, both of which are a good chunk higher than the 750/750 Ti. I'm not sure the 950 was aimed quite at the same segment as the 750 and 1050.
DanNeely - Tuesday, October 18, 2016 - link
I'd expect at least one more wave of Pascal launches in another month or three: the ~40W GP108. Less for the desktop (although there's a market for small fanless GPUs in the HTPC space), but on mobile a 30-40W TDP is a lot easier to fit into a reasonably sized laptop than 60 or 75W.
CiccioB - Tuesday, October 18, 2016 - link
There's really no reason, nor any profit, in going below 640 shaders. As you can see, most of the space on the die is used for things other than the 3D pipeline, so reducing it further won't give any cost-effective advantage.
You may expect this same GPU to get lower frequencies for a lower TDP, and possibly a further cut-down version. I don't think there will be a smaller complete GPU than this coming from nvidia.
DanNeely - Tuesday, October 18, 2016 - link
75% of the die size for 60% of the GPU cores (full GP107 vs full GP106) suggests that GP107 is half GPU cores, half everything else. Unless yields are awful, that still leaves room for a smaller low-power model. Significantly lower power is definitely doable: all the video en/decode hardware might take a lot of die area, but H.265 is being implemented at smartphone power levels, so that part of the die isn't where all the power is being consumed. Lastly, a notional GP108 would probably be able to fit more GPU cores for the die area than this scaling estimate suggests, because at that price/performance level it'd use DDR3/4 RAM, which needs a significantly smaller memory controller than GDDR5. (It would probably be VRAM-bandwidth bottlenecked in a number of scenarios, but that's never prevented prior generations of cards at this relative performance level from being commercial successes.)
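(To check the "half and half" estimate implied by those two figures: if a fraction f of GP106's die is shader cores and 1-f is everything else, then shrinking only the cores by 0.6x should shrink the whole die to 0.75x. A quick sketch taking the 75%/60% figures above at face value:)

```python
# Solve 0.75 = 0.6*f + (1 - f)  =>  f = (1 - 0.75) / (1 - 0.6)
die_ratio, core_ratio = 0.75, 0.60
f = (1 - die_ratio) / (1 - core_ratio)           # cores' share of the GP106 die
core_share_gp107 = core_ratio * f / die_ratio    # cores' share of the GP107 die
print(f"GP106: ~{f:.0%} cores, GP107: ~{core_share_gp107:.0%} cores")
# -> GP106: ~62% cores, GP107: ~50% cores ("half cores, half everything else")
```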
Namisecond - Saturday, October 22, 2016 - link
I suspect the H.265 support on the smartphone chips may be a partial implementation. Also, looking at the pricing of the final products using GP107, a theoretical GP108 would have to be priced significantly lower. There aren't a lot of recent-technology video cards in that price range any more, as Intel's integrated graphics has already eaten into that market space.
zangheiv - Tuesday, October 18, 2016 - link
The main reason these cards exist is because Intel's integrated GPU is crap. Once a decent 14nm APU hits the mainstream, these 'stand alone' entry-level cards will go obsolete unless they can SLI or Crossfire with the integrated GPU.
Murloc - Tuesday, October 18, 2016 - link
We still have to see about that APU, and even then, with DX12 it's supposedly possible to do multi-GPU with cards from different brands all working together, so it still plays a role.
Yojimbo - Tuesday, October 18, 2016 - link
I wonder if this use of the Samsung foundry has something to do with the settlement of the patent suits. I also wonder if this means Samsung will similarly be using NVIDIA's services for something in the near future.
CiccioB - Tuesday, October 18, 2016 - link
Maybe nvidia Tegra technology in a Samsung SoC. That would provide a really powerful SoC.
beginner99 - Wednesday, October 19, 2016 - link
Samsung is much less vertically integrated than one thinks. The division that operates the fabs bills their smartphone division just like a customer; the only difference is that they get first access. So no, I do not think this has anything to do with the patent stuff.
The reason is probably price (cheaper) and availability, as TSMC 16nm capacity is probably gobbled up mostly by Apple and the likes, and NV can't compete there with a low-margin product such as entry-level GPUs.
slipdisk - Tuesday, October 18, 2016 - link
Any news on a 1050 or 1050ti mobile part?
lolipopman - Tuesday, October 18, 2016 - link
No mobile parts. Desktop ones will be used in laptops.
lolipopman - Tuesday, October 18, 2016 - link
What's with the wattage?
Also I'm absolutely loving the price!
MrSpadge - Tuesday, October 18, 2016 - link
It's 75!
Intel999 - Tuesday, October 18, 2016 - link
Seems Nvidia is at a bit of a disadvantage in regards to pricing power, as the GP107 die size of 135mm² is considerably larger than the RX 460's, which comes in at 123mm². So at full wafer yield AMD will have an 11% advantage over Nvidia (492 dice vs. 445 dice per wafer). AMD is logically a lot closer to full yield right now, since they have been producing the 460 for two months.
Of course, if the 1050 performs considerably better than the 460, Nvidia could justify higher prices. We shall see soon enough once reviews come out.
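(For anyone wanting to sanity-check those candidate-die counts: a common first-order estimate for gross dice per 300mm wafer is pi*(d/2)^2/A - pi*d/sqrt(2*A). Exact totals depend on edge-exclusion and scribe-line assumptions, so treat the 492/445 figures above as calculator outputs rather than gospel, but the roughly 10% area advantage follows directly from 123mm² vs 135mm². A rough sketch:)

```python
import math

def dice_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """First-order gross-die estimate; ignores edge exclusion and scribe lines."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

polaris11 = dice_per_wafer(123)   # ~514 candidate dice
gp107     = dice_per_wafer(135)   # ~466 candidate dice
print(polaris11, gp107, f"area advantage ~{135/123 - 1:.0%}")
```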
Yojimbo - Tuesday, October 18, 2016 - link
The RX 460 is the highest binned part coming from its die. The fully enabled GP107 is the 1050 Ti, and it is faster than the RX 460 and priced higher. The 1050 is a product of a lower-tier bin from its die. Given that there is a higher-priced part coming from the die, your cost analysis of the 1050 is flawed; a more detailed analysis would be necessary. To get a good estimate, besides taking the 1050 Ti into consideration, mobile parts and yield curves (particularly since they are being made by two different foundries) would need to be considered, not just die size.
Yojimbo - Tuesday, October 18, 2016 - link
To be more specific, I think there are two issues. Firstly, since the 1050 is cut down, it's possible that a greater percentage of dies from a wafer qualify to be made into a 1050 than do dies of 460s from their respective wafer. Secondly, the cost of the wafer is spread over all dies produced from that wafer, but it isn't properly spread simply by the area of the wafer used for each product. Rather, total margin per wafer must be considered for the entire product mix arising from that wafer. Butchers can afford to sell bones, suet, and other scraps for less than $2 a pound because New York Strips go for $15 a pound. If all usable parts of the cow went for $2 a pound, it probably wouldn't pay to raise beef.
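(A toy illustration of that second point, with entirely made-up wafer cost, bin splits and prices chosen purely for the example - none of these numbers come from NVIDIA or from the comment above:)

```python
# Hypothetical wafer: 450 gross dice, some binning as the full part,
# some as the salvage part, the rest scrap. All numbers invented.
wafer_cost = 5000.0                            # assumed $ per processed wafer
bins = {"full die (1050 Ti-class)":   (200, 60.0),   # (dice, assumed ASP in $)
        "salvage die (1050-class)":   (180, 45.0),
        "scrap":                      (70,   0.0)}

revenue = sum(n * price for n, price in bins.values())
print(f"revenue/wafer ${revenue:.0f}, margin/wafer ${revenue - wafer_cost:.0f}")
# The salvage part only has to clear its incremental cost to be worth selling;
# allocating wafer cost purely by die area would miss that.
```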
CiccioB - Tuesday, October 18, 2016 - link
You are perfectly right. Add to this the fact that x07 chips from nvidia are always the first ones produced to test a new architecture, so nvidia has probably known how to produce GP107 well for longer than AMD has with Polaris 11 - which was shown doing miracles last December but only went into production recently, with awful power draw (wasn't it supposed to consume less than a cut-down GM108 while performing better?).
Probably nvidia made GP107 at both factories to see the differences, and decided to go to mass production at Samsung for various reasons: cost, wafer availability (there have to be many, as they are going to sell tons of these boards), performance not too distant from 16nm, and many others we do not know (like letting Samsung gain experience with GPU production they may use in future in their own SoCs).
By the way, as you said, on the same process and almost the same area, and with less power, nvidia's solution is going to crush Polaris 11 in all respects.
Thus Polaris turns out to be another letdown that is not helping AMD make money (very low margins, as it was with GCN, which could not match Kepler and Maxwell efficiency in both perf/W and perf/area).
MrSpadge - Tuesday, October 18, 2016 - link
No, the RX 460 also uses a salvage part... due to whatever reason. They might as well replace it with a fully enabled RX 465 for $10 more now, otherwise the GTX 1050 seems to be the clear winner. But that's not really AMD's style (e.g. R9 285).
azok - Tuesday, October 18, 2016 - link
The obvious question would be 'can I play Crysis at 60 fps on high settings?'
HollyDOL - Thursday, October 20, 2016 - link
Ofc... even CPU graphics can do that... as long as your resolution is low enough ;-)
Meteor2 - Tuesday, October 18, 2016 - link
Isn't it a bit odd that the $199 1060 comes with only 3 GB of RAM, but the 1050 Ti comes with 4? And isn't the 1050 going to be hamstrung with only 2 GB?
Yojimbo - Tuesday, October 18, 2016 - link
The 1050 Ti probably doesn't benefit from 4 GB of RAM over 2 GB. Certainly 3 GB would be enough for it, just as it is enough for the 1060. But with the bus widths NVIDIA used, it needed to be 2 or 4 GB for the 1050 and 3 or 6 GB for the 1060. I don't think DRAM chips exist in high volume with the combination of density and width that would allow 4GB of VRAM on a 192-bit bus.
The amount of DRAM on graphics cards is probably driven by competition, to allay consumer anxiety, to capacities higher than necessary for real usage.
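(The bus-width constraint here comes down to simple multiplication: each GDDR5 chip has a 32-bit interface, so a 128-bit card uses 4 chips and a 192-bit card uses 6, and with the 4Gb (0.5GB) and 8Gb (1GB) chip densities common at the time the natural capacities fall out as below. A small sketch, assuming one chip per 32-bit channel:)

```python
# Natural VRAM capacities for a given bus width, one 32-bit GDDR5 chip per channel.
def capacity_options(bus_width_bits, chip_gb=(0.5, 1.0)):
    chips = bus_width_bits // 32
    return [chips * size for size in chip_gb]

print(capacity_options(128))  # [2.0, 4.0] GB -> GTX 1050 / 1050 Ti
print(capacity_options(192))  # [3.0, 6.0] GB -> GTX 1060 3GB / 6GB
```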
neblogai - Wednesday, October 19, 2016 - link
You are wrong here. The GTX 1060 3GB is a glass cannon - there are already games that run horribly on it at good settings at 1080p because of low VRAM (Forza Horizon 3). Buying a card with 2 or 3GB of memory is only OK if you plan to replace it soon, play just esports titles, or only certain AAA titles that happen to use little VRAM. 2GB of VRAM is very limiting - just check the Doom benchmarks, 4GB vs 2GB, on the RX 460.
DanNeely - Wednesday, October 19, 2016 - link
The 1050's less than half as fast as the 1060, so having 2/3rds of the memory capacity might be enough for what it can do. Definitely something to look out for in testing once the cards start to ship.
Namisecond - Saturday, October 22, 2016 - link
I'd estimate the 2GB 1050 would run at around half the speed of the 3GB 1060.
Namisecond - Saturday, October 22, 2016 - link
The GTX 1060 3GB is a compromise card. You get it if you wanted the 1060 6GB but don't want to pay the extra $50. Honestly, I blame Forza Horizon 3's high VRAM use issue on the devs not bothering to optimize for a wider set of hardware. How many other games do you know of that suffer from low performance because of a lack of VRAM?
neblogai - Sunday, October 23, 2016 - link
Like I said, Doom also runs way better on the 4GB RX 460 than the 2GB. And the latest release, Gears of War 4, on Ultra settings looks much worse on 2-3 GB cards vs the same cards with 4-6GB: http://www.techspot.com/review/1263-gears-of-war-4...
So the library of examples is getting bigger.
Namisecond - Saturday, October 22, 2016 - link
3GB should be fine for the 1080p gaming crowd, aside from the few outliers that people are making a stink about. If you're concerned about having EVERY base covered, you can pay the extra $50 for the 6GB models.
As for the 2GB memory "hamstringing" the 1050: at that price range, I don't think that's even a consideration. I would like to remind you that the GTX 960 was a 2GB part at the beginning; the 4GB parts only showed up 6 months into the product cycle. There really wasn't much griping about the 2GB memory size, except from the people who wanted to SLI their 960s.
Morawka - Tuesday, October 18, 2016 - link
I think Nvidia wanted to see how 14nm performed at the desktop performance end. Granted, a 1050 is not that powerful, but it has voltages going through 14nm that have never gone through it before.
This will let nvidia know which fab to choose for Volta.
CiccioB - Tuesday, October 18, 2016 - link
Given the limited wafer availability TSMC has, they may just be looking for a cheap(er) way to produce small but high-volume GPUs on a less refined (but, as said, cheaper) process.
Big, power-hungry chips done on the better process, and less power-critical ones done on the cheaper one.
Not that bad, seeing as nvidia isn't risking anything by doing so, given the competition is using this less refined process for its whole production.
fanofanand - Tuesday, October 18, 2016 - link
With the exception of this statement: "Following their near-perfect top-down launch schedule that started with GeForce GTX 1080 in May"
I really enjoyed this article. While I think I understand what you mean by near-perfect, it reads awfully fanboyish. Other than that, this is as good a review as could have been done without the cards in hand, very well written Ryan. Thanks for the info!
Haawser - Tuesday, October 18, 2016 - link
The elephant in the room here is that a lot of these <75W cards are going to be used primarily in HTPCs. But how many 4K HDR TVs are you ever going to see with G-Sync? A 460 with 'zero fan' technology makes far more sense than either Nvidia card for an HTPC. And it will be better at DX12/Vulkan too.
Namisecond - Saturday, October 22, 2016 - link
A better question might be how many people are actually going to buy 4K TVs, period, much less 4K monitors with G-Sync or FreeSync. An integrated Intel GPU makes far more sense than a large chunky gaming card, as it allows you far more freedom in chassis form factors as well as the likelihood of a completely fanless solution. An "HTPC" tower with a gaming card is just a gaming PC you watch movies on.
Tunnah - Tuesday, October 18, 2016 - link
Cards expected 25/10/2016.
Review expected 25/10/2017
(Sorry couldn't resist hehe)
Leyawiin - Tuesday, October 18, 2016 - link
So in a nutshell, the GTX 1050 is actually about the same performance (maybe a bit more) as the 90W GTX 950 (which isn't mentioned in the article). Probably around the same price too, with the new-release Nvidia tax. The GTX 1050 Ti is going to be about like a GTX 960 4GB, only more efficient and cheaper, and again, maybe slightly faster. Seems like everything new below the RX 470 and GTX 1060 3GB is kind of ho-hum.
wintermute000 - Tuesday, October 18, 2016 - link
I dunno, GTX 960 performance at 75W (so it can be LP or single slot) is really neat IMO.
docbones - Tuesday, October 18, 2016 - link
Will be interesting to see if any manufacturer puts together a dual 1050 Ti to see about coming in between the 1060 / 1070 in performance. (Depending of course on how these chips perform.)
ajp_anton - Tuesday, October 18, 2016 - link
Dual 1050 Ti will probably perform the same as a 1060, in games where SLI actually works. And it would cost more, and draw more power.
wolfemane - Tuesday, October 18, 2016 - link
I seriously doubt either the 1050 Ti or the 1050 will be SLI-enabled. Seeing how the 1060 isn't capable of SLI, why would you think either 1050 would be?
TallestJon96 - Tuesday, October 18, 2016 - link
Clockspeeds and OC potential are really what's in question here. Given enough power, a 1050 should be able to hit 2GHz if it were on the same node. But this is Samsung and not TSMC, so it will be interesting to see how much OC room there is.
I got a 750 Ti more than 3 years ago, and it took this long to really replace it. It was a card ahead of its time, and was very impressive.
The 1050 Ti really is the way to go. You get ~2.1 teraflops, 4GB and low power consumption for $140. The 128-bit bus is kinda small, but even still this card should outperform consoles. Plus Pascal's memory compression helps some.
The 1050 would be really good if it wasn't for the claustrophobic 2GB.
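(On the "128-bit bus is kinda small" point, raw memory bandwidth is just bus width times per-pin data rate. A quick sketch, assuming the commonly quoted 7 Gbps GDDR5 for the 1050 Ti and, for comparison, an 8 Gbps / 192-bit configuration for the 1060 - treat both data rates as assumptions rather than confirmed specs:)

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: (bus width in bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(128, 7.0))   # 112.0 GB/s -> GTX 1050 Ti (assumed 7 Gbps GDDR5)
print(bandwidth_gbs(192, 8.0))   # 192.0 GB/s -> GTX 1060 (assumed 8 Gbps GDDR5)
```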
Namisecond - Saturday, October 22, 2016 - link
The 1050 isn't for enthusiasts; the 2GB of memory is fine for the $110 MSRP.
beck2050 - Tuesday, October 18, 2016 - link
Nvidia rolls on. They have the marketing power in the channels to sell loads of these, no problem.
Eloymm - Tuesday, October 18, 2016 - link
The next step up from integrated GPUs? Really? Are you kidding us? NVIDIA is claiming that the 1050 is showing a 61.2 fps average at 1080p medium settings in games like The Division, Fallout 4 and GTA V, so imagine the 1050 Ti. And you still have to count updated drivers in the next few months and the overclock margin if there's a PCIe power connector.
Maybe I'm lost, but can Skylake GPUs run that fast or offer performance even close to a 960 or over? The 1050 Ti is showing improved performance over the 960 with a lower TDP (75W vs 120W), 4GB VRAM, the Pascal architecture and a price tag of $140, when the 960 was like $200 or so.
However, I agree there are still some mysteries to uncover about GP107 and Samsung 14nm.
Meteor2 - Wednesday, October 19, 2016 - link
So... in what way are these not the next step up from iGPUs? Is there a card at less than $109?
meorah - Thursday, October 20, 2016 - link
Technically correct, but "next step up from integrated" does have a negative connotation that probably isn't well-deserved if you're running FO4 at 60 FPS at 1080p.
It's a pretty big step up, to the point where you wouldn't actually want to use it as a "step" if you saw it in real life.
Namisecond - Saturday, October 22, 2016 - link
Heh heh, I actually managed to run Fallout 4 on an Iris Pro 6200 at 1080p (Broadwell). I think I got up to about 25fps-ish on low-ish (custom) settings...
Watch your step, that next step up... it's a really big step. :3
Ro_Ja - Tuesday, October 18, 2016 - link
Cool! I now have a reason to buy the GTX 950 LOL
jabbadap - Wednesday, October 19, 2016 - link
"Feature size is a red herring here, and instead the significance of this deal is that NVIDIA has not used a fab other than TSMC for GPUs for a long time. In fact we'd have to go back to 2003 to find an NVIDIA GPU fabbed somewhere else, when NVIDIA tapped IBM to help fab the ill-fated NV3x series (GeForce FX)."
Actually there are some more recent than that: Nvidia manufactured some of the 9000-series cards at UMC's fabs, i.e. if I remember correctly the G96 was on UMC's 65nm.
cbm80 - Friday, October 21, 2016 - link
I don't understand why the author refers to the 750 as the "previous" model. The previous model was the 950.
Namisecond - Saturday, October 22, 2016 - link
He's talking about GPUs that end with x07, like GM107 and GP107. The 950 is a cut down GM206 chip, therefore not a direct successor.
webdoctors - Sunday, October 23, 2016 - link
These are excellent SKUs for the average joe. The folks complaining about the 2GB VRAM need to understand the ppl buying these are the same ones getting humble bundle packs that consist of games 3-5 years old. In doing so, they can build a console grade library of games for really cheap for the kids when paired with a refurb'd Dell desktop.