Following up on last week’s launch of NVIDIA’s new budget video card, the GeForce GTX 1650, today we’re taking a look at our first card, courtesy of Zotac. Coming in at $149, the newest member of the GeForce family brings up the rear of the GeForce product stack, offering NVIDIA’s latest architecture in a low-power, 1080p-with-compromises gaming video card with a lower price to match.

As the third member of the GeForce GTX 16 series, the GTX 1650 directly follows in the footsteps of its GTX 1660 predecessors. The underlying TU117 GPU is a newer, smaller chip built specifically for these sorts of low-end cards, and it is designed around the same leaner and meaner philosophy as TU116 before it. This means it eschews the dedicated ray tracing (RT) cores and the AI-focused tensor cores in favor of a smaller, easier-to-produce chip that retains the all-important core Turing architecture.

The net result of this process, the GeForce GTX 1650, is a somewhat unassuming card if we’re going by the numbers, but an important one for NVIDIA’s product stack. Though its performance is pedestrian by high-end PC gaming standards, the card fills out NVIDIA’s lineup by offering a modern Turing-powered card under $200. Meanwhile for the low-power video card market, the GTX 1650 is an important shot in the arm, offering the first performance boost for this hard-capped market in over two years. The end result is that the GTX 1650 will serve many masters, and as we’ll see, it serves some better than others.

NVIDIA GeForce Specification Comparison
  GTX 1650 GTX 1660 GTX 1050 Ti GTX 1050
CUDA Cores 896 1408 768 640
ROPs 32 48 32 32
Core Clock 1485MHz 1530MHz 1290MHz 1354MHz
Boost Clock 1665MHz 1785MHz 1392MHz 1455MHz
Memory Clock 8Gbps GDDR5 8Gbps GDDR5 7Gbps GDDR5 7Gbps GDDR5
Memory Bus Width 128-bit 192-bit 128-bit 128-bit
VRAM 4GB 6GB 4GB 2GB
Single Precision Perf. 3 TFLOPS 5 TFLOPS 2.1 TFLOPS 1.9 TFLOPS
TDP 75W 120W 75W 75W
GPU TU117 (200 mm2) TU116 (284 mm2) GP107 (132 mm2) GP107 (132 mm2)
Transistor Count 4.7B 6.6B 3.3B 3.3B
Architecture Turing Turing Pascal Pascal
Manufacturing Process TSMC 12nm "FFN" TSMC 12nm "FFN" Samsung 14nm Samsung 14nm
Launch Date 4/23/2019 3/14/2019 10/25/2016 10/25/2016
Launch Price $149 $219 $139 $109
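The single precision figures in the table can be derived directly from the shader counts and boost clocks, since each CUDA core executes one fused multiply-add (2 FLOPs) per clock. A quick sanity check:

```python
# FP32 throughput = CUDA cores x boost clock x 2 FLOPs/clock (one FMA)
def fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    return cuda_cores * boost_mhz * 1e6 * 2 / 1e12

for name, cores, boost in [("GTX 1650", 896, 1665), ("GTX 1660", 1408, 1785)]:
    print(f"{name}: {fp32_tflops(cores, boost):.1f} TFLOPS")
# GTX 1650: 3.0 TFLOPS
# GTX 1660: 5.0 TFLOPS
```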
 

Right off the bat, it’s interesting to note that the GTX 1650 is not using a fully-enabled TU117 GPU. Relative to the full chip, the version going into the GTX 1650 has had a TPC fused off, which costs the chip 2 SMs/64 CUDA cores. The net result is that the GTX 1650 is a very rare case where NVIDIA doesn’t put their best foot forward at launch – the company is essentially sandbagging – a point I’ll loop back to in a bit.

Within NVIDIA’s historical product stack, it’s somewhat difficult to place the GTX 1650. Officially it’s the successor to the GTX 1050, which itself was a similar cut-down card. However, the GTX 1050 launched at $109, whereas the GTX 1650 launches at $149, a hefty 37% generation-over-generation price increase. Consequently, you could be forgiven for thinking that the GTX 1650 feels a lot more like the GTX 1050 Ti’s successor, as its $149 price tag is very comparable to the GTX 1050 Ti’s $139 launch price. Either way, generation-over-generation, Turing cards have been more expensive than the Pascal cards they replaced, and the low prices of these budget cards really amplify the difference.

Diving into the numbers then, the GTX 1650 ships with 896 CUDA cores enabled, spread over 2 GPCs. On paper this is not all that big of a step up from the GeForce GTX 1050 series, but Turing’s architectural changes and increased graphics efficiency mean that the little card should pack a bit more of a punch than the raw specs suggest. The CUDA cores themselves are clocked a bit lower than usual for a Turing card, however, with the reference-clocked GTX 1650 boosting to just 1665MHz.

Rounding out the package are 32 ROPs, which are part of the card’s 4 ROP/L2/memory clusters. This means the card is being fed by a 128-bit memory bus, which NVIDIA has paired with GDDR5 memory clocked at 8Gbps. Conveniently enough, this gives the card 128GB/sec of memory bandwidth, about 14% more than the last-generation GTX 1050 series cards got. Thankfully, while NVIDIA hasn’t done much to boost memory capacities on the other Turing cards, the same is not true for the GTX 1650: the minimum here is now 4GB, instead of the very constrained 2GB found on the GTX 1050. 4GB isn’t particularly spacious in 2019, but the card shouldn’t be quite so desperate for memory as its predecessor was.
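The bandwidth math here is simple enough to verify: bus width in bits times the per-pin data rate, divided by 8 bits per byte.

```python
# Memory bandwidth = bus width (bits) x data rate (Gbps/pin) / 8 bits per byte
def mem_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

gtx_1650 = mem_bandwidth_gbs(128, 8)  # 128.0 GB/s
gtx_1050 = mem_bandwidth_gbs(128, 7)  # 112.0 GB/s
print(f"Uplift over GTX 1050: {gtx_1650 / gtx_1050 - 1:.0%}")  # ~14%
```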

Overall, on paper the GTX 1650 is set to deliver around 60% of the performance of the next card up in NVIDIA’s product stack, the GTX 1660. And in practice, what we'll find is a little better than that, with the new card offering around 65% of a GTX 1660's performance.

Meanwhile, let’s talk about power consumption. With a (reference) TDP of 75W, the smallest member of the Turing family is also the lowest-power one. 75W cards have long been a staple of the low-end video card market – in NVIDIA’s case, most xx50 cards – as a 75W TDP means that an additional PCIe power connector isn’t necessary, and the card can be powered solely off of the PCIe bus.

Overall these cards satisfy a few niche roles that add up to a larger market. The most straightforward of these roles is the need for a video card for basic systems where a PCIe power cable isn’t available, as well as low-power systems where a more power-hungry card isn’t appropriate. For enthusiasts, the focus tends to turn specifically towards HTPC systems, as these sorts of low-power cards are a good physical fit for those compact systems, while also offering the latest video decoding features.

It should be noted however that while the reference TDP for the GTX 1650 is 75W, board partners have been free to design their own cards with higher TDPs. As a result, many of the partner cards on the market are running faster and hotter than NVIDIA’s reference specs in order to maximize their cards’ performance, with TDPs closer to 90W. So anyone specifically looking for a 75W card to take advantage of its low power requirements will want to pay close attention to card specifications to make sure it’s actually a 75W card, like the Zotac card we’re reviewing today.

Product Positioning & The Competition

Shifting gears to business matters, let’s talk about product positioning and hardware availability.

The GeForce GTX 1650 is a hard launch for NVIDIA, and typical for low-end NVIDIA cards, there are no reference cards or reference designs to speak of. In NVIDIA parlance this is a "pure virtual" launch, meaning that NVIDIA’s board partners have been doing their own thing with their respective product lines. These include a range of coolers and form factors, as well as the aforementioned factory overclocked cards that require an external PCIe power connector in order to meet the cards' greater energy needs.

Overall, the GTX 1650 launch has been a relatively low-key affair for NVIDIA. The Turing architecture/feature set has been covered to excess at this point, and the low-end market doesn't attract the same kind of enthusiast attention as the high-end market does, so NVIDIA has been acting accordingly. On our end we're less than thrilled with NVIDIA's decision to prevent reviewers from testing the new card until after it launched, but we're finally here with a card and results in hand.

In terms of product positioning, NVIDIA is primarily pitching the GTX 1650 as an upgrade for the GeForce GTX 950 and its same-generation AMD counterparts, matching the upgrade cadence we’ve seen throughout the rest of the GeForce Turing family. As we'll see in our benchmark results, the GTX 1650 offers a significant performance improvement over the GTX 950, while the uplift over the price-comparable GTX 1050 Ti is similar to other Turing cards at around 30%. Meanwhile, one particular advantage it has over past-generation cards is that with its 4GB of VRAM, the GTX 1650 doesn't struggle nearly as much in more recent games as the 2GB GTX 950 and GTX 1050 do.

Broadly speaking, the GTX xx50 series cards are meant to be 1080p-with-compromises cards, and the GTX 1650 follows this trend. It can run some games at 1080p at maximum image quality – including some relatively recent games – but in more demanding games it becomes a tradeoff between image quality and 60fps framerates, something the GTX 1660 doesn't really experience.

Unusually for NVIDIA this year, the company is also sweetening the pot a bit by extending their ongoing Fortnite bundle to cover the GTX 1650. The bundle itself isn’t much to write home about – some game currency and skins for a game that’s free to begin with – but it’s an unexpected move, since NVIDIA wasn’t offering this bundle on the other GTX 16 series cards when they launched.

Finally, let’s take a look at the competition. AMD is riding out the tail-end of the Polaris-based Radeon RX 500 series, so this is what the GTX 1650 will be up against. AMD’s most comparable card in terms of total power consumption is the Radeon RX 560, a card that is simply outclassed by the far more efficient GTX 1650. The GTX 1050 series already outperformed the RX 560 here, so the GTX 1650 largely serves to pile on NVIDIA’s efficiency lead, leaving AMD out of the running for 75W cards.

But this doesn’t mean AMD should be counted out altogether. Instead of the RX 560, AMD has set up the Radeon RX 570 8GB against the GTX 1650, which makes for a very interesting battle. The RX 570 is still a very capable card, especially versus the lower performance of the GTX 1650, and its 8GB of VRAM is further icing on the cake. However I’m not entirely convinced that AMD and its partners can hold 8GB card prices to $149 or less over the long run, in which case the competition may end up shifting towards the 4GB RX 570 instead.

In any case, AMD’s position is that while they can’t match the GTX 1650 on features or power efficiency – and bear in mind that the RX 570 is rated to draw almost twice as much power – they can match it on pricing and beat it on performance. As long as AMD is willing to hold the line here, this is a favorable matchup for them on a pure price/performance basis in current-generation games. The RX 570 is a last-generation midrange card, and the Turing architecture alone can’t help the low-end GTX 1650 completely make up that performance difference.

On a final note, AMD is offering their own bundle as well as part of their 50th anniversary celebration. For the RX 570, the company and its participating board partners are offering copies of both The Division 2 (Gold Edition) and World War Z, giving AMD a much stronger bundle than NVIDIA’s. So between card performance and game bundles, it's clear that AMD is trying very hard to counter the new GTX 1650.

Q2 2019 GPU Pricing Comparison
AMD Price NVIDIA
  $349 GeForce RTX 2060
Radeon RX Vega 56 $279 GeForce GTX 1660 Ti
Radeon RX 590 $219 GeForce GTX 1660
Radeon RX 580 (8GB) $189 GeForce GTX 1060 3GB (1152 cores)
Radeon RX 570 $149 GeForce GTX 1650
TU117: The Smallest Turing Gets Volta’s Video Encoder?
  • Yojimbo - Saturday, May 4, 2019 - link

    That's true, and I noted that in my original post. But the important thing is that the price/performance comparison should consider the total cost of ownership of the card. Ultimately, the value of any particular increment in performance is a matter of personal preference, though it is possible for someone to make a poor choice because he doesn't understand the situation well.
  • dmammar - Friday, May 3, 2019 - link

    This power consumption electricity savings debate has gone on too long. The math is not hard - the annual electricity cost is equal to (Watts / 1,000) x (hours used per day) x (365 days / year) x (cost per kWh)

    In my area, electricity costs $0.115/kWh so a rather excessive (for me) 3 hours of gaming every day of the year means that an extra 100W power consumption equals only $12.50 higher electricity cost every year.

    So for me, the electricity cost of the higher power consumption isn't even remotely important. I think most people are in the same boat, but run the numbers yourself and make your own decision. The only people who should care either live somewhere with expensive electricity and/or game way too much, in which case they should probably be using a better GPU.
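    The formula above is easy to sanity-check with a few lines of Python (the $0.115/kWh rate and 3 hours/day are the commenter's own figures, not universal):

```python
# Annual electricity cost of extra GPU power draw:
# (watts / 1000) kW x hours/day x 365 days/year x $/kWh
def annual_cost_usd(extra_watts: float, hours_per_day: float, usd_per_kwh: float) -> float:
    return extra_watts / 1000 * hours_per_day * 365 * usd_per_kwh

# 100W extra, 3 hours/day, $0.115/kWh -> about $12.59/year
print(f"${annual_cost_usd(100, 3, 0.115):.2f}")
```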
  • Yojimbo - Saturday, May 4, 2019 - link

    How is $12.50 a year not remotely important? Would you say a card costing $25 less is a big deal? If one costs $150 and the other is $175 you would not consider that to be at all a consideration to your purchase?
  • OTG - Saturday, May 4, 2019 - link

    How IS $12.50/year even worth thinking about?
    That's less than an hour of work for most people, it's like 3 cents a day, you could pay for it by finding pennies on the sidewalk!
    PLUS you get much better performance! It's a faster card for a completely meaningless power increase.
    If your PSU doesn't have a six pin, get the 1650 I guess, otherwise the price is kinda silly.
  • Yojimbo - Saturday, May 4, 2019 - link

    I like the way you think. Whatever you buy, just buy it from me for $12.50 more than you could otherwise get it, because it's just not worth thinking about. What you say would be entirely reasonable if it didn't apply to every single purchase you make. I mean, if a company comes along and says "Come on, buy this pen for $20. You're only going to buy one pen this year." would you do it? Do you ask the people who are saying NVIDIA's new cards are too expensive because they are $20 more expensive than the previous generation equivalents "How is $10 a year even worth thinking about?"

    Hey, if you are willing to throw money out the window if it is for electricity but not for anything else that's up to you, but you are making unreasonable decisions that harm yourself.
  • jardows2 - Monday, May 6, 2019 - link

    Using your logic, why don't we all just save bunches of money by using Intel Integrated graphics. Since the money we save on power usage is all that matters, we might as well make sure we are only using Mobile CPU's as well.
    What you're paying for here is the improved gaming experience provided by the extra performance of the RX 570. For many people, the real-world improvement in the gaming experience is worth the relatively low cost of the extra energy usage. Realistically, the only reason to get one of these over the 570 is if your power supply cannot handle the RX 570.
  • Sushisamurai - Tuesday, May 7, 2019 - link

    Holy crap man! The amount of electricity I spent to read this comment thread and the amount of keyboard clicks that've been consumed from my 70 million clicks from my mechanical keyboard from my total cost of ownership was totally worth reading and replying to this.
  • OTG - Tuesday, May 7, 2019 - link

    If you're pinching pennies that hard, you're probably better off not spending 4 hours a day gaming.
    Those games cost money, and you know what they say about time!
    Maybe even set the card mining when you're away, there are profits to be had even now.
  • WarlockOfOz - Saturday, May 4, 2019 - link

    Anyone calculating the total ownership cost of a video card in cents per day should also consider that the slightly higher performance of the 570 may allow it to last a few more months before justifying replacement, allowing the purchase price to be spread over a longer period.
  • Yojimbo - Sunday, May 5, 2019 - link

    "Anyone calculating the total ownership cost of a video card in cents per day should also consider that the slightly higher performance of the 570 may allow it to last a few more months before justifying replacement, allowing the purchase price to be spread over a longer period."

    Sure. Not that likely, though, because the difference isn't that great so what is more likely to affect the timing of upgrade is the card that becomes available. But at the moment, NVIDIA has a big gap between the 1650 and the 1660 so there aren't two more-efficient cards that bracket the 570 well from a price standpoint.

    Of course, some people apparently don't care about $25 at all so I don't understand why they should care about $25 more than that (for a total of 50) such that it would prevent them from getting a 1660, which has a performance that blows the 570 out of the water and would be a lot more likely to play a factor in the timing of a future upgrade.
