NVIDIA’s GTX 1650 was sworn to secrecy, with drivers held back for “unification” reasons until the actual launch date. The GTX 1650 comes in variants ranging from 75W to 90W and above, meaning that some options will run without a power connector while others will focus on boosted clocks and a higher power target and require a 6-pin connector. GTX 1650s start at $150, with this model costing $170, running a higher power target, offering more overclocking headroom, and potentially posing a better challenge to some of NVIDIA’s past-generation products. We’ll see how far we can push the 1650 in today’s benchmarks, including overclock testing to look at its maximum potential versus a GTX 1660. We’re using the official, unmodified GTX 1650 430.39 public driver from NVIDIA for this review.

We received our card two hours before product launch and the drivers at launch, but noticed that NVIDIA tried to push the drivers heavily through GeForce Experience. We pulled them standalone instead.

EA's Origin launcher has recently gained attention for hosting Apex Legends, currently one of the top Battle Royale shooters, but is getting renewed focus as an easy attack vector for malware. Fortunately, an update has already resolved this issue, so the pertinent action is to update Origin (especially if you haven't opened it in a while). Further news this week features the GTX 1650's rumored specs and price, allegedly due out on April 23. We also follow up on Sony PlayStation 5 news, now officially confirmed to be using a new AMD Ryzen APU and customized Navi GPU solution.

Show notes below the embedded video, for those preferring reading.

Industry news isn't always as appealing as product news for some of our audience, but this week of industry news is interesting: For one, Tom Petersen, Distinguished Engineer at NVIDIA, will be moving to Intel; for two, ASUS accidentally infected its users with malware after previously being called out for poor security practices. Show notes for the news below the video embed, for those who prefer written format.

Hardware news is busy this week, as it always is, but we also have some news of our own. Part of GN's team will be in Taiwan and China over the next few weeks, with the rest at home base taking care of testing. For the Taiwan and China trip, we'll be visiting numerous factories for tour videos, walkthroughs, and showcases of how products are made at a lower level. We also have several excursions to tech landmarks planned, so you'll want to check back regularly as we make this special trip. Check our YT channel daily for uploads. Our broadcast from the Asia trip will likely start around 3/6.

We recently revisited the AMD R9 290X from October of 2013, and now it’s time to look back at the GTX 780 Ti from November of 2013. The 780 Ti shipped for $700 MSRP and landed as NVIDIA’s flagship against AMD’s freshly-launched flagship. It was a different era: Memory capacity was limited to 3GB on the 780 Ti, memory frequency was a blazing 7Gbps, and core clock was 875MHz stock or 928MHz boost, using the old Boost 2.0 algorithm that held a fixed clock in gaming. Overclocking was also more flexible, giving us a bigger upward punch than modern NVIDIA overclocking might permit. Our overclocks on the 780 Ti reference card (with fan set to 93%) allowed it to exceed the expected performance of the average partner model board, so we have a fairly full range of performance on the 780 Ti.

NVIDIA’s architecture has undergone significant changes since Kepler and the 780 Ti, one of which has been a change in CUDA core efficiency. When NVIDIA moved from Kepler to Maxwell, CUDA cores gained nearly 40% efficiency when processing input. A 1:1 Maxwell versus Kepler comparison, were such a thing possible, would position Maxwell as superior in efficiency and performance-per-watt, if not outright performance. It is no surprise, then, that the 780 Ti’s 2880 CUDA cores, although a high count even by today’s standards (an RTX 2060 has 1920, but outperforms the 780 Ti), underperform when compared to modern architectures. This is amplified by significant memory changes, capacity being the most notable, where the GTX 780 Ti’s standard configuration was limited to 3GB and ~7Gbps GDDR5.

Today, we’re reviewing the GTX 1660 Ti, whose name is going to trip us up for the entirety of its existence. The GTX 1660 Ti is NVIDIA’s mid-step between Pascal and Turing, keeping most of the Turing architectural changes to the SMs and memory subsystem, but dropping official RTX support and RT cores in favor of a lower price. The EVGA GTX 1660 Ti XC that we’re reviewing today should have a list price of $280, sticking it between the $350 baseline of the RTX 2060 and the rough $200 price-point of modern 1060s, though that sometimes runs higher. For further reference, Vega 56 should now sell closer to $280, with the RX 590 still around the $260 range.

Apex Legends is one of the most-watched games right now and is among the top titles in the Battle Royale genre. Running on the Titanfall engine and with some revamped Titanfall assets, the game is a fast-paced FPS with relatively high poly count models and long view distances. For this reason, we’re benchmarking a series of GPUs to find the “best” video card for Apex Legends at each price category.

Our testing first included some discovery and research benchmarks, where we dug into various multiplayer zones and practice mode to try to find the most heavily loaded areas for benchmarking. We also unlocked FPS for this, so we won’t bump against the 144FPS cap or any other limitation. This will help determine which cards can play the game at max settings – or near-max, anyway.

The Verge misstepped last week and ended up on the receiving end of our thoughts on the matter, but after a response by The Verge, we're back for one final response. Beyond that, normal hardware news ensues: We're looking at MIT's exciting research in the CPU space, including advancements in diamond as a potential processor material, and also at TSMC's moves to implement 7nm EUV.

Show notes are below the embedded video:

Metro: Exodus is the next title to include NVIDIA RTX technology, leveraging Microsoft’s DXR. We already looked at the RTX implementation from a qualitative standpoint (in video), talking about the pros and cons of global illumination via RTX, and now we’re back to benchmark the performance from a quantitative standpoint.

The Metro series has long been used as a benchmarking standard. As always with a built-in benchmark, one of the most important things to look at is the accuracy of that benchmark as it pertains to the “real” game. Being inconsistent with in-game performance doesn’t necessarily invalidate a benchmark’s usefulness, though; it just means the light in which that benchmark is viewed must be kept in mind. Without accuracy to in-game performance, the benchmark tools mostly become synthetic benchmarks: They’re good for relative performance measurements between cards, but not necessarily absolute performance. That’s completely fine, too, as that’s mostly what we look for in reviews. The only (really) important thing is that performance scaling is consistent between cards in both pre-built benchmarks and in-game benchmarks.
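As a rough sketch of what “consistent scaling” means in practice, the check below expresses each card’s FPS as a multiple of a baseline card in both the built-in and in-game runs; if those ratios match, the built-in benchmark is still useful for relative comparisons even when its absolute numbers differ. All card names and FPS figures here are hypothetical placeholders, not measured results:

```python
# Hypothetical FPS figures for illustration only -- not measured results.
builtin = {"Card A": 100.0, "Card B": 120.0, "Card C": 150.0}
ingame  = {"Card A":  90.0, "Card B": 108.0, "Card C": 135.0}

def relative_scaling(results, baseline="Card A"):
    """Express each card's FPS as a multiple of the baseline card's FPS."""
    base = results[baseline]
    return {card: fps / base for card, fps in results.items()}

# Absolute FPS differs between the two runs, but the relative spread
# (1.0x, 1.2x, 1.5x) is the same in both, so card-to-card comparisons
# drawn from the built-in benchmark still hold in-game.
print(relative_scaling(builtin))
print(relative_scaling(ingame))
```

If the two sets of ratios diverged instead, the built-in tool would be telling you something different from the game itself, and its results would need to be treated as purely synthetic.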

Recapping hardware news for the past week (not counting the major Vega launch), major items include AMD's marketshare increase, NVIDIA's loss of Softbank's large investment, Intel's Itanium getting retired, and Thermaltake's new legal battle with Mayhems. Thermaltake is seeking to expand its coolant line with "Pastel" coolants, something to which Mayhems holds a UK-based trademark and years of prior products.

Show notes below the video embed.
