This content piece started with Buildzoid’s suggestion that we install a custom VBIOS on our RX 570 for memory timing tuning tests. Our card proved temperamental with the custom VBIOS, so, for now, we instead tested AMD’s built-in timing level options in the drivers. AMD’s GPU drivers have a drop-down featuring “automatic,” “timing level 1,” and “timing level 2” settings for Radeon cards, none of which is formally defined within the drivers. We ran an RX 570 and a Vega 56 through most of our tests with these timing options, using dozens of test passes across the 3DMark suite (for each line item) to minimize error margins and narrow in on statistically significant results. We also ran “real” gaming workloads in addition to the 3DMark passes.
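For readers curious about the statistics, here’s a minimal sketch of the averaging idea in Python, with hypothetical scores standing in for real 3DMark numbers: average the passes, attach a confidence interval, and only treat two settings as different when their intervals don’t overlap.

```python
# Minimal sketch of the averaging idea. Scores are hypothetical
# placeholders, not real 3DMark results.
from math import sqrt
from statistics import mean, stdev

scores = [5912, 5898, 5921, 5905, 5890, 5917, 5909, 5903]  # hypothetical passes

n = len(scores)
avg = mean(scores)
sem = stdev(scores) / sqrt(n)  # standard error of the mean
ci95 = 1.96 * sem              # normal approximation; fine for dozens of passes

print(f"mean = {avg:.1f} +/- {ci95:.1f} (95% CI, n = {n})")
```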

Were we to step it up, the next goal would be to manually tune the memory timings with third-party tools, whether for GDDR5 or HBM2, or to flash custom VBIOSes on cards that are more stable. For now, we’ll focus on AMD’s built-in options.

This round-up is packed with news, although our leading two stories are based on rumors. After talking about Navi's potential reference or engineering design PCB and Intel's alleged Comet Lake plans, we'll dive into Super Micro's move away from China-based manufacturing, a global downtrend in chip sales, Ryzen and EPYC sales growth, Amazon AWS expanding its use of AMD instances, and more.

Show notes are below the embedded video, as always.

One of our most popular videos of yore discussed the GTX 960 4GB vs. GTX 960 2GB cards and the value of choosing one over the other. The discussion continues today, though now focused on 3GB vs. 6GB or 4GB vs. 8GB comparisons. Looking back at 2015’s GTX 960, we’re revisiting the card with locked frequencies to compare memory capacities. The goal is to look at both framerate and image quality to determine how well the 2GB card has aged versus the 4GB card.

A lot has changed for us since our 2015 GTX 960 comparison, so these results will obviously not be directly comparable to the originals. We’re using different graphics settings, different test methods, a different OS, and much different test hardware. We’ve also improved our testing accuracy significantly, so it’s time to take all of this new hardware and knowledge and re-apply it to the GTX 960 2GB vs. 4GB debate, looking into whether there was really a “longevity” argument to be made.
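As an aside for anyone running similar capacity tests at home, a simple way to watch VRAM pressure during a pass is to poll nvidia-smi. The sketch below illustrates the approach (it is not our internal tooling, and it assumes an NVIDIA driver install with nvidia-smi on the PATH).

```python
# Illustrative only: poll nvidia-smi once per second to track peak VRAM
# use during a test pass. Assumes nvidia-smi is on the PATH.
import subprocess
import time

def vram_used_mib() -> int:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.strip().splitlines()[0])  # first GPU only

peak = 0
for _ in range(60):  # sample for one minute
    peak = max(peak, vram_used_mib())
    time.sleep(1)
print(f"Peak VRAM used: {peak} MiB")
```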

NVIDIA’s GTX 1650 was sworn to secrecy, with drivers held back for “unification” reasons up until the actual launch date. The GTX 1650 comes in variants ranging from 75W to 90W and above, meaning that some options will run without a power connector, while others focus on boosted clocks and higher power targets and require a 6-pin connector. GTX 1650s start at $150, with this model costing $170, running a higher power target, offering more overclocking headroom, and potentially better challenging some of NVIDIA’s past-generation products. We’ll see how far we can push the 1650 in today’s benchmarks, including overclock testing to look at maximum potential versus a GTX 1660. We’re using the official, unmodified GTX 1650 430.39 public driver from NVIDIA for this review.

We received our card two hours before product launch and the drivers at launch, but noticed that NVIDIA was pushing the drivers heavily through GeForce Experience. We pulled the standalone package instead.

EA's Origin launcher has recently gained attention for hosting Apex Legends, currently one of the top Battle Royale shooters, but is getting renewed focus as an easy attack vector for malware. Fortunately, an update has already resolved the issue, so the pertinent action is to update Origin (especially if you haven't opened it in a while). Further news this week features the GTX 1650's rumored specs and price, due out allegedly on April 23. We also follow up on Sony PlayStation 5 news, now officially confirmed to be built on a new AMD Ryzen APU with a customized Navi GPU solution.

Show notes below the embedded video, for those preferring reading.

We’re still in China for our factory and lab tours, but we managed to coordinate with home base to get enough testing on the GTX 1660 done that a review became possible. Patrick ran the tests this time, then we just put the charts and script together from Dongguan, China.

This is a partner launch, so no NVIDIA direct sampling was done and, to our knowledge, no Founders Edition board will exist. Reference PCBs will exist, as always, but partners have control over most of the cooler design for this launch.

Our review looks at the EVGA GTX 1660 dual-fan model, which has an MSRP of $250 and lands $30 cheaper than baseline GTX 1660 Ti pricing. The cheapest GTX 1660s will sell for about $220, but our $250 unit has a higher power target allowance for overclocking and a better cooler. The higher power target is the most interesting part, as overclocked performance can stretch upward toward the GTX 1660 Ti at its $280 price-point.

We’ll get straight to the review today. Our focus is on games, with some additional thermal and power tests toward the end. As a reminder, we’re doing this remotely, so we don’t have as many non-gaming charts as usual, but we still have a complete review.

We revised our initial AMD Radeon VII liquid cooling mod after the coverage went live, switching to a Thermaltake Floe 360 radiator (with different fans) due to uneven contact and manufacturing defects in the Alphacool GPX coldplate. The Asetek-based cooler worked much better, dropping our thermals significantly and allowing increased overclocking and stock boosting headroom. The new drivers (19.2.3) also fixed most of the overclocking defects we originally found, making it possible to actually progress with this mod.

As an important foreword, note that overclocking with AMD’s drivers must be validated with performance at every step. Configured frequencies are not the same as actual frequencies, so you might type “2030MHz” for the core and get, for instance, 1950-2000MHz out. For this reason, and because the frequency regularly misreports (e.g., “16000MHz”), it is critical that any overclock be validated with performance testing. Without validation, some “overclocks” can actually bring performance below stock while appearing boosted in frequency. This is essential to overclocking Radeon VII properly.
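To make that validation loop concrete, here’s a conceptual sketch with hypothetical clocks and scores (not a real tuning tool): step the configured clock, record the observed clock and a benchmark score, and reject any step that lands below stock performance.

```python
# Conceptual sketch of the validation loop -- hypothetical clocks and
# scores, not a real tuning tool. An "overclock" only counts if the
# measured score actually improves over stock.
stock_score = 100.0

steps = [
    {"set_mhz": 1950, "observed_mhz": 1910, "score": 103.2},
    {"set_mhz": 2030, "observed_mhz": 1965, "score": 104.1},
    {"set_mhz": 2100, "observed_mhz": 1980, "score": 98.7},  # below stock: reject
]

for s in steps:
    verdict = "valid" if s["score"] > stock_score else "reject (below stock)"
    print(f"{s['set_mhz']}MHz set -> {s['observed_mhz']}MHz observed, "
          f"score {s['score']:.1f}: {verdict}")
```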

We recently revisited the AMD R9 290X from October of 2013, and now it’s time to look back at the GTX 780 Ti from November of 2013. The 780 Ti shipped at a $700 MSRP and landed as NVIDIA’s flagship against AMD’s freshly-launched flagship. It was a different era: memory capacity was limited to 3GB on the 780 Ti, the memory ran at a then-blazing 7Gbps, and the core clock was 875MHz stock or 928MHz boost, using the old Boost 2.0 algorithm that held a fixed clock in gaming. Overclocking was also more flexible, giving us a bigger upward punch than modern NVIDIA overclocking permits. Our overclocks on the reference 780 Ti (with the fan at 93%) allowed it to exceed the expected performance of an average partner board, so we have a fairly full range of 780 Ti performance.

NVIDIA’s architecture has undergone significant changes since Kepler and the 780 Ti, one of which was a change in CUDA core efficiency. When NVIDIA moved from Kepler to Maxwell, CUDA cores gained nearly 40% efficiency when processing input. A 1:1 Maxwell versus Kepler comparison, were such a thing possible, would position Maxwell as superior in efficiency and performance-per-watt, if not outright performance. It is no surprise, then, that the 780 Ti’s 2880 CUDA cores, although a high count even by today’s standards (an RTX 2060 has 1920, but outperforms the 780 Ti), underperform compared to modern architectures. This is amplified by significant memory changes, capacity being the most notable, where the GTX 780 Ti’s standard configuration was limited to 3GB and ~7Gbps GDDR5.
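For context on those memory figures, peak theoretical bandwidth follows directly from the per-pin data rate and bus width. A quick worked example, using the publicly listed bus widths of each card (the comparison is illustrative):

```python
# Peak theoretical bandwidth = per-pin rate (Gbps) * bus width (bits) / 8.
# Bus widths per public spec sheets: 780 Ti = 384-bit GDDR5,
# RTX 2060 = 192-bit GDDR6.
def bandwidth_gbs(per_pin_gbps: float, bus_bits: int) -> float:
    return per_pin_gbps * bus_bits / 8

print(bandwidth_gbs(7, 384))   # GTX 780 Ti: 336.0 GB/s
print(bandwidth_gbs(14, 192))  # RTX 2060: 336.0 GB/s on half the bus width
```

Notably, the RTX 2060 matches the 780 Ti’s raw bandwidth on half the bus width, which is part of why the older card’s nominally large core count doesn’t translate to modern performance.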

Today, we’re reviewing the GTX 1660 Ti, whose name is going to trip us up for the entirety of its existence. The GTX 1660 Ti is NVIDIA’s mid-step between Pascal and Turing, keeping most of the Turing architectural changes to the SMs and memory subsystem, but dropping official RTX support and the RT cores in favor of a lower price. The EVGA GTX 1660 Ti XC that we’re reviewing today should list at $280, sticking it between the $350 baseline of the RTX 2060 and the rough $200 price-point of modern 1060s, although 1060 pricing sometimes runs higher. For further reference, Vega 56 should now sell closer to $280, with the RX 590 still around the $260 range.

Apex Legends is one of the most-watched games right now and is among the top titles in the Battle Royale genre. Running on the Titanfall engine, with some revamped Titanfall assets, the game is a fast-paced FPS with relatively high-poly models and long view distances. For this reason, we’re benchmarking a series of GPUs to find the “best” video card for Apex Legends at each price category.

Our testing first included some discovery and research benchmarks, where we dug into various multiplayer zones and practice mode to find the most heavily loaded areas of the game. We also unlocked the framerate for this (see the note below), so we won’t bump against the 144FPS cap or any other limitation. This will help find which cards can play the game at max settings – or near-max, anyway.
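As a reference for anyone replicating the uncapped setup: the widely documented way to lift the Apex Legends frame cap (and an assumption on our part as to the exact method used here) is a launch option under the game’s advanced properties in Origin:

```
+fps_max unlimited
```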
