Steve Burke

Steve started GamersNexus back when it was just a cool name, and now it's grown into an expansive website with an overwhelming number of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar - sigh. It made sense to just leave it as 'Gamers.'"

First world problems, Steve. First world problems.

Today, we’re reviewing the GTX 1660 Ti, whose name is going to trip us up for the entirety of its existence. The GTX 1660 Ti is NVIDIA’s mid-step between Pascal and Turing, keeping most of the Turing architectural changes to the SMs and memory subsystem, but dropping official RTX support and the RT cores in favor of a lower price. The EVGA GTX 1660 Ti XC that we’re reviewing today should have a list price of $280, sticking it between the $350 baseline of the RTX 2060 and the roughly $200 price point of modern GTX 1060s, although those sometimes sell higher. For further reference, Vega 56 should now sell closer to $280, with the RX 590 still around the $260 range.

We won't be writing a full article for this one, so we just wanted to run a quick post on our new DLSS comparison in Battlefield V. This was easier to relegate to video format, seeing as it required more detailed visual comparisons than anything else. Some charts are present, but the goal is to compare DLSS on vs. off across two GPUs: the RTX 2080 Ti and the RTX 2060, each of which has different allowances for DLSS enablement.

The RTX 2060 can run DLSS at 1080p or 1440p, whereas the RTX 2080 Ti can only run DLSS at 4K: a framerate that is too high doesn't leave enough time for the DLSS pass to complete before the frame is presented, and so the 2080 Ti cannot step lower than 4K (see the sketch below). Comparisons primarily try to find where the major upsides might be with DLSS, and they seem to mostly exist with very thin objects that have limited geometry at far distances, where DLSS can create a smoother image and eliminate some of the "marching ants" effect. On the flip-side, DLSS seems to introduce some blur to the image and doesn't outperform simply running natively at the lower resolution instead.
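For the curious, here's a minimal sketch of the frame-budget math behind that restriction. The per-frame costs below are made-up illustrative numbers, not NVIDIA's published figures; the point is only that a fixed upscaling pass stops fitting once native frame times get short enough.

    #include <cstdio>

    int main() {
        // Hypothetical per-frame costs in milliseconds (illustrative only).
        const double rasterMs = 4.0;  // time to render the frame natively
        const double dlssMs   = 2.5;  // fixed cost of the DLSS upscaling pass

        // DLSS runs after the raster pass, so both must fit within one frame.
        const double frameMs = rasterMs + dlssMs;
        std::printf("Max framerate with DLSS: %.0f FPS\n", 1000.0 / frameMs);

        // A fast card at a low resolution may raster so quickly that the
        // fixed DLSS pass dominates the frame, at which point enabling DLSS
        // would cap or even lower the framerate rather than help it.
        return 0;
    }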

Apex Legends is one of the most-watched games right now and is among the top titles in the Battle Royale genre. Running on the Titanfall engine and with some revamped Titanfall assets, the game is a fast-paced FPS with relatively high-poly models and long view distances. For this reason, we’re benchmarking a series of GPUs to find the “best” video card for Apex Legends at each price category.

Our testing first included some discovery and research benchmarks, where we dug into various multiplayer zones and practice mode to try to find the most heavily loaded areas for the benchmark. We also unlocked the framerate for this (launch option below), so we aren’t going to bump against the game’s 144FPS cap or limitation. This will help find which cards can play the game at max settings – or near-max, anyway.
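For reference, uncapping the framerate in Apex Legends is typically done through Origin's advanced launch options, a holdover from the Source lineage of the Titanfall engine. The commonly used flag is:

    +fps_max unlimited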

Metro: Exodus is the next title to include NVIDIA RTX technology, leveraging Microsoft’s DXR. We already looked at the RTX implementation from a qualitative standpoint (in video), talking about the pros and cons of global illumination via RTX, and now we’re back to benchmark the performance from a quantitative standpoint.

The Metro series has long been used as a benchmarking standard. As always with a built-in benchmark, one of the most important things to look at is the accuracy of that benchmark as it pertains to the “real” game. Being inconsistent with in-game performance doesn’t necessarily invalidate a benchmark’s usefulness, though; it just means that the light in which that benchmark is viewed must be kept in mind. Without accuracy to in-game performance, benchmark tools mostly become synthetic benchmarks: They’re good for relative performance measurements between cards, but not necessarily absolute performance. That’s completely fine, too, as relative performance is mostly what we look for in reviews. The only (really) important thing is that performance scaling between cards is consistent across both the pre-built benchmark and in-game testing.
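To put that consistency check in concrete terms, here's a rough sketch with hypothetical FPS numbers (not our actual data): if the relative scaling between two cards tracks across the built-in benchmark and in-game runs, the built-in tool remains useful for comparisons even when its absolute numbers diverge from gameplay.

    #include <cmath>
    #include <cstdio>

    // Hypothetical averaged FPS results, purely for illustration.
    struct Result { const char* gpu; double builtIn; double inGame; };

    int main() {
        Result a{"Card A", 120.0, 95.0};
        Result b{"Card B", 100.0, 80.0};

        // Relative scaling between the two cards in each test.
        double builtInScaling = a.builtIn / b.builtIn;  // 1.20x
        double inGameScaling  = a.inGame  / b.inGame;   // ~1.19x

        std::printf("Built-in scaling: %.2fx, in-game scaling: %.2fx\n",
                    builtInScaling, inGameScaling);

        // If the two ratios diverge meaningfully, the built-in benchmark
        // is no longer trustworthy even for relative comparisons.
        if (std::fabs(builtInScaling - inGameScaling) / inGameScaling > 0.05)
            std::puts("Warning: scaling diverges between benchmark and game.");
        return 0;
    }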

Finding something to actually leverage the increased memory bandwidth of Radeon VII is a challenge. Few games will genuinely use more memory than what’s found on an RTX 2080, let alone the 16GB on Radeon VII, and most VRAM capacity utilization reporting is wildly inaccurate, as it only reports allocated memory and not necessarily used memory. To best benchmark the potential advantages of Radeon VII, which primarily come down to memory bandwidth, we set up a targeted feature test looking at anti-aliasing and high-resolution benchmarks. Consider this an academic exercise on Radeon VII’s capabilities.
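As an aside on why those VRAM readouts mislead: the counters most monitoring tools can query report committed memory, not touched memory. Here's a minimal Windows/DXGI sketch of the kind of query involved; it reads CurrentUsage, which counts memory the process has allocated whether or not the GPU ever actually references it. This illustrates the reporting mechanism in general, not the exact code any particular overlay uses.

    #include <dxgi1_4.h>
    #include <wrl/client.h>
    #include <cstdio>
    #pragma comment(lib, "dxgi.lib")

    using Microsoft::WRL::ComPtr;

    int main() {
        ComPtr<IDXGIFactory4> factory;
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

        ComPtr<IDXGIAdapter1> adapter;
        if (FAILED(factory->EnumAdapters1(0, &adapter))) return 1;

        ComPtr<IDXGIAdapter3> adapter3;
        if (FAILED(adapter.As(&adapter3))) return 1;

        // Query local (on-card) video memory info for GPU node 0.
        DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
        adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info);

        // CurrentUsage is committed/allocated memory: a game that
        // speculatively allocates 10GB shows 10GB here even if it only
        // ever reads a fraction of that.
        std::printf("Budget: %llu MB, CurrentUsage (allocated): %llu MB\n",
                    (unsigned long long)(info.Budget >> 20),
                    (unsigned long long)(info.CurrentUsage >> 20));
        return 0;
    }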
