Steve started GamersNexus back when it was just a cool name, and now it's grown into an expansive website with an overwhelming number of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar - sigh. It made sense to just leave it as 'Gamers.'"
First world problems, Steve. First world problems.
Today, we’re reviewing the NVIDIA Titan RTX for overclocking, gaming, thermal, and acoustic performance, looking at the first of two cards in the lab. We have a third card arriving to trade for one defective unit, working around the 1350MHz clock lock we discovered, but that card won’t arrive until after this review goes live. The Titan RTX costs $2500, roughly 2x the price of an RTX 2080 Ti, but only enables an additional 4 streaming multiprocessors. With 4 more SMs and 256 more CUDA cores, there’s not much performance to be gained in gaming scenarios. The big gains are in memory-bound applications, as the Titan RTX has 24GB of GDDR6, a marked climb from the 11GB on an RTX 2080 Ti.
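The spec gap above can be sanity-checked with quick arithmetic. A minimal sketch, using NVIDIA's published Turing specs (72 vs. 68 SMs, 64 FP32 cores per SM) and assuming equal clocks and perfect scaling with SM count; the $1200 2080 Ti figure is an assumed street price, not measured data:

```python
# Rough upper bound on Titan RTX vs. RTX 2080 Ti gaming uplift,
# assuming equal clocks and perfect scaling with SM count.
# Core counts are NVIDIA's published specs; the 2080 Ti price is an
# assumed street price for illustration.
TITAN_RTX = {"sms": 72, "cuda_cores": 72 * 64, "vram_gb": 24, "price": 2500}
RTX_2080_TI = {"sms": 68, "cuda_cores": 68 * 64, "vram_gb": 11, "price": 1200}

extra_cores = TITAN_RTX["cuda_cores"] - RTX_2080_TI["cuda_cores"]
sm_uplift = TITAN_RTX["sms"] / RTX_2080_TI["sms"] - 1
price_ratio = TITAN_RTX["price"] / RTX_2080_TI["price"]

print(f"Extra CUDA cores: {extra_cores}")          # 4 SMs x 64 cores = 256
print(f"Best-case compute uplift: {sm_uplift:.1%}")  # ~5.9%
print(f"Price ratio: {price_ratio:.2f}x")
```

Even in the best case, 4 extra SMs buy under 6% more compute for roughly double the money, which is why the 24GB framebuffer, not gaming, is the real story.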
An example of a use case could be machine learning or deep learning, or more traditionally, 3D graphics rendering. Some of our in-house Blender project files use so much VRAM that we have to render on the slower CPU (rather than with CUDA acceleration), as we’d run out of the 11GB framebuffer too quickly. The same is true for some of our Adobe Premiere video editing projects, where our graph overlays become so complex and high-resolution that they exceed the memory allowance of a 1080 Ti. We are not testing either of these use cases today, though, and are instead focusing our efforts on the gaming and enthusiast market. We know that this is also a big market, and plenty of people want to buy these cards simply because “it’s the best,” or because “most expensive = most best.” We’ll be looking at how much the difference really gets you, with particular interest in thermal performance following the removal of the blower cooler.
Finally, note that we were stuck at 1350MHz with one of our two samples, something we’ve worked with NVIDIA to research. The company now has our defective card and has traded us a working one. We bought the defective Titan RTX, so it was a “real” retail sample. We just wanted to help NVIDIA troubleshoot the issue, and so the company is now working with it.
It’s been quiet on the website for the past week as we’ve been traveling and ramping our testing operations. Video took a lot of time this week, as we were working on our newest Disappointment PC build, following our highly popular 2017 version. The Disappointment PC (which now has an accompanying shirt on our store) is a collection of the most, well, disappointing parts of 2018, all in one box. Like last year, we spent a lot of time making a fun and different intro, taking a short film approach with a horror slant. Last year, it was a haunted Vega FE card.
Separately, we wanted to let you all know (on the article side) that we are working hard to revamp the website. We hope to re-launch sometime in the next month or two, if not much sooner, and implement a better back-end editing system for writers to work on. Our goal is to really expand article capabilities and output by end of first quarter 2019, but to put the systems in place by end of year. Personally speaking, the website is where I started, and the growth of GN makes it hard to do high-quality articles every day while also putting out high-quality video, managing a team, and running the business. I still prefer writing the articles, but I need some assistance from the rest of the team. Overhauling the site will enable that, and we’re hugely excited for it.
Anyway, without further delay, here’s the new Disappointment PC build. We’ll leave this one to video, as the first 3 minutes are what make it special. If you’d like to support our efforts, please consider picking up one of the Disappointment Build shirts on the store.
The comments sections of our Walmart case review and system review tell the story of what people think of Great Wall: everyone is expecting a fire, as the shell of the PSU is uninspiring, its rating sticker is lacking some metrics (maximum 12V capabilities, for example), and the brand isn’t familiar to a western audience. The funny thing is that this would be sort of similar to hearing “Asetek” for the first time, then making fun of it for being foreign to the market. Asetek supplies almost all of the closed-loop liquid coolers currently popular in North America, but never sticks its own branding on those. Great Wall is also a supplier, including to brands generally viewed positively in the Western market.
To be fair, everything about the Great Wall 500W 80 Plus PSU does look like a cheap power supply – and it is cheap – but there’s nothing that should indicate this is an exploding power supply. Funnily enough, Great Wall’s association with Walmart here is probably hurting its brand more than the inverse, but we’ll be digging into that today.
We previously mentioned that Great Wall actually is a supplier and makes PSUs for Corsair, for instance, as discussed in our Walmart case review. It’s uncommon to find Great Wall PSUs unbranded, and this one didn’t even have the maximum 12V capabilities listed, so this unit did attract criticism from the community. What we’re here to do is test whether it’s deserving of that criticism, using our power supply testing setup to benchmark efficiency, ripple, and over-current protections.
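The efficiency portion of that testing boils down to one ratio: DC watts delivered divided by AC watts drawn at fixed load points. A minimal sketch of an 80 Plus (standard tier, 115V) pass/fail check; the sample readings are hypothetical, not our bench data:

```python
# Sketch of a standard-tier 80 Plus (115V) efficiency check:
# efficiency = DC watts out / AC watts in, measured at 20%, 50%, and
# 100% of rated load. Plain 80 Plus requires >= 80% at all three points.
THRESHOLDS = {0.20: 0.80, 0.50: 0.80, 1.00: 0.80}

def efficiency(dc_out_w, ac_in_w):
    return dc_out_w / ac_in_w

def passes_80plus(readings):
    """readings: {load_fraction: (dc_out_watts, ac_in_watts)}"""
    return all(efficiency(*readings[load]) >= floor
               for load, floor in THRESHOLDS.items())

# Hypothetical readings for a 500W unit (illustrative numbers only):
sample = {0.20: (100, 122), 0.50: (250, 301), 1.00: (500, 617)}
print(passes_80plus(sample))  # True: all three points clear 80%
```

Ripple and over-current protection need an oscilloscope and a programmable load rather than a wattmeter, so they don't reduce to a one-liner, but the efficiency math itself is this simple.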
The Intel i7-2600K is arguably one of the most iconic products released by Intel in the last decade, following up the seminal Nehalem CPUs with major architectural improvements in Sandy Bridge. The 2600K was a soldered CPU with significant performance uplift over Nehalem 930s, and launched when AMD’s Phenom II X6 CPUs were already embattled with the previous Intel architecture. We revisited these CPUs last year, but wanted to come back around to the 2600K in 2018 to see if it’s finally time to upgrade for hangers-on to the once-king CPU.
Our original Intel i7-2600K revisit (2017) can be found here, but we had a completely different test methodology at the time. The data is not at all comparable.
The 2600K CPU launched in January 2011, alongside the rest of the Intel Sandy Bridge 2nd Gen Core i-Series of CPUs. This launch followed Nehalem, which challenged the Phenom II X6’s appeal in a heated market. Sandy Bridge has remained a well-respected, nearly idolized architecture since launch. Intel made tremendous gains over Nehalem and hasn’t quite recaptured that level of per-core increase since. For everyone still on Sandy Bridge and the i7-2600K (or i7-2700K), we wanted to revisit the CPUs and benchmark them in 2018. These 2018 i7-2600K benchmarks compare against Ryzen (R7 2700 and others), the i7-8700K, and the i9-9900K, alongside several other CPUs. For anyone with a 2700K, it’s mostly the same thing, just 100MHz faster.
The AMD Athlon 200GE CPU enters our benchmarking charts today, but we’re reviewing this one with a twist: For this benchmark, we’re testing the CPU entirely as a CPU, ignoring its integrated graphics out of curiosity to see how the $55 part does when coupled with a discrete GPU. To further this, we overclocked this supposedly locked CPU to 3.9GHz using multiplier overclocking, which AMD has disabled on most boards, likely to segment future 200-series parts. In this instance, the 200GE at 3.9GHz posts significantly improved numbers over stock, making it a candidate to fill the price position the Intel Pentium CPUs held up until the 14nm shortage.
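Multiplier overclocking is simple arithmetic: the core clock is the base clock (BCLK) times the multiplier, and only the multiplier changes. A quick sketch for the 200GE, assuming the stock 3.2GHz clock and a 100MHz BCLK:

```python
# Multiplier overclocking arithmetic for the Athlon 200GE.
# core clock = BCLK x multiplier; BCLK stays at 100MHz, and only the
# multiplier moves (32x stock -> 39x for the 3.9GHz result above).
BCLK_MHZ = 100
stock_mult, oc_mult = 32, 39

stock_clock = BCLK_MHZ * stock_mult  # 3200 MHz
oc_clock = BCLK_MHZ * oc_mult        # 3900 MHz
uplift = oc_clock / stock_clock - 1

print(f"{stock_clock} MHz -> {oc_clock} MHz ({uplift:.1%} uplift)")  # ~21.9%
```

That roughly 22% clock bump is why the overclocked results land so far ahead of stock in the charts that follow.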
In the past, the Intel G3258 and successor G4560 stood as affordable options for ultra-budget builds that were still respectable at gaming tasks. The Pentium G5000 series – including the G5400 and G5600 (in today’s benchmark) – has skyrocketed in price and dwindled in availability. The G5600 and G5400 alike are in the realm of $100, depending on when you check pricing, with the G5400 often ending up more expensive than the G5600. A lot of this is due to demand, but supply is also weak with the ongoing 14nm shortage. Intel is busy allocating that fab space to other products, minimizing the number of Pentium G CPUs on the market and giving retailers room to boost prices to whatever demand will pay. This has left a large hole in the market of low-end CPU + low-end dGPU solutions, and that’s a hole which AMD may be able to fill with its Athlon 200GE solution, which had a launch MSRP of $55.
Unlike proper Ryzen chips, the 200GE includes an IGP (Vega graphics) that enables it as a fully standalone part once popped into a motherboard; however, we think its IGP is too weak for most of our normal testing, and we know it’d underperform versus the R3 2200G. The G4560-style market is one we like to look at, so we decided to test the 200GE as an ultra-budget replacement for coupling alongside a low-end dGPU, e.g. a GTX 1050 or RX 550/560. If the CPU holds up against our standardized test battery, it’ll work when coupled with a low-end GPU.
We moderate comments on a ~24-48 hour cycle. There will be some delay after submitting a comment.