The past week of hardware news primarily centers on nVidia and AMD, both of whom are launching new GPUs under the same names as existing lines. This struck a chord with us, because the new GT 1030 silently launched by nVidia follows the exact same pattern AMD took when rebranding its RX 460s as “RX 560s,” despite significant hardware changes underneath.

To be very clear, we strongly disagree with creating a new, worse product under the same name and badging as an existing one. It is entirely irrelevant how close that product comes to the original in performance: it’s not the same product, and that’s all that matters. It deserves a different name.

We spend most of the news video ranting about GPU naming by both companies, but also include a couple of other industry topics. Find the show notes below, or check the video for the more detailed story.


Recent advancements in graphics processing technology have permitted software and hardware vendors to collaborate on real-time ray tracing, a long-standing “holy grail” of computer graphics. Ray tracing has been around for a couple of decades now, but has always lived in pre-rendered graphics – often in movies or other video playback that doesn’t require on-the-fly processing. The difference in going real-time is that we’re dealing with sparse data, and making fewer rays look good (better than standard rasterization, especially) is difficult.
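For a rough sense of scale on why the real-time budget is so tight, here’s a back-of-the-envelope sketch; the resolution, sample, bounce, and framerate figures are our own illustrative assumptions, not numbers from nVidia or Epic:

```python
# Rough, illustrative math on real-time ray budgets. The resolution, samples
# per pixel, bounce count, and framerate below are assumptions for this
# sketch -- not figures quoted by nVidia or Epic Games.

width, height = 2560, 1440      # assumed render resolution
samples_per_pixel = 2           # sparse sampling, typical of real-time approaches
bounces = 2                     # primary hit plus one secondary bounce
target_fps = 60

rays_per_frame = width * height * samples_per_pixel * bounces
rays_per_second = rays_per_frame * target_fps

print(f"Rays per frame:  {rays_per_frame:,}")    # ~14.7 million
print(f"Rays per second: {rays_per_second:,}")   # ~885 million
```

Even at deliberately sparse sample counts, the ray throughput requirement lands in the hundreds of millions per second, which is why the real-time work focuses on making few rays look good rather than brute-force sampling.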

NVidia has been beating this drum for a few years now. We covered nVidia’s ray-tracing keynote at ECGC a few years ago, when the company’s Tony Tamasi projected 2015 as the year for real-time ray tracing. That obviously didn’t fully materialize, but the company wasn’t too far off. Volta ended up providing some additional leverage to make 60FPS, real-time ray tracing a reality. Even so, we’re not quite there with consumer hardware. Epic Games and nVidia have lately been demonstrating real-time ray-traced rendering on four Titan V GPUs, functionally $12,000 worth of Titan Vs, and that’s to achieve a playable real-time framerate with the ubiquitous “Star Wars” demo.

This week's hardware news recap follows GTC 2018, where we had a host of nVidia and GPU-adjacent news to discuss. That's all recapped heavily in the video portion, as most of it was off-the-top reporting just after the show ended. For the rest, we talk 4K 144Hz displays, 3DMark's raytracing demo, AMD's Radeon Rays, the RX Vega 56 Red Devil card, and CTS Labs updates.

As for this week, we're back to lots of CPU testing, as we've been doing for the past few weeks now. We're also working on some secret projects that we'll more fully reveal soon. For the immediate future, we'll be at PAX East on Friday, April 6, and will be on a discussion panel with Bitwit Kyle and Corsair representatives. We're planning to record the panel for online viewing.

Revealed to press under embargo at last week’s GTC (the nVidia-hosted GPU Technology Conference), the new TITAN W graphics card was showcased by nVidia CEO Jensen Huang. The Titan W is nVidia’s first dual-GPU card in many years, and comes after the compute-focused Titan V GPU from 2017.

The nVidia Titan W graphics card hosts two V100 GPUs and 32GB of HBM2 memory, claiming a TDP of 500W and a price of $8,000.

“I’m really just proving to shareholders that I’m healthy,” Huang laughed after his fifth consecutive hour of talking about machine learning. “I could do this all day – and I will,” the CEO said, with a nod to PR, who immediately locked the doors to the room.

At GTC 2018, we learned that SK Hynix’s GDDR6 memory is bound for mass production in 3 months, and will be featured on several upcoming nVidia products. Some of these include autonomous vehicle components, but we also learned that we should expect GDDR6 on most, if not all, of nVidia’s upcoming gaming architecture cards.

Given a mass production timeline of June-July for GDDR6 from SK Hynix, and assuming Hynix is a launch-day memory provider, we can expect next-generation GPUs to become available after that window – there still needs to be enough time to mount the memory to the boards, after all. We don’t have a hard date for when the next-generation GPU lineup will ship, but from this information, we can assume it’s at least three months away, and possibly more.
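As a sanity check on that reasoning, the timeline math works out roughly as in the sketch below; the eight-week board-integration lead time is our own placeholder assumption, not a figure from Hynix or nVidia:

```python
from datetime import date, timedelta

# Assumed inputs: the start of SK Hynix's stated June-July mass production
# window, plus a placeholder lead time for mounting memory, validation, and
# channel supply. The eight-week figure is our assumption, not a confirmed number.
mass_production_start = date(2018, 6, 1)
board_integration_weeks = 8

earliest_launch = mass_production_start + timedelta(weeks=board_integration_weeks)
print(f"Earliest plausible launch, if Hynix is a launch vendor: {earliest_launch}")
# -> roughly late July 2018 at the absolute earliest, consistent with ">3 months away"
```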

If, to you, the word "unpredictable" sounds like a positive attribute for a graphics card, ASRock has something you may want. The company leaned on words like “unpredictable” and “mysterious” in its official Phantom Gaming trailer, two adjectives describing an upcoming series of AMD Radeon-equipped graphics cards. This is ASRock’s first time entering the graphics card space, where the company’s PCB designers will face new challenges with AMD RX Vega GPUs (and future architectures).

The branding is for “Phantom” graphics cards, and the first-teased card appears to be using a somewhat standard dual-axial fan design with a traditional aluminum finstack and ~6mm heatpipes. A single 8-pin header is shown on the rendered teaser card, but as it’s only a render, we’re not sure what the actual product will look like.

The latest Ask GN brings us to episode #70. We’ve been running this series for a few years now, but the questions remain top-notch. For this past week, viewers asked about nVidia’s “Ampere” and “Turing” architectures – or the rumored ones, anyway – and what we know of the naming. For other core component questions, Raven Ridge received a quick note on out-of-box motherboard support and BIOS flashing.

Non-core questions pertained to cooling, like the “best” CLCs when normalizing for fans, or VRM and VRAM temperatures on hybrid-cooled graphics cards. Mousepad engineering got something of an interesting offshoot, for which we recruited Logitech engineers for insight on mouse sensor interaction with surfaces.

More in the video below, or find our Patreon special here.

As part of our new and ongoing “Bench Theory” series, we are publishing a year’s worth of internal-only data that we’ve used to drive our 2018 GPU test methodology. We haven’t yet implemented the 2018 test suite, but will be doing so soon. The goal of this series is to help viewers and readers understand what goes into test design, and we aim to underscore the level of accuracy that GN demands for its publication. Our first information dump focused on benchmark duration, addressing when it’s appropriate to use 30-second runs, 60-second runs, and more. As we stated in the first piece, we ask that any content creators leveraging this research in their own testing properly credit GamersNexus for its findings.

Today, we’re looking at standard deviation and run-to-run variance in tested games. Games on bench cycle regularly, so the purpose is less for game-specific standard deviation (something we’re now addressing at game launch) and more for an overall understanding of how games deviate run-to-run. This is why conducting multiple, shorter test passes (see: benchmark duration) is often preferable to conducting fewer, longer passes; after all, we are all bound by the laws of time.
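As a quick, hedged illustration of why more, shorter passes tend to win for a fixed time budget, the synthetic example below (the FPS figures are made up, not from our dataset) shows the standard error of the mean shrinking as pass count grows:

```python
import statistics

# Synthetic average-FPS results from repeated passes of the same benchmark.
# These numbers are made up for illustration; they are not GN bench data.
passes = [98.4, 97.1, 99.0, 96.8, 98.2, 97.6]

mean = statistics.mean(passes)
stdev = statistics.stdev(passes)            # run-to-run standard deviation
sem = stdev / (len(passes) ** 0.5)          # standard error of the mean

print(f"Mean FPS: {mean:.1f}")
print(f"Run-to-run stdev: {stdev:.2f}")
print(f"Standard error with {len(passes)} passes: {sem:.2f}")
# The standard error shrinks with the square root of the pass count, which is
# why several short passes beat one long pass for the same total time budget.
```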

Looking at statistical dispersion can help us understand whether a game itself is consistent enough for hardware benchmarks. If a game is inaccurate or varies wildly from one run to the next, we have to look at whether that variance is driver-, hardware-, or software-related. If it’s just the game, we must then ask the philosophical question of whether we’re testing the game or testing the hardware. Sometimes, testing a game with highly variable performance can still be valuable – primarily if it’s a game people want to play, like PUBG, despite its questionable performance. Other times, the game should be tossed. If the goal is a hardware benchmark and a game behaves like an outlier while also being largely unplayed, it becomes suspect as a test platform.
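Below is a minimal sketch of the kind of dispersion check described above, using made-up per-game run data; the game names, FPS values, and the 3% cutoff are placeholders, not our published results or thresholds:

```python
import statistics

# Hypothetical run-to-run average FPS per game; the game names and values
# are placeholders for illustration, not GN's published results.
runs_by_game = {
    "StableGame":   [120.1, 119.8, 120.5, 119.6, 120.2],
    "VariableGame": [88.0, 96.5, 79.3, 101.2, 84.7],
}

# The 3% CoV cutoff below is an arbitrary placeholder threshold, not a GN standard.
for game, runs in runs_by_game.items():
    mean = statistics.mean(runs)
    stdev = statistics.stdev(runs)        # run-to-run standard deviation
    cov = 100.0 * stdev / mean            # coefficient of variation, in percent
    verdict = "suspect as a test platform" if cov > 3.0 else "acceptable"
    print(f"{game}: mean={mean:.1f} FPS, stdev={stdev:.2f}, CoV={cov:.1f}% -> {verdict}")
```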

APU reviews have historically proven binary: either it’s better to buy a dGPU and a dirt-cheap CPU, or the APU is actually a good deal. There is zero room for middle ground in a market targeting $150-$180 purchases. There’s no room to be wishy-washy, and no room for if/but/then arguments: it’s either better value than a dGPU + CPU, or it’s not worthwhile.

Ahead of our impending Raven Ridge 2400G benchmarks, we decided to test the G4560 and R3 1200 with the best GPU money can buy – because it’s literally the only GPU you can buy right now. That’d be the GT 1030. Coupled with the G4560 (~$72), we land at ~$160 for both parts, depending on momentary retailer fluctuations. With the R3 1200, we land at about $180 for both. The 2400G is priced at $170, or thereabouts, and lands between the two.
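For reference, the rough build math described above looks like this; the GT 1030 and R3 1200 figures are assumed, point-in-time street prices that will drift with retailer fluctuations:

```python
# Approximate, point-in-time prices; these fluctuate with retailers and are
# only meant to reproduce the rough comparison in the text. The GT 1030 and
# R3 1200 figures are our assumed street prices, not quoted MSRPs.
gt_1030  = 88
g4560    = 72
r3_1200  = 92
r5_2400g = 170

print(f"G4560 + GT 1030:   ${g4560 + gt_1030}")     # ~$160
print(f"R3 1200 + GT 1030: ${r3_1200 + gt_1030}")   # ~$180
print(f"R5 2400G (APU):    ${r5_2400g}")            # lands between the two
```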

(Note: The 2400G & 2200G appear to already be listed at retailers, despite the fact that, at the time of writing, the embargo is still on.)

Our latest Ask GN episode talks methodology and benchmarking challenges with GPU boost, GDDR6 availability, "mining" on AMD's Radeon SSG, and more. This is also our first episode that comes with a new accompaniment, released in the form of a Patreon-only Ask GN. The separate video is visible to Patreon backers, and answers a couple of extra questions submitted via the Patreon Discord.

As usual, timestamps are provided below the embedded video. The major focus is on some quick GDDR6 news, then some discussion on GPU benchmarking approaches.

Find out more below:

