Steve started GamersNexus back when it was just a cool name, and now it's grown into an expansive website with an overwhelming number of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar - sigh. It made sense to just leave it as 'Gamers.'"
First world problems, Steve. First world problems.
In some ways, AMD has become NVIDIA, and that’s not necessarily a bad thing. New Ryzen CPUs scale behaviorally like GPU Boost 4.0 does on NVIDIA GPUs: simply lowering the silicon operating temperature directly affects performance and clock speeds. Under completely stock settings, a CPU running colder will actually boost higher now; alternatively, if you’re a glass-half-empty type, you could say a CPU running hotter will thermally throttle. Either way, frequency is contingent upon thermals, and that matters for users who want to maximize performance or pick the right case and CPU cooler combination. If you’re new to the space, the way it has traditionally worked is that CPUs perform at one spec, with one set of frequencies, until hitting TjMax, the maximum junction temperature. Ryzen 3000 is significantly different from past CPUs in this regard. Some well-known excursions from the traditional behavior do exist, but they are a separate mechanism. One example is Turbo Boost duration, which is explicitly set by the motherboard to limit how long an Intel CPU can hold its all-core Turbo. That is a different matter entirely from frequency scaling with temperature.
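The difference can be sketched as a toy model. This is not AMD’s actual boost algorithm, and every number below is hypothetical; it only illustrates the shape of the behavior described above, where a colder CPU boosts higher instead of holding one fixed clock until TjMax.

```python
# Toy model (hypothetical numbers, not AMD's real algorithm): effective
# boost clock declines as silicon temperature rises, rather than staying
# flat until a throttle point.

def ryzen_style_boost(temp_c, max_boost_mhz=4400, floor_mhz=4000,
                      cold_temp=60.0, hot_temp=95.0):
    """Return a modeled boost clock (MHz) for a given die temperature.

    Below cold_temp the CPU holds its maximum boost; between cold_temp
    and hot_temp the clock tapers linearly down toward floor_mhz.
    """
    if temp_c <= cold_temp:
        return max_boost_mhz
    if temp_c >= hot_temp:
        return floor_mhz
    span = (temp_c - cold_temp) / (hot_temp - cold_temp)
    return max_boost_mhz - span * (max_boost_mhz - floor_mhz)

for t in (50, 70, 95):
    print(t, ryzen_style_boost(t))
```

The point of the shape is that better cooling buys frequency continuously, not just at the throttle cliff.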
An Intel CPU is probably the easiest example of pre-Ryzen 3000 behavior. With Intel, there are only two real parameters to consider: the Turbo Boost duration limit, which we have a separate content piece on (linked above), and the power limit. If operating within spec, once past the turbo duration limit of roughly 90-120 seconds, the CPU will stick to one all-core clock speed for the entirety of its workload. You could be running at 90 degrees or 40 degrees; it’ll be the same frequency. Once you hit TjMax -- let’s say it’s 95 or 100 degrees Celsius -- there’s either a multiplier throttle or a thermal shutdown, depending on how the motherboard is configured to respond to TjMax.
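For contrast with the Ryzen behavior described earlier, the Intel-style behavior is a step function. Again, the clock and TjMax values here are made-up placeholders; only the shape matters.

```python
# Contrast sketch (hypothetical numbers): pre-Ryzen-3000 / Intel-style
# behavior is a step function -- one all-core clock until TjMax, then a
# hard multiplier throttle, regardless of how cool the CPU runs below it.

def intel_style_clock(temp_c, all_core_mhz=4700, throttle_mhz=3600,
                      tj_max=100.0):
    """Return the modeled clock: flat until TjMax, throttled past it."""
    return all_core_mhz if temp_c < tj_max else throttle_mhz

# 40 C and 90 C yield the same frequency; only crossing TjMax changes it.
assert intel_style_clock(40) == intel_style_clock(90) == 4700
print(intel_style_clock(40), intel_style_clock(99), intel_style_clock(100))
```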
Silicon quality and the so-called silicon lottery are often discussed in the industry, but it’s rare for anyone to have a large enough sample size to actually demonstrate what those phrases mean in practice. We asked Gigabyte to loan us as many of a single model of video card as they could so that we could demonstrate the card-to-card frequency variance at stock, the variation in overclocking headroom, and actual gaming performance differences from one card to the next. This helps to more definitively strike at the question of how much silicon quality can impact a GPU’s performance, particularly at stock, and also looks at memory overclocking and the range of FPS in gaming benchmarks with a highly controlled bench and a ton of test passes per device. Finally, we can test the theory of how much one reviewer’s GPU might vary from another’s when running initial review testing.
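The kind of card-to-card comparison described above boils down to simple descriptive statistics. The clock values below are invented for demonstration, not real measurements from the test:

```python
# Hedged illustration: given per-card average stock clocks from a
# multi-card test like the one described, basic statistics quantify
# the silicon lottery. All numbers below are hypothetical.
from statistics import mean, stdev

stock_clocks_mhz = [1890, 1905, 1875, 1920, 1898, 1883, 1911]  # hypothetical

spread = max(stock_clocks_mhz) - min(stock_clocks_mhz)
print(f"mean: {mean(stock_clocks_mhz):.1f} MHz")
print(f"stdev: {stdev(stock_clocks_mhz):.1f} MHz")
print(f"card-to-card spread: {spread} MHz")
```

The spread number is the one that answers the reviewer-variance question: it bounds how far apart two reviewers’ stock results could land purely from the lottery.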
AMD’s biggest ally for the RTX launch was NVIDIA, as the company crippled itself with unimplemented features and generational price creep out of the gate. With RTX Super, NVIDIA demonstrates that it has gotten the memo from the community and has driven down its pricing while increasing performance. Parts of the current RTX line will be phased out, with the newer, better parts coming into play and pre-empting AMD’s Navi launch. The 2070 Super is priced at $500, $50 above the 5700 XT, and the on-paper specs should put it about equivalent to an RTX 2080 in performance; it’s even using the TU104 RTX 2080 die, further reinforcing this likely position. The 2060 Super sees a better bin with more unlocked SMs on the GPU, improving compute capabilities and framebuffer capacity beyond the initial 2060. Both of these things spell an embarrassing scenario about to unfold for AMD’s Radeon VII card, but we’ll have to wait another week to see how it plays out for the as-yet unreleased Navi RX cards. There may be hope yet for AMD’s new lineup, but the existing lineup will face existential challenges from the freshly priced and face-lifted RTX Super cards. Today, we’re reviewing the new RTX Super cards with a fully revamped GPU testing methodology.
The first question is this: Why the name “Super?” Well, we asked, and nobody knows the answer. Some say that Jensen burned an effigy of a leather jacket and the word “SUPER” appeared in the toxic fumes above, others say that it’s a self-contained acronym. All we know is that it’s called “Super.”
Rather than pushing out Ti updates that co-exist with the original SKUs, NVIDIA is replacing the 2080 and 2070 SKUs with the new Super SKUs, while keeping the 2060 at $350. This is a “damned if you do, damned if you don’t” scenario. By pushing this update, NVIDIA shows that it’s listening – either to consumers or to competition – by bringing higher-performing parts to lower price categories. At the same time, people who recently bought RTX cards may feel burned or feel buyer’s remorse. This isn’t just a price cut, which is common, but a fundamental change to the hardware. The RTX 2070 Super uses TU104 for the GPU rather than TU106, bumping it to a modified 2080 status. The 2060 Super stays on TU106, but also sees changes to the number of active SMs and to memory capacity.
AMD’s X570 chipset marks the arrival of some technology that was first deployed on Epyc, although there it was done through the CPU, as Epyc has no traditional chipset. With the shift to PCIe 4.0, X570 motherboards have grown more complex than X370 and X470 boards, compounded by the difficulty of cooling X570’s higher power consumption. All of these changes mean it’s time to compare the differences between the X370, X470, and X570 motherboard chipsets, hopefully helping newcomers to Ryzen understand the changes.
The persistence of AMD’s AM4 socket, still slated for life through 2020, means that new CPUs are compatible with older chipsets (provided the motherboard makers update BIOS for detection). It also means that older CPUs (like the reduced-price R5 2600X) are compatible with new motherboards, if you for some reason ended up with that combination. The only real downside, aside from the potential cost of the latter option, is that new CPUs on old motherboards will mean no PCIe Gen4 support. AMD is disabling it in AGESA at launch, and unless a motherboard manufacturer finds the binary switch to flip in AGESA, it’ll be off for good. Realistically, this isn’t all that relevant: Most users will never touch the bandwidth of Gen4 for this round of products (in the future, maybe), and so the loss of Gen4 when running a new CPU on an old motherboard may be outweighed by the cost savings of keeping an already known-good board, provided the VRM is sufficient.
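The claim that most users won’t touch Gen4 bandwidth is easy to sanity-check with the published link rates. PCIe 3.0 runs at 8 GT/s per lane and PCIe 4.0 at 16 GT/s, both with 128b/130b encoding, so a Gen4 x16 slot roughly doubles Gen3’s usable throughput:

```python
# Rough arithmetic behind "most users will never touch Gen4 bandwidth":
# per-lane throughput from the transfer rate and 128b/130b encoding,
# then an x16 total for PCIe 3.0 vs. 4.0.

def pcie_bw_gbps(transfer_rate_gt, lanes=16):
    """Usable bandwidth in GB/s for a PCIe 3.0/4.0 link (128b/130b)."""
    return transfer_rate_gt * (128 / 130) / 8 * lanes

gen3_x16 = pcie_bw_gbps(8)    # PCIe 3.0: 8 GT/s per lane
gen4_x16 = pcie_bw_gbps(16)   # PCIe 4.0: 16 GT/s per lane
print(f"Gen3 x16: {gen3_x16:.2f} GB/s, Gen4 x16: {gen4_x16:.2f} GB/s")
```

Roughly 15.8 GB/s versus 31.5 GB/s per direction at x16 -- and current GPUs rarely saturate even the Gen3 figure, which is why losing Gen4 on an older board costs little in practice.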
AMD’s technical press event bore information for both AMD Ryzen and AMD Navi, including overclocking information for Ryzen, Navi base, boost, and average clocks, architectural information and block diagrams, product-level specifications, and extreme overclocking information for Ryzen with liquid nitrogen. We understand both lines better now than before and can brief you on what AMD is working on. We’ll start with Navi specs, die size, and top-level architectural information, then move on to Ryzen. AMD also talked about ray tracing during its tech day, throwing some casual shade at NVIDIA in so doing, and we’ll also cover that here.
First, note that AMD did not give pricing to the press ahead of its livestream at E3, so this content will be live right around when the prices are announced. We’ll try to update with pricing information as soon as we see it, although we anticipate our video’s comments section will have the information immediately. UPDATE: Prices are $450 for the RX 5700 XT, $380 for the RX 5700.
AMD’s press event yielded a ton of interesting, useful information, especially on the architecture side. There was some marketing screwery in there, but a surprisingly small amount for this type of event. The biggest example was a thermographic image of two heatsinks meant to show comparative CPU temperature, plotted on a range of just 23 to 27 degrees, which makes the delta look astronomically large despite falling within common measurement error. It got worse with a 28-degree result that exited the already silly 23-27 degree scale, making 28 degrees look like a massive overheat. Beyond the scale, the heatsink actually should be hot -- that means it’s working -- and taking a thermographic image of a shiny metal object mostly shows reflected room temperature or runs into emissivity issues; ultimately, they should just be showing junction temperature, anyway. This was our only major gripe with the event -- otherwise, the information was technical, detailed, and generally free of marketing BS. Not completely free of it, but mostly.
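The scale complaint is pure arithmetic: a small absolute delta consumes a huge fraction of a narrow colormap. Using the event’s 23-27 degree scale:

```python
# Why a 23-27 C scale exaggerates: the fraction of the colormap a given
# temperature delta occupies depends entirely on the scale width chosen.

def fraction_of_scale(delta_c, scale_min=23.0, scale_max=27.0):
    """Fraction of the thermographic color scale that a delta spans."""
    return delta_c / (scale_max - scale_min)

print(fraction_of_scale(2.0))  # a 2 C delta fills 50% of the colormap
```

On a more honest 20-90 degree scale, that same 2-degree delta would span under 3% of the colormap and look like what it is: noise-level.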
Let’s start with the GPU side.