Today's video showed some of the process of delidding the i9-7900X -- again, following our Computex delid -- and learning how to use liquid metal. It's a first step, and one we can learn from. The process has already been applied toward dozens of benchmarks, the charts for which are being built right now. We'll be working on the 7900X thermal and power content over the weekend, leading into a much larger content piece thereafter.
As for the 7900X, the delid was fairly straightforward: We used the same der8auer Delid DieMate tool from Computex, now with updated hardware. A few notes on this: After the first delid, we learned that the "clamp" (which presses vertically) is meant to reseal and hold the IHS + substrate still, and isn't needed for the delid itself. That was one of the newly learned aspects. The biggest point of education was the liquid metal application process, as LM gets everywhere and spreads sufficiently with nothing close to the size of 'blob' you'd use for TIM.
While traveling, the major story that unfolded – and then folded – pertained to the alleged unlocking of Vega 56 shaders, permitting the cards to turn into a “Vega 58” or “Vega 57,” depending. This ultimately traced back to a GPU-Z reporting bug, and users claiming performance increases hadn’t normalized for the clock change or higher power budget. Still, the BIOS flash will modify the DPM tables to adjust for higher clocks and permit greater voltage to the HBM2 memory. Of these changes, the latter is the only real, relevant one – clocks can be manually increased on V56, and the core voltage remains the same after a flash. Powerplay tables can be used to bypass BIOS power limits on V56, though a flash to the V64 BIOS permits a higher power budget.
Even with all this, it’s still impossible (presently) to flash a modified, custom BIOS onto Vega. We tried this during our Vega 56 review, finding that the card is locked down to prevent modding via an on-die security coprocessor, relegating our efforts to powerplay tables. Those powerplay tables did ultimately prove successful, as we recently published.
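To make the power-budget point concrete, here's a minimal arithmetic sketch of why a V64 BIOS flash raises the ceiling even at the same slider position. The board-power figures are illustrative assumptions, not measured or official values:

```python
# Why a V64 BIOS flash raises the V56 power ceiling: same slider, higher base.
# NOTE: these wattages are assumptions for illustration, not official specs.
V56_GPU_POWER_W = 165   # assumed stock GPU power limit for Vega 56
V64_GPU_POWER_W = 220   # assumed stock GPU power limit for Vega 64 (air)

def effective_limit(base_watts, slider_percent):
    """Power limit after applying the driver's +/- power slider."""
    return base_watts * (1 + slider_percent / 100)

# The same +50% slider yields very different ceilings:
print(effective_limit(V56_GPU_POWER_W, 50))  # 247.5
print(effective_limit(V64_GPU_POWER_W, 50))  # 330.0
```

The slider scales the BIOS base value, which is why bypassing that base (via powerplay tables or a flash) matters more than the slider itself.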
Our Destiny 2 GPU benchmark was conducted alongside our CPU benchmark, with the research for the GPU bench informing both. For GPU testing, we found Destiny 2 to be remarkably consistent between multiplayer and campaign performance, scaling all the way down to a 1050 Ti. Campaign performance was largely identical across levels, aside from a single level with high geometric complexity and heavy combat. We’ll recap some of that below.
For CPU benchmarking, GN’s Patrick Lathan used this research (starting one hour after the GPU bench began) to begin CPU tests. We ultimately found more test variance between CPUs – particularly at the low-end – when switching between campaign and multiplayer, and so much of this content piece will be dedicated to the research portion behind our Destiny 2 CPU testing. We cannot yet publish this as a definitive “X vs. Y CPU” benchmark, as we don’t have full confidence in the comparative data given Destiny 2’s sometimes nebulous behaviors.
As one example, Destiny 2 doesn’t utilize SMT with Ryzen, producing utilization charts like this:
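For readers who want to check their own logs for this pattern, here's a minimal sketch of what those charts show. It assumes per-logical-CPU utilization percentages where CPUs 2k and 2k+1 are SMT siblings on physical core k (a common enumeration, but an assumption here), and the sample data is hypothetical:

```python
def smt_siblings_idle(per_cpu_util, idle_threshold=5.0):
    """Given per-logical-CPU utilization (%), with CPUs (2k, 2k+1) assumed to
    be SMT siblings on physical core k, return the fraction of loaded cores
    whose second thread sits idle."""
    pairs = list(zip(per_cpu_util[0::2], per_cpu_util[1::2]))
    loaded = [(a, b) for a, b in pairs if a > 50.0]
    if not loaded:
        return 0.0
    idle_siblings = sum(1 for a, b in loaded if b < idle_threshold)
    return idle_siblings / len(loaded)

# Pattern resembling the chart described: primary threads busy, siblings idle.
sample = [90, 2, 85, 3, 88, 1, 92, 4, 87, 2, 91, 3, 86, 2, 89, 1]
print(smt_siblings_idle(sample))  # 1.0 -> SMT effectively unused
```

A game that does schedule across SMT threads would push that fraction toward zero.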
Since AMD’s high-core-count Ryzen lineup has entered the market, there seems to be an argument in every comment thread about multitasking and which CPUs handle it better. Our clean, controlled benchmarks don’t account for the demands of eighty browser tabs and Spotify running, and so we get constant requests to do in-depth testing on the subject. The general belief is that more threads are better able to handle more processes, a hypothesis that would increasingly favor AMD.
There are a couple of reasons we haven’t included tests like these all along: first, “multitasking” means something completely different to every individual, and second, adding uncontrolled variables (like bloatware and network-attached software) makes tests less scientific. Originally, we hoped this article would reveal any hidden advantages that might emerge between CPUs when adding “multitasking” to the mix, but it’s ended up as a thorough explanation of why we don’t do benchmarks like this. We’re primarily using the R3 1200 and G4560 to run these trials.
This is the kind of testing we do behind the scenes to build a new test plan, but often don’t publish. This time, however, we’re publishing the trials of finding a multitasking benchmark that works. The point is to demonstrate why it’s hard to trust “multitasking” tests, and why it’s hard to conduct them in a manner that’s representative of actual differences.
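As one example of how we decide whether a test "works," run-to-run variance can be quantified as a coefficient of variation over repeated trials; a test whose spread dwarfs the difference between CPUs can't produce a trustworthy comparison. The FPS figures below are hypothetical, purely to illustrate the method:

```python
import statistics

def coefficient_of_variation(samples):
    """Relative run-to-run spread: stdev as a fraction of the mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical average-FPS results from repeated runs of the same test:
controlled   = [118.2, 117.9, 118.5, 118.1, 117.8]   # clean OS image
multitasking = [112.4, 104.9, 117.8, 99.3, 110.1]    # background apps running

print(round(coefficient_of_variation(controlled), 4))
print(round(coefficient_of_variation(multitasking), 4))
```

When the second number is an order of magnitude larger than the first, any "CPU X beats CPU Y by 3%" conclusion drawn from the multitasking test is noise.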
In listening to our community, we’ve learned that a lot of people seem to think Discord is multitasking, or that a Skype window is multitasking. Here’s the thing: If you’re running Discord and a game and you’re seeing an impact to “smoothness,” there’s something seriously wrong with the environment. That’s not even remotely close to enough of a workload to trouble even a G4560. We’re not looking at such a lightweight workload here, and we’re also not looking at the “I keep 100 tabs of Chrome open” scenarios, as that’s wholly unreliable given Chrome’s unpredictable caching and behaviors. What we are looking at is 4K video playback while gaming and bloatware while gaming.
In this piece, the word “multitasking” will be used to describe “running background software while gaming.” The term "bloatware" is being used loosely to easily describe an unclean operating system with several user applications running in the background.
This week’s hardware news recap goes over some follow-up AMD coverage, closes the storyline on Corsair’s partial acquisition, and talks new products and industry news. We open with AMD RX Vega mining confirmations and talk about the “packs” – AMD’s discount bundles that are supposed to help get cards into the hands of gamers.
The RX Vega discussion is mostly to confirm an industry rumor: We’ve received reports from contacts at AIB partners that RX Vega will be capable of mining at 70MH/s, roughly double current RX 580 numbers. This will lead to more limited supply of RX Vega cards, we’d suspect, but AMD’s been trying to plan for this with their “bundle packs” – purchasers can spend an extra $100 to get discounts. Unfortunately, nothing says those discounts must be spent, and an extra $100 isn’t going to stop miners who are used to paying 2x prices, anyway.
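Back-of-envelope math shows why the $100 premium is unlikely to deter miners. The RX 580 hashrate and the revenue-per-MH/s figure below are illustrative assumptions, not measured data:

```python
# Rough numbers on the mining-vs-gamer dynamic; every figure here is an
# illustrative assumption except the 70MH/s report referenced in the story.
rx_vega_mhs     = 70.0    # hashrate reported by AIB contacts
rx_580_mhs      = 30.0    # assumed ballpark for a current RX 580
pack_premium    = 100.0   # extra cost of AMD's bundle "pack" (USD)
usd_per_mhs_day = 0.05    # assumed daily mining revenue per MH/s

speedup = rx_vega_mhs / rx_580_mhs
payback_days = pack_premium / (rx_vega_mhs * usd_per_mhs_day)
print(round(speedup, 2), round(payback_days, 1))
```

Under these assumptions, the bundle premium amortizes in about a month of mining, which is why the "pack" discounts alone won't keep cards in gamers' hands.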
Show notes below.
Our recent R7 1700 vs. i7-7700K streaming benchmarks came out in favor of the 1700, as the greater core count made it far easier to handle the simultaneous demands of streaming and gameplay without any overclocking or fiddling with process priority. Streaming isn’t the whole story, of course, and there are many situations (i.e. plain old gaming) where speed is a more valuable resource than sheer number of threads, as seen in our original 1700 review.
Today, we’re testing the R7 1700 and i7-7700K at 1440p 144Hz. We know the i7-7700K is a leader in gaming performance from our earlier CPU-bottlenecked 1080p testing; that isn’t the point here. We’ve also pitted these chips against each other in VR testing, where our conclusion was that GPU choice mattered far more, since both CPUs delivered 90FPS equally well (and were effectively identical). This newest test is less of a competition and more of a “can the 1700 do it too” scenario. The 1700 has features that make it attractive for casual streaming or rendering, but that doesn’t mean customers want to sacrifice smooth 144Hz gameplay in pure gaming scenarios. As we explain thoroughly in the below video, there are different uses for different CPUs; it’s not quite as simple as “that one’s better,” and more accurately boils down to “that one’s better for this specific task, provided said task is your biggest focus.” Maybe that’s the R7 1700 for streaming while gaming, maybe that’s the 7700K for gaming -- but what we haven’t tested is whether the 1700 can keep up at 144Hz with higher quality settings. We already put the media statements (including our own) that the 1700 should be “better at streaming” to the test, finding that they hold. Now it’s time to test the statements that the 7700K is “better at 144Hz” gaming.
This series is an ongoing venture in our follow-up tests to illustrate that, yes, the two CPUs can both exist side-by-side and can be good at different things. There’s no shame in being a leader in one aspect but not the other; leading in both is generally impossible given current manufacturing and engineering limitations, anyway. The 7700K was the challenger in the streaming benchmarks, and today it will be challenged by the inbound R7 1700 for 144Hz gaming.
People like to make things a bloodbath, but to remind everyone once again: This is less of a “versus” scenario and more of a “can they both do it?” scenario.
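For context, a 144Hz target boils down to a fixed per-frame time budget, and "keeping up" means frametimes staying under it. A minimal sketch, with hypothetical frametime data:

```python
# A 144Hz target is really a per-frame time budget:
BUDGET_MS = 1000.0 / 144  # ~6.94ms per frame

def frames_over_budget(frametimes_ms, budget=BUDGET_MS):
    """Fraction of frames that miss the refresh window."""
    return sum(t > budget for t in frametimes_ms) / len(frametimes_ms)

# Hypothetical frametime sample (ms) from a capture:
sample = [6.5, 6.7, 6.9, 7.2, 6.6, 6.8, 8.1, 6.4, 6.6, 6.9]
print(round(frames_over_budget(sample), 2))  # 0.2
```

Average FPS can sit comfortably above 144 while a meaningful fraction of frames still blows the ~6.94ms budget, which is why frametime consistency matters more than the headline average in this kind of test.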
The Ryzen 3 CPUs round out AMD’s initial Ryzen offering, with the last remaining sector to be covered by the impending Threadripper roll-out. Even before digging into the numbers of these benchmarks, AMD’s R3 & R5 families seem to have at least partly influenced competitive pricing: The Intel i3-7350K is now $150, down from its $180 perch. We liked the 7350K as a CPU and were excited about its overclocking headroom, but found its higher price untenable for an i3 CPU given then-neighboring i5 alternatives.
Things have changed significantly since the i3-7350K review. For one, Ryzen now exists on market – and we’ve awarded the R5 1600X with an Editor’s Choice award, deferring to the 1600X over the i5-7600K in most cases. The R3 CPUs are next on the block, and stand to challenge Intel’s freshly price-reduced i3-7350K in budget gaming configurations.
“Good for streaming” – a phrase almost universally attributed to the R7 series of Ryzen CPUs, like the R7 1700 ($270 currently), but with limited data-driven testing to definitively prove the theory. Along with most other folks in the industry, we supported Ryzen as a streamer-oriented platform in our reviews, but we based this assessment on an understanding of Ryzen’s performance in production workloads. Without actual game stream benchmarking, it was always a bit hazy just how the R7 1700 and the i7-7700K ($310 currently) would perform comparatively in game live-streaming.
This new benchmark looks at the AMD R7 1700 vs. Intel i7-7700K performance while streaming, including stream output/framerate, drop frames, streamer-side FPS, power consumption, and some brief thermal data. The goal is to determine not only whether one CPU is better than the other, but whether the difference is large enough to be potentially paradigm-shifting. The article explores all of this, though we’ve also got an embedded video below. If video is your preferred format, consider checking the article conclusion section for some additional thoughts.
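For reference, the dropped-frame metric we report is just the share of frames the encoder failed to send. A minimal sketch with hypothetical numbers:

```python
def stream_health(frames_sent, frames_dropped):
    """Dropped-frame percentage, as typically reported by streaming software."""
    total = frames_sent + frames_dropped
    return 100.0 * frames_dropped / total

# Hypothetical ten-minute 60FPS stream segment:
sent, dropped = 35200, 800
print(round(stream_health(sent, dropped), 2))  # 2.22
```

When a CPU can't keep up with simultaneous encoding and gameplay, this percentage climbs, and it's visible to viewers as stutter on the stream side even when streamer-side FPS looks fine.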
Our newest revisit could also be considered our oldest: the Nehalem microarchitecture is nearly ten years old now, having launched in November 2008 after an initial showing at Intel’s 2007 Developer Forum, and we’re back to revive our i7-930 in 2017.
The sample chosen for these tests is another from the GN personal stash: a well-traveled i7-930, originally from Steve’s own computer, that saw service in some of our very first case reviews but has been mostly relegated to the shelf o’ chips since 2013. The 930 was one of the later Nehalem CPUs, released in Q1 2010 for $294, exactly one year ahead of the still-popular Sandy Bridge architecture and the i7-2600K, which we’ve already revisited in detail.
Sandy Bridge was a huge step for Intel, but Nehalem processors were actually the first generation branded with the now-familiar i5 and i7 naming convention (no i3s, though). A couple of features make these CPUs worth a look today: Hyper-Threading was (re)introduced with the i7 chips, meaning even the oldest generation of i7s has 4C/8T, and overclocking could offer huge leaps in performance, limited more often by heat and safe voltages than by software stability or artificial caps.
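Because the i7-930's upward multiplier is locked, overclocking it means raising the base clock rather than the multiplier. A quick sketch of that math, using the chip's stock 21x133MHz configuration:

```python
# Nehalem core clocks derive from base clock (BCLK) x multiplier; the i7-930
# runs 21 x 133MHz (~2.8GHz) stock, so overclocking is done via BCLK.
STOCK_BCLK_MHZ = 133
STOCK_MULT     = 21

def bclk_for_target(target_mhz, multiplier=STOCK_MULT):
    """BCLK needed to hit a target core clock at a fixed multiplier."""
    return target_mhz / multiplier

print(round(STOCK_BCLK_MHZ * STOCK_MULT / 1000, 2))  # ~2.79 (GHz, stock)
print(round(bclk_for_target(4000), 1))               # ~190.5 (MHz BCLK for 4.0GHz)
```

Raising BCLK also drags the uncore, QPI, and memory clocks up with it, which is part of why these overclocks end up bounded by heat and voltage rather than by any artificial cap.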
This week's hardware news recap primarily focuses on industry topics, like new NAND from Toshiba and Western Digital, and a new SSD from Intel (the first 64-layer VNAND SSD). A few other topics sneak in, like AMD's Ryzen Pro CPU line, a Vega reminder (in the video), the death of Lexar, and a few gaming peripherals.
Through the weekend, we'll be posting our Zotac 1080 Ti Amp Extreme review, the first part of our AMD Vega: Frontier Edition Hybrid mod, and a special benchmark feature in our highly acclaimed "Revisit" series.
In the meantime, here's the last week of HW news recapped: