Today's video showed some of the process of delidding the i9-7900X -- again, following our Computex delid -- and learning how to use liquid metal. It's a first step, and one we can learn from. The process has already been applied toward dozens of benchmarks, the charts for which are being created now. We'll be working on the 7900X thermal and power content over the weekend, leading to a much larger content piece thereafter, all focused on thermals and power.

As for the 7900X, the delid was fairly straightforward: We used the same Der8auer Delid DieMate tool that we used at Computex, but now with updated hardware. A few notes on this: After the first delid, we learned that the "clamp" (which presses vertically) is meant to reseal and hold the IHS + substrate still; it isn't needed for the actual delid process itself. The biggest point of education was the liquid metal application process, as LM gets everywhere and spreads sufficiently on its own -- nothing close to the size of 'blob' you'd use for TIM is necessary.

While traveling, the major story that unfolded – and then folded – pertained to the alleged unlocking of Vega 56 shaders, permitting the cards to turn into a “Vega 58” or “Vega 57,” depending on the card. This ultimately came down to a GPU-Z reporting bug, and users claiming performance increases hadn’t normalized for the clock change or higher power budget. Still, the BIOS flash will modify the DPM tables to adjust for higher clocks and permit higher HBM2 voltage. Of these changes, the latter is the only real, relevant one – clocks can be manually increased on V56, and the core voltage remains the same after a flash. Powerplay tables can be used to bypass BIOS power limits on V56, though a flash to the V64 BIOS permits a higher power budget.

Even with all this, it’s still impossible (presently) to flash a modified, custom BIOS onto Vega. We tried this during our Vega 56 review, finding that the card is locked down to prevent modding – Vega uses an on-die security coprocessor for this – which relegated our efforts to powerplay tables. Those powerplay tables did ultimately prove successful, as we recently published.
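For reference, the “soft” powerplay-table approach feeds the driver a replacement table through the Windows registry rather than flashing the BIOS. Below is a minimal sketch of the idea in Python – the driver instance key (0000) and the pre-edited table.bin file are illustrative assumptions, not our exact procedure:

```python
# Minimal sketch of applying a "soft PowerPlay" table on Windows (run as
# administrator). Assumes the Vega GPU sits at driver instance key 0000 and
# that table.bin holds a PowerPlay table edited with a third-party editor.
# Illustrative only -- writing bad values here can destabilize the card.
import winreg

GPU_KEY = (r"SYSTEM\CurrentControlSet\Control\Class"
           r"\{4d36e968-e325-11ce-bfc1-08002be10318}\0000")  # assumed instance

with open("table.bin", "rb") as f:  # hypothetical edited table dump
    table = f.read()

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, GPU_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    # The driver reads this value at load, overriding BIOS power limits.
    winreg.SetValueEx(key, "PP_PhmSoftPowerPlayTable", 0,
                      winreg.REG_BINARY, table)
# A driver restart or reboot is required for the table to take effect.
```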

Product photos and renders for ASRock’s alleged Coffee Lake Z370 motherboards have leaked through Videocardz, detailing the ASRock lineup from top-to-bottom. The reported offering from ASRock includes a Z370 “Killer” motherboard (bearing similar branding to Fatal1ty boards), the Z370 Taichi high-end board, Z370M Pro4 Micro-ATX board, Z370M-ITX AC wireless board, and lower-end Z370 Extreme4 and Pro4 motherboards (both ATX).

Our Destiny 2 GPU benchmark was conducted alongside our CPU benchmark, and much of the research from the GPU bench carried over to the CPU tests. For GPU testing, we found Destiny 2 to be remarkably consistent between multiplayer and campaign performance, scaling all the way down to a 1050 Ti. This remained true across the campaign, which performed largely identically across all levels, aside from a single level with high geometric complexity and heavy combat. We’ll recap some of that below.

For CPU benchmarking, GN’s Patrick Lathan used this research (starting one hour after the GPU bench began) to begin CPU tests. We ultimately found more test variance between CPUs – particularly at the low-end – when switching between campaign and multiplayer, and so much of this content piece will be dedicated to the research portion behind our Destiny 2 CPU testing. We cannot yet publish this as a definitive “X vs. Y CPU” benchmark, as we don’t have full confidence in the comparative data given Destiny 2’s sometimes nebulous behaviors.

For instance, Destiny 2 doesn’t utilize SMT with Ryzen, producing utilization charts where every second logical thread sits mostly idle:
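The utilization chart itself is in the article; as a rough illustration of how per-thread data like that can be logged during gameplay, here’s a minimal sketch using the third-party psutil package (the 1Hz sample rate and adjacent SMT-sibling core ordering are assumptions):

```python
import psutil

# Collect one utilization sample per second for each logical core while the
# game runs. On Ryzen, SMT siblings are typically adjacent (0/1, 2/3, ...),
# so an SMT-blind title shows every second column sitting near idle.
samples = []
for _ in range(60):  # one minute of 1 Hz samples
    samples.append(psutil.cpu_percent(interval=1.0, percpu=True))

for core, utils in enumerate(zip(*samples)):
    print(f"logical core {core:2d}: avg {sum(utils) / len(utils):5.1f}%")
```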

Since AMD’s high-core-count Ryzen lineup has entered the market, there seems to be an argument in every comment thread about multitasking and which CPUs handle it better. Our clean, controlled benchmarks don’t account for the demands of eighty browser tabs and Spotify running, and so we get constant requests to do in-depth testing on the subject. The general belief is that more threads are better able to handle more processes, a hypothesis that would increasingly favor AMD.

There are a couple of reasons we haven’t included tests like these all along: first, “multitasking” means something completely different to every individual, and second, adding uncontrolled variables (like bloatware and network-attached software) makes tests less scientific. Originally, we hoped this article would reveal any hidden advantages that might emerge between CPUs when adding “multitasking” to the mix, but it ended up as a thorough explanation of why we don’t do benchmarks like this. We’re primarily using the R3 1200 and G4560 to run these trials.

This is the kind of testing we do behind the scenes to build a new test plan, but often don’t publish. This time, however, we’re publishing the trials of finding a multitasking benchmark that works. The point is to demonstrate why “multitasking” tests are hard to trust, and why they’re hard to conduct in a manner that’s representative of actual differences.

In listening to our community, we’ve learned that a lot of people seem to think Discord is multitasking, or that a Skype window is multitasking. Here’s the thing: If you’re running Discord and a game and you’re seeing an impact on “smoothness,” something is seriously wrong with the environment – that’s not even remotely close to enough of a workload to trouble even a G4560. We’re not looking at such a lightweight workload here, and we’re also not looking at the “I keep 100 tabs of Chrome open” scenarios, as that’s wholly unreliable given Chrome’s unpredictable caching and behaviors. What we are looking at is 4K video playback while gaming and bloatware while gaming.

In this piece, the word “multitasking” will be used to describe “running background software while gaming.” The term "bloatware" is being used loosely to easily describe an unclean operating system with several user applications running in the background.
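To make the scenario concrete, one test pass generally takes the shape sketched below – the player path, video file, and benchmark command are placeholders for illustration, not our actual harness:

```python
import subprocess
import time

# Background load: 4K video playback (player path and test file are
# hypothetical placeholders for whatever the test plan specifies).
video = subprocess.Popen([r"C:\Program Files\VideoLAN\VLC\vlc.exe",
                          r"C:\bench\sample_4k60.mp4"])
time.sleep(10)  # let playback settle before frametime logging begins

# Foreground load: an automated, repeatable benchmark pass (placeholder).
subprocess.run([r"C:\bench\run_game_benchmark.bat"], check=True)

video.terminate()  # stop the background workload between passes
```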

Creative Assembly has been busy with the Total War: Warhammer franchise lately. The second game of the planned trilogy is coming on September 28th, and in preparation, a host of updates and bugfixes have been added to the original game, along with the new Norsca DLC faction.

One part of these updates quietly replaced the default benchmark packaged with the game, which we’ve regularly included in our current cycle of CPU reviews. It was a short snippet of a battle between greenskin and Imperial armies, shot mostly from above, with some missile trails and artillery thrown in. Its advantages were that it was fairly CPU-intensive, came from a modern game people are still interested in, and was extremely easy to run (as it was automated).

Before Vega buried Threadripper, we noted interest in conducting a simple A/B comparison between Noctua’s new TR4-sized coldplate (the full-coverage plate) and their older LGA115X-sized coldplate. Clearly, the LGA115X cooler isn’t meant to be used with Threadripper – but it offered a unique opportunity, as the two units are largely the same aside from coldplate coverage. This grants an easy means to run an A/B comparison; although we can’t draw conclusions to all coldplates and coolers, we can at least see what Noctua’s efforts did for them on the Threadripper front.

Noctua’s NH-U14S cooler possesses the same heatpipe count and arrangement, the same (or remarkably similar) fin stack, and the same fan – though we controlled for that by using the same fan for each unit. The only difference, as far as we can tell, is the coldplate, so we’re able to more easily measure performance deltas resulting primarily from the coldplate coverage change. Noctua’s LGA115X version, clearly not made for TR4, wouldn’t cover the entire die area of even one module under the IHS. The smaller plate covers at most about 30% of the die area, just eyeballing it, and doesn’t make direct contact with the rest. This is less coverage than the Asetek CLCs, which at least make contact with the entire TR4 die area, if not the entire IHS. In response, Noctua modified its unit to equip a full-coverage plate, including the unique mounting hardware that TR4 needs.

The LGA115X NH-U14S doesn’t natively mount to Threadripper motherboards. We drilled a couple of holes into the NH-U14S TR4 cooler’s mounting hardware, aligned them with the LGA115X cooler’s holes, and routed screws and nuts through. A rubber bumper was placed between the mounting hardware and the base of the cooler to help ensure even and adequate mounting pressure. We show a short clip of the modding process in our above video.

Computers have come a long way since their inception. Some of the first computers (built by the military) used electromagnets to calculate torpedo trajectories. Since then, computers have become almost incomprehensibly more powerful and accessible, to the point that virtual reality headsets aren’t even science fiction anymore.

In gaming PCs, these power increases have often been put toward higher FPS, faster game mechanics, and more immersive graphics settings. But the computational power in modern PCs can be used for a variety of applications: many uses – design, communication, servers, and so on – are well known, but one lesser-known use is contributing to distributed computing programs such as BOINC and Folding@home.

BOINC (Berkeley Open Infrastructure for Network Computing) and Folding@home (also referred to as FAH or F@H) are research programs that utilize distributed computing to provide researchers with large amounts of computational power without the need for supercomputers. BOINC allows users to support a variety of projects (including searching for extraterrestrial life, running molecular simulations, predicting the climate, etc.). In contrast, Folding@home is run by Stanford and is a single program that simulates protein folding.

First, we’ll discuss what distributed computing is (and its relation to traditional supercomputers), then we’ll cover some notable projects we’re fond of.
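As a toy illustration of the model these projects use – a coordinator splits a job into independent work units, volunteer machines crunch them, and results are reassembled – here is a minimal Python sketch (real projects add validation, redundancy, and credit accounting, none of which is shown):

```python
from concurrent.futures import ProcessPoolExecutor

def work_unit(chunk):
    # Stand-in for a real science kernel (e.g., one folding trajectory step).
    return sum(x * x for x in chunk)

def main():
    job = list(range(1_000_000))
    # Split the job into independently computable work units.
    chunks = [job[i:i + 100_000] for i in range(0, len(job), 100_000)]
    # Each worker process stands in for a volunteer machine crunching a unit.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(work_unit, chunks))
    print("reassembled result:", sum(results))

if __name__ == "__main__":
    main()
```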

Visiting AMD during the Threadripper announcement event gave us access to a live LN2-overclocking demonstration, where one of the early Threadripper CPUs hit 5.2GHz on LN2 and scored north of 4000 points in Cinebench. Overclocking was performed on two systems, one using an internal engineering sample motherboard and the other using an early ASRock board. LN2 pots will be made available by Der8auer and KINGPIN, though the LN2 pots used by AMD were custom-made for the task, given that the socket is completely new.

The launch of Threadripper marks a move closer to AMD’s starting point for the Zen architecture. Contrary to popular belief, AMD did not start its plans with desktop Ryzen and then glue modules together until Epyc was created; instead, the company started with an MCM CPU more similar to Epyc, then worked its way down to desktop Ryzen CPUs. Threadripper is the fruition of this MCM design on the HEDT side, and it benefits from months of maturation for both the platform and AMD’s support teams. Ryzen was rushed in the weeks leading to launch, which showed in both communication clarity and platform support in the early days. As things smoothed over and AMD resolved many of its communication and platform issues, Threadripper became the beneficiary of those improvements.

“Everything we learned with AM4 went into Threadripper,” one of AMD’s representatives told us, and that became clear as we continued to work on the platform. During the test process for Threadripper, work felt considerably more streamlined and remarkably free of the validation issues that had once plagued Ryzen. The fact that we were able to instantly boot with 3200MHz (and 3600MHz) memory gave hope that Threadripper would, in fact, be the beneficiary of Ryzen’s growing pains.

Threadripper will ship in three immediate SKUs: the 16-core/32-thread 1950X, the 12-core/24-thread 1920X, and the 8-core/16-thread 1900X.

Respectively, these units are targeted at price points of $1000, $800, and $550, making them direct competitors to Intel’s new Skylake-X family of CPUs. The i9-7900X is the flagship – for now, anyway – that’s being most heavily challenged by AMD’s Threadripper HEDT CPUs. Today's review looks at the AMD Threadripper 1950X and 1920X CPUs in livestreaming benchmarks, Blender, Premiere, power consumption, temperatures, gaming, and more.
