Our 7900X delidding benchmarks weren’t published by coincidence: Today, we’re expanding on our liquid metal vs. Intel TIM testing with the new Intel i9-7960X and i9-7980XE CPUs, the 16C and 18C Skylake-X parts, respectively. These CPUs are Intel’s highest multithreaded performers in this segment, and are priced to match that status – the 7960X costs $1700, with the 7980XE at $2000.

Rather than focusing entirely on delidding and thermal benchmarks, we’ll also be including power testing and some production benchmarks (Blender, Premiere). This review of the Intel i9-7960X and i9-7980XE will primarily test thermals, power, delidded thermals, liquid metal thermals, rendering benchmarks, and some synthetics.

Recapping the previous test approach for delidding & liquid metal:

Today's video showed some of the process of delidding the i9-7900X -- again, following our Computex delid -- and learning how to use liquid metal. It's a first step, and one that we can learn from. The process has already been applied toward dozens of benchmarks, the charts for which are in the creation stage right now. We'll be working on the 7900X thermal and power content over the weekend, leading to a much larger content piece thereafter. It'll all be focused on thermals and power.

As for the 7900X, the delid was fairly straightforward: We used the same Der8auer Delid DieMate tool that we used at Computex, but now with updated hardware. A few notes on this: After the first delid, we learned that the "clamp" (pressing vertically) is meant to reseal and hold the IHS + substrate still. It is not needed for the actual delid process, so that's one of the newly learned aspects. The biggest point of education was the liquid metal application process, as LM gets everywhere and spreads sufficiently without anything close to the 'blob' size you'd use for TIM.

While traveling, the major story that unfolded – and then folded – pertained to the alleged unlocking of Vega 56 shaders, supposedly turning the cards into a “Vega 58” or “Vega 57,” depending on the card. This ultimately traced back to a GPU-Z reporting bug, and users claiming performance increases hadn’t normalized for the clock change or higher power budget. Still, the BIOS flash will modify the DPM tables to allow higher clocks and permit greater voltage to the HBM2 memory. Of these changes, the latter is the only real, relevant one – clocks can be manually increased on V56, and the core voltage remains the same after a flash. Powerplay tables can be used to bypass BIOS power limits on V56, though a flash to a V64 BIOS permits a higher power budget.
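The normalization step those users skipped is simple: divide performance by clock speed before comparing. A minimal sketch, with all FPS and clock figures hypothetical for illustration only:

```python
# Why raw FPS gains after a V56 -> V64 BIOS flash must be normalized for
# clock speed before crediting "unlocked shaders." Numbers are hypothetical.

def perf_per_mhz(fps: float, core_mhz: float) -> float:
    """Crude clock-normalized performance figure."""
    return fps / core_mhz

stock = perf_per_mhz(fps=100.0, core_mhz=1470.0)    # hypothetical stock V56
flashed = perf_per_mhz(fps=106.0, core_mhz=1560.0)  # hypothetical post-flash

# If the normalized figures come out ~equal, the uplift is explained by the
# higher clock and power budget, not by extra shaders.
gain = flashed / stock - 1.0
print(f"clock-normalized gain: {gain:+.1%}")
```

With these example numbers, the clock-normalized gain is effectively zero – which is exactly what the GPU-Z bug story turned out to be.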

Even with all this, it’s still impossible (presently) to flash a modified, custom BIOS onto Vega. We tried this upon review of Vega 56, finding that the card was locked-down to prevent modding. This uses an on-die security coprocessor, relegating our efforts to powerplay tables. Those powerplay tables did ultimately prove successful, as we recently published.

Everyone talks a big game about how they don’t care about power consumption. We took that comment to the extreme, using a registry hack to give Vega 56 enough extra power to kill the card, if we wanted, and a Floe 360mm CLC to keep temperatures low enough that GPU diode reporting inaccuracies emerge. “I don’t care about power consumption, I just want performance” is now met with exactly that – 100% more power and an overclock to 1742MHz core. We've got room to do 200% power, but things would start popping at that point. The Vega 56 Hybrid mod is our most heavily modded version of the Hybrid series to date, and leverages powerplay table registry changes to provide that additional power headroom. This is an alternative to BIOS flashing, which is limited to signed BIOS files (like the V64 BIOS on V56, though we had issues flashing V64L onto V56). Last we attempted it, a modified BIOS did not work. Powerplay tables do work, though, and let us raise the power target beyond V56’s artificial power limitation.

The limitation on power provisioned to the V56 core is, we believe, purely to prevent V56 from too easily outmatching V64 in performance. The card’s BIOS won’t natively allow more than 300-308W down the PCIe cables, even though official BIOS versions for V64 cards can support 350~360W. The VRM itself easily sustains 360W, and we’ve tested it as handling 406W without popping a FET. 400W is probably pushing what’s reasonable, but limiting V56 to ~300W, when an additional 60W is fully within the capabilities of the VRM & GPU, is a means of capping V56 performance so it doesn’t compete with V64.

We fixed that.

AMD’s CU scaling has never been that impactful to performance – clock speed closes most gaps with AMD hardware. Even without the extra shaders of V64, we can outperform V64’s stock performance, and we’ll soon find out how we do versus V64’s overclocked performance. That’ll have to wait until after PAX, but it’s something we’re hoping to further study.

This is just a quick PSA.

We shot an off-the-cuff video about software misreporting Vega’s frequency, to the extent that a “1980MHz overclock” is possible under the misreported conditions. The entire point of the video was to bring awareness to a bug in either software or drivers – not to point blame at AMD – explicitly to ensure consumers understand that the numbers may be inaccurate. Some reviews even cited overclocks of “1980MHz,” but overlooked the fact that scaling ceases around the threshold where the reporting bugs out.

When interviewing EVGA Extreme OC Engineer “Kingpin,” the term “dailies” came up – as in daily users, or “just gamers,” or generally people who don’t use LN2 to overclock their GPU. The GTX 1080 Ti Kingpin card is not a device built for “dailies,” but rather for extreme overclockers – people who are trying to break world records.

Cards like this – the Lightning would be included – do have a reason to exist. Criticism online sometimes calls such devices “pointless” for delivering the same overall out-of-box experience as nearly any other 1080 Ti, but those criticizing aren’t looking at it from the right perspective. A Kingpin, Lightning, or other XOC card is purchased to eliminate the need to perform hard mods to get a card up to speed. It’s usable out of the box as an XOC tool.

Professional overclocker Toppc recently set another world record for DDR4 SDRAM frequency. Using a set of G.SKILL DDR4 sticks (an unidentified kit from the Trident Z RGB line) on an MSI X299 Gaming Pro Carbon AC motherboard, Toppc was able to achieve a 5.5 GHz DDR4 frequency—approximately a 500 MHz improvement over his record from last year.

Toppc’s new record is verified by HWBot, accompanied by a screenshot of CPU-Z and Toppc’s extreme cooling setup, which involved LN2. Although an exact temperature was not provided, and details on the aforementioned G.SKILL kit are scant, we do know that the modules used Samsung 8GB ICs. Based on the limited information, we can infer that this is likely a new product from G.SKILL, as the company announced new memory kits at Computex.

Gigabyte recently sponsored an extreme overclocking event throughout Computex, where their resident overclockers HiCookie and Sofos teamed with TeamAU’s Dinos22, Youngpro, and SniperOZ. The teams worked to overclock the Intel i7-7740X KBL-X CPU on the new X299 platform.

Gigabyte’s team was able to hit the 7.5GHz mark with the i7-7740X, with the help of LHe (liquid helium) – allegedly $20,000 worth. For perspective: when we spoke off-camera with Der8auer at the GSkill booth, we learned that LHe costs him about $4.40 per second in his region. With the use of LHe, the team of overclockers was able to drop temperatures to -250° Celsius. Unlike LN2, which boils at around -196° Celsius, LHe has a boiling point of around -269° Celsius, meaning it can reach temperatures far lower than LN2 allows.
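A back-of-envelope check on those figures: at roughly $4.40 per second, $20,000 of LHe buys a surprisingly short window.

```python
# How long $20,000 of liquid helium lasts at Der8auer's quoted regional rate.
lhe_cost_per_s = 4.40   # USD per second, as quoted to us off-camera
budget_usd = 20_000.0   # the alleged spend for the event

seconds = budget_usd / lhe_cost_per_s
print(f"~{seconds:.0f} s, or roughly {seconds / 60:.0f} minutes of pour time")
```

That works out to a bit over an hour of continuous pouring – which helps explain why LHe sessions are so tightly scheduled.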

With the employed LHe, Gigabyte was able to set four launch-day records across 3DMark03, 3DMark06, and Aquamark. All scores were achieved using the Intel i7-7740X and the Gigabyte X299-SOC Champion motherboard. Memory and GPUs diverge a bit for different benchmarks, as can be seen below.

Following our recent delidding of the Intel i9-7900X, we received a few questions asking for the die size and CPU size of the new 10C/20T Intel CPU. We decided to return to the GSkill booth, where overclocker Der8auer helped us delid the CPU, to take some measurements. The original delidding video is here.

On to the sizes: This was measured with a media gift ruler on a show floor, so it’s accurate enough. Millimeters are millimeters.

MSI’s flagship GTX 1080 Ti Lightning GPU made an appearance at the company’s Computex booth this year, where we were able to get hands-on with the card and speak with PMs about VRM and cooling solutions. The 1080 Ti Lightning is an OC-targeted card, as indicated by its LN2 BIOS switch, and will compete with other current flagships (like the Kingpin that we just covered). The Lightning does not yet have a price, but we know the core details about cooling and power.

Starting with cooling: MSI’s 1080 Ti Lightning uses a finned baseplate (think “pin fins” from ICX) to provide additional surface area for dissipation of VRM/VRAM component heat. This baseplate covers the usual areas of the board, but is accompanied by a blackout copper heatpipe over the MOSFETs & driver IC components for heat sinking of power modules. We’ve seen this design become more widespread lately, and have found it to be effective for cooling VRM devices. The heatpipe is cooled by the Lightning’s 3-fan solution, as is the rest of the thick finstack above the custom PCB.
