Intel’s Hades Canyon NUC is well-named: it’s either a reference to hell freezing over, as AMD and Intel worked together on a product, or to the combined heat of Vega and an i7 in a box measuring 8.5” x 5.5”. Our review of Hades Canyon looks at overclocking potential (preempting a bigger piece to come) and benchmarks the combined i7 CPU and Vega M GPU for gaming and production performance. We’re also looking at thermal performance and noise, as usual. As a unit, it’s one of the smallest, most powerful systems you can get on the consumer market right now. We’ll see if it’s worth it.
There are two primary SKUs for the Intel NUC on Newegg, both coming out on April 30th. The unit that most closely resembles ours is $1000, and includes the Intel i7-8809G with 8MB of cache and a limited-core Turbo up to 4.2GHz. The CPU is unlocked for overclocking. It’s coupled with an AMD Vega M GH GPU with 4GB of high-bandwidth memory, also overclockable. The kit does not include system memory or an SSD; you’re on your own for those, as it’s effectively a barebones kit. If you buy straight from Intel’s SimplyNUC website, the NUC8i7HVK that we reviewed comes fully-configured for $1200, including 8GB of DDR4 and a 128GB SSD with Windows 10. Not unreasonable, really.
The past week of hardware news primarily centers around nVidia and AMD, both of whom are launching new GPUs under the same names as existing lines. This struck a chord with us, because the GT 1030 variant silently launched by nVidia follows the exact same pattern AMD took with its rebranded RX 460s, sold as “RX 560s” despite significant hardware changes underneath.
To be very clear, we strongly disagree with releasing a new, worse product under the same name and badging as the original. It is entirely irrelevant how close that product comes to the original in performance: it’s not the same product, and that’s all that matters. It deserves a different name.
We spend most of the news video ranting about GPU naming by both companies, but also include a couple of other industry topics. Find the show notes below, or check the video for the more detailed story.
Intel has slowly been deploying mitigations for Spectre/Meltdown for recent platforms. In the most recent microcode revision guidance, Intel has indicated it will not deploy any microcode mitigations for the recently disclosed flaws for older processor platforms. Intel cited the following reasons:
Recent advancements in graphics processing technology have permitted software and hardware vendors to collaborate on real-time ray tracing, a long-standing “holy grail” of computer graphics. Ray tracing has been around for a couple of decades now, but has always been confined to pre-rendered graphics – often in movies or other video playback that doesn’t require on-the-fly processing. The difference with going real-time is that we’re dealing with sparse data, and making fewer rays look good (better than standard rasterization, especially) is difficult.
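To ground the terminology: the basic primitive of any ray tracer is firing a ray from the camera and testing what it hits. The sketch below is purely illustrative (it has no relation to nVidia’s or Epic’s implementation) and shows the classic ray-sphere intersection test that real-time renderers execute millions of times per frame:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance to the nearest ray-sphere hit, or None.

    Solves ||origin + t*direction - center||^2 = radius^2 for t,
    a quadratic in t; `direction` is assumed to be normalized.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant (a = 1 for unit direction)
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None  # nearest hit in front of the origin

# A ray fired down +Z from the origin hits a unit sphere centered at (0, 0, 5):
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

The cost problem the article describes follows directly from this: each bounce spawns more rays, and with few rays per pixel the sparse results must be denoised to look acceptable.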
nVidia has been beating this drum for a few years now. We covered nVidia’s ray-tracing keynote at ECGC a few years ago, when the company’s Tony Tamasi projected 2015 as the year for real-time ray-tracing. That obviously didn’t fully materialize, but the company wasn’t too far off. Volta ended up providing some additional leverage to make 60FPS, real-time ray-tracing a reality. Even still, we’re not quite there with consumer hardware. Epic Games and nVidia have lately been demonstrating real-time ray-traced rendering on four Titan V GPUs – functionally $12,000 worth of Titan Vs – and that’s to achieve a playable real-time framerate with the ubiquitous “Star Wars” demo.
Ask GN 75 is an excellent episode. We had great questions for this one, including discussion of X370 vs. X470 benchmarking for Ryzen 2000-series CPUs (e.g. R7 2700X, R5 2600X), which we’ll get into in more detail in the near future. As noted in the episode, we’re technically not under embargo for the Ryzen 2000 CPUs, but we’re planning to hold our review until embargo lift out of respect for AMD’s decision to stop giving special treatment to some media this round. That said, we still talk a bit about X370 vs. X470 benchmarking in the Ask GN episode.
The other excellent topic pertained to receiving review samples and balancing hardware criticism – basically behind-the-scenes politics. Find the episode below:
As we remarked back when we reviewed the i5-8400, which launched on its lonesome and without low-end motherboard support, the Intel i5-8400 makes the most sense when paired with B360 or H370 motherboards. Intel launched the i5-8400 and other non-K CPUs without that low-end chipset support, though, leaving only the enthusiast-class Z370 chipset on the frontlines with the locked CPUs.
When it comes to Intel chipset differences, the main point of comparison between the B, H, and Z chipsets is HSIO lanes – high-speed I/O lanes. Intel assigns each chipset a different count of HSIO lanes. These lanes can be allocated somewhat freely by the motherboard manufacturer, and are separate from the graphics PCIe lanes that each CPU independently possesses. The HSIO lanes are as detailed below for the new 8th-Generation Coffee Lake chipsets:
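The budgeting works roughly as sketched below. The lane counts match Intel’s published 300-series chipset specs to the best of our reading (verify against Intel ARK), and the board configuration is hypothetical:

```python
# Published HSIO lane counts for Intel 300-series chipsets
# (from Intel's public specs; verify against Intel ARK):
HSIO_LANES = {"Z370": 30, "H370": 30, "B360": 24, "H310": 14}

def remaining_hsio(chipset, assignments):
    """Subtract a board's fixed HSIO assignments (e.g. USB 3.x ports,
    SATA ports, M.2 slots) from the chipset's total lane budget."""
    total = HSIO_LANES[chipset]
    used = sum(assignments.values())
    if used > total:
        raise ValueError(f"{chipset} only provides {total} HSIO lanes")
    return total - used

# A hypothetical B360 board: 6 USB 3.x ports, 4 SATA ports, one x4 M.2 slot
print(remaining_hsio("B360", {"usb3": 6, "sata": 4, "m2_x4": 4}))  # 10
```

This is why two boards on the same chipset can ship with different port loadouts: every extra M.2 slot or USB controller draws down the same shared pool.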
Intel has migrated its new i9 processor family to the portable market, or semi-portable, anyway. The i9-8950HK is an unlocked, overclockable 6C/12T laptop CPU, capable of turbo boosting to 4.8GHz when power and thermal budget permits.
The new i9-8950HK runs at a base clock of 2.9GHz, with single-core turbo boosting to 4.8GHz. We are unclear on what the all-core turbo boost max frequency is, but it’s certainly lower. TDP is rated at 45W, though note that this is more a measure of the cooling requirements than of actual power consumption; Intel’s TDP ratings almost never coincide exactly with real power draw. The 8950HK moves away from the quad-channel support of its HEDT desktop brethren, and instead moves down to dual-channel memory at a base frequency of 2666MHz. Unlocked designs should theoretically permit higher memory speeds, though we are uncertain of the market options at this time.
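For a sense of what the dual-channel downgrade means in raw numbers, here’s a back-of-envelope sketch of theoretical peak memory bandwidth, assuming standard 64-bit DDR4 channels:

```python
def peak_bandwidth_gbs(channels, transfer_rate_mts, bus_width_bits=64):
    """Theoretical peak DRAM bandwidth in GB/s:
    channels * transfers/sec * bytes per transfer."""
    return channels * transfer_rate_mts * (bus_width_bits / 8) / 1000

# Dual-channel DDR4-2666, as on the i9-8950HK:
print(round(peak_bandwidth_gbs(2, 2666), 1))  # 42.7 (GB/s)

# Quad-channel at the same speed, as on HEDT desktop parts:
print(round(peak_bandwidth_gbs(4, 2666), 1))  # 85.3 (GB/s)
```

Real-world throughput lands well under these theoretical peaks, but the halving of channels is the relevant delta.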
Corsair’s H115i Pro launched alongside the H150i Pro, the first two closed-loop liquid coolers to use the Asetek 6th-Gen pump. As we said in the H150i Pro review, Asetek didn’t do Corsair any favors here – the new pump isn’t much different from the old one, and primarily focuses on RGB implementations akin to NZXT’s custom work on the XX2 series. Regardless, Corsair has taken this as an opportunity to bundle its new CLCs with the silence-focused ML120 Pro fans.
As shown in our tear-down of the 6th-Gen Asetek pump, where we took apart the H150i Pro, the primary changes to the pump are endurance-focused, not performance-focused. Asetek is ultimately the supplier here, which means Corsair’s main contributions are restricted to fan choice; that said, Corsair did dictate large parts of the 6th-Generation design. Asetek now offers manufacturers an RGB LED kit, including the PCB for programmable LEDs (something NZXT previously went through great effort to customize on the 5th generation). The 6th-Gen coldplate is also marginally smaller than the 5th-Gen unit, but other than that, it’s all endurance-driven. Asetek has changed its impeller to a metal option, similar to the old Dynatron impellers in the Antec Kuhler 1250 series. Asetek has also reportedly “optimized” its liquid paths to reduce hotspots that caused higher permeation than desired in older generations.
In terms of performance, though, our extensive testing results (and our contacts) all indicate that the 6th Generation is not an improvement in cooling. At best, they’re the same. And that’s at best.
This week's hardware news recap follows GTC 2018, where we had a host of nVidia and GPU-adjacent news to discuss. That's all recapped heavily in the video portion, as most of it was off-the-top reporting just after the show ended. For the rest, we talk 4K 144Hz displays, 3DMark's raytracing demo, AMD's Radeon Rays, the RX Vega 56 Red Devil card, and CTS Labs updates.
As for this week, we're back to lots of CPU testing, as we've been doing for the past few weeks now. We're also working on some secret projects that we'll more fully reveal soon. For the immediate future, we'll be at PAX East on Friday, April 6, and will be on a discussion panel with Bitwit Kyle and Corsair representatives. We're planning to record the panel for online viewing.
At last week’s GTC, the nVidia-hosted GPU Technology Conference, nVidia CEO Jensen Huang showcased the new TITAN W graphics card, revealed to press under embargo. The Titan W is nVidia’s first dual-GPU card in many years, and comes after the compute-focused Titan V GPU from 2017.
The nVidia Titan W graphics card hosts two V100 GPUs and 32GB of HBM2 memory, claiming a TDP of 500W and a price of $8,000.
“I’m really just proving to shareholders that I’m healthy,” Huang laughed after his fifth consecutive hour of talking about machine learning. “I could do this all day – and I will,” the CEO said, with a nod to PR, who immediately locked the doors to the room.