At last week’s GTC – the nVidia-hosted GPU Technology Conference – CEO Jensen Huang revealed the new TITAN W graphics card to press under embargo. The Titan W is nVidia’s first dual-GPU card in many years, and follows the compute-focused Titan V from 2017.

The nVidia Titan W graphics card hosts two V100 GPUs and 32GB of HBM2 memory, claiming a TDP of 500W and a price of $8,000.

“I’m really just proving to shareholders that I’m healthy,” Huang laughed after his fifth consecutive hour of talking about machine learning. “I could do this all day – and I will,” the CEO said, with a nod to PR, who immediately locked the doors to the room.

Elgato’s 4K60 Pro capture card is an internal PCIe x4 capture card capable of handling resolutions up to 3840x2160 at 60 frames per second, as the name implies. It launched in November with an MSRP of $400, and has remained around that price since.
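As a rough sanity check on why an internal PCIe x4 card makes sense here, the uncompressed bandwidth of a 4K60 signal can be estimated with quick arithmetic. The pixel format below is our assumption for illustration, not Elgato’s published spec:

```python
# Back-of-the-envelope bandwidth estimate for uncompressed 4K60 capture.
# Assumed pixel format (8-bit 4:2:0, 1.5 bytes/pixel) is illustrative;
# the card's actual internal format may differ.
WIDTH, HEIGHT, FPS = 3840, 2160, 60
BYTES_PER_PIXEL = 1.5  # 8-bit 4:2:0 chroma subsampling

def capture_bandwidth_mb_s(width=WIDTH, height=HEIGHT, fps=FPS,
                           bytes_per_pixel=BYTES_PER_PIXEL):
    """Return required throughput in MB/s for uncompressed video."""
    return width * height * fps * bytes_per_pixel / 1e6

print(round(capture_bandwidth_mb_s()))  # ~746 MB/s
```

At roughly 750 MB/s, the signal fits comfortably within the ~2 GB/s of a PCIe 2.0 x4 link, but is more than a single USB 3.0 connection could reliably sustain – hence the internal card.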

The Amazon reviews for the 4K60 Pro are almost worthless, because Amazon considers the 4K60 Pro and Elgato’s 1080p-capable HD60 Pro to be varieties of the same product and groups their reviews together. There are only twenty-something reviews of the 4K60 compared to nearly two thousand for the HD60, so that may skew the results slightly. Of the three single-star reviews that are actually for the 4K60, one is from a gentleman who was expecting a seven-inch-long PCIe card to work in a laptop. As of this writing, nobody at all has reviewed it on Newegg, and it’s on sale for $12 off in both locations.

It doesn’t seem like these are flying off the shelves, which probably speaks more to the current demand for 4K 60FPS streaming than to the product itself – it’s the cheapest of a very small number of 4K60-capable capture cards, and there’s no consumer-level competition to speak of. $400 may seem like a lot, but the existing alternatives are much more expensive, like the Magewell Pro Capture HDMI 4K Plus, which (besides having an awful name) costs around $800-$900. The Magewell does have a heatsink and a fan, though, which the 4K60 Pro does not – more on that later.

This Elgato 4K60 Pro review looks at the capture card’s quality and capabilities for both console and PC capture, and also walks through some thermal measurements taken with thermocouples.

At GTC 2018, we learned that SK Hynix’s GDDR6 memory is bound for mass production in 3 months, and will be featured on several upcoming nVidia products. Some of these include autonomous vehicle components, but we also learned that we should expect GDDR6 on most, if not all, of nVidia’s upcoming gaming architecture cards.

Given SK Hynix’s June-July mass production timeline for GDDR6, and assuming Hynix is a launch-day memory supplier, we can expect next-generation GPUs to become available after that window – there still needs to be enough time to mount the memory to the boards, after all. We don’t have a hard date for when the next-generation GPU lineup will ship, but from this information, we can assume it’s at least 3 months away, possibly more.

NVidia today announced what it calls “the world’s largest GPU,” the gold-painted and reflective Quadro GV100 – undoubtedly a nod to its ray-tracing target market. The Quadro GV100 combines two V100 GPUs via NVLink2, carrying 32GB of HBM2 per GPU and 10,240 CUDA cores total. NVidia advertises 236 TFLOPS of tensor core performance in addition to the throughput of those CUDA cores.
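For those curious where the headline numbers come from, they can be roughly reconstructed from per-GPU V100 figures. This is a back-of-the-envelope sketch: the 5,120 CUDA cores and 640 tensor cores per GPU are known V100 silicon figures, but the ~1.45 GHz sustained clock is our assumption, not an official Quadro spec:

```python
# Rough reconstruction of nVidia's advertised Quadro GV100 numbers.
# Per-GPU core counts are V100 figures; the clock is an assumed value.
CUDA_CORES_PER_GPU = 5120
TENSOR_CORES_PER_GPU = 640
CLOCK_GHZ = 1.45  # assumed sustained boost clock, not an official spec
GPUS = 2

cuda_cores_total = CUDA_CORES_PER_GPU * GPUS  # 10,240 across both GPUs

# Each tensor core performs a 4x4x4 matrix FMA per cycle:
# 64 multiply-adds = 128 floating-point operations per clock.
tensor_tflops = (TENSOR_CORES_PER_GPU * 128 * CLOCK_GHZ * GPUS) / 1000

print(cuda_cores_total)      # 10240
print(round(tensor_tflops))  # ~238, close to the advertised 236
```

The tensor figure lands within a percent or two of nVidia’s advertised 236 TFLOPS, suggesting the marketing number is simply both GPUs’ tensor throughput summed at boost clock.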

Additionally, nVidia has upgraded its Tesla V100 products to 32GB by adding HBM2 stacks to the interposer. The V100 is nVidia’s accelerator card, primarily meant for scientific and machine learning workloads, and later gave way to the Titan V(olta). The V100 was the first GPU to use nVidia’s Volta architecture, shipping initially at 16GB – just like the Titan V – but with more targeted use cases. NVidia’s first big GTC announcement was this doubling of the V100’s memory to 32GB, alongside a new “NVSwitch” (no, not that one) to increase the coupling capabilities of Tesla V100 accelerators. The V100 can now be bridged with a 2-billion transistor switch offering 18 ports, scaling up the GPU count per system.

Analyst Christopher Rolland recently confirmed Bitmain’s completed development of a new ASIC miner for Ethereum (and similar cryptocurrencies), and accordingly reduced stock targets for both AMD and NVIDIA. According to Rolland, Bitmain’s ASIC may eat into GPU demand from cryptomining companies, as the ASIC will outperform GPUs in hashing efficiency.

Rolland noted that this may, obviously, reduce demand for GPUs in mining applications, highlighting that approximately 20% of AMD’s and 10% of NVIDIA’s sales revenue has recently come from mining partners.

Multi-core enhancement is an important topic that we’ve discussed before – most recently, right after the launch of the 8700K. It’ll become even more important over the next few weeks, and that’s for a few reasons: For one, Intel is launching its new B- and H-series chipsets soon, and that’ll require some performance testing. For two, AMD is launching its Ryzen 2000-series chips on April 19th, and those will include XFR2. Some X470 motherboards, just like some X370 motherboards, have MCE-equivalent options. For Intel and AMD both, enabling MCE means running outside of power specification – and therefore beyond the thermal spec of low-end coolers – while also running higher clocks than the stock configuration. The question is whether any motherboard vendors enable MCE by default, or silently, because that’s where results can become muddy for buyers.
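To illustrate what MCE actually changes, here’s a minimal sketch of stock per-active-core turbo bins versus MCE’s flat all-core turbo. The bin table approximates an i7-8700K and will vary by CPU, board, and BIOS:

```python
# Illustrative sketch of multi-core enhancement (MCE) behavior.
# Turbo bins below approximate an i7-8700K; exact values vary by CPU/BIOS.
TURBO_BINS_GHZ = {1: 4.7, 2: 4.6, 3: 4.5, 4: 4.4, 5: 4.4, 6: 4.3}

def effective_clock_ghz(active_cores, mce_enabled=False):
    """Return the clock applied for a given number of active cores.

    With MCE enabled, the board applies the single-core turbo bin to
    every core, running outside the stock power specification.
    """
    if mce_enabled:
        return TURBO_BINS_GHZ[1]  # max turbo forced on all cores
    return TURBO_BINS_GHZ[active_cores]

print(effective_clock_ghz(6))                    # stock all-core: 4.3
print(effective_clock_ghz(6, mce_enabled=True))  # MCE all-core: 4.7
```

That 400MHz all-core delta is why MCE-enabled boards post better benchmark numbers than stock – and why they draw more power and run hotter than the spec a low-end cooler was chosen for.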

As noted, this topic is most immediately relevant for impending B & H series chipset testing – if recent leaks are to be believed, anyway. This is also relevant for upcoming Ryzen 2 CPUs, like the 2700X and kin, for their inclusion of XFR2 and similar boosting features. In today’s content, we’re revisiting MCE and Core Performance Boost on AMD CPUs, demonstrating the differences between them (and an issue with BIOS revision F2 on the Ultra Gaming).

Sea of Thieves, the multiplayer-adventure-survival-pirate simulator from Rare, has finally been released after months of betas and stress tests. Judging by the difficulty they’ve had keeping the servers up after all that preparation, it seems like it’s been pretty popular. This comparison looks at Sea of Thieves Xbox One X vs. PC graphics quality, equalized graphics settings, and framerate/frametime performance on the Xbox.

SoT is also one of the first really big multiplayer titles added to the “Xbox Play Anywhere” program, meaning it’s playable on both Xbox One and Windows 10 with a single purchase (yes, it’s a Windows 10-exclusive DX11 game). Xbox and PC players are also free to encounter each other in-game, or even party up together, with the only obvious downside being the forced interaction with the Windows 10 Store and Xbox app. Together, these two aspects make a PC vs. Xbox comparison very interesting, since any player who owns both a PC and an Xbox could easily switch.

A few days ago, we ran our most successful, highest-watched livestream in the history of GN. The stream peaked at >5300 concurrent viewers for around 2.5 hours, during which time we attempted to outmatch the LinusTechTips 3DMark score submitted to the 3DMark Hall of Fame. This was a friendly media battle that we decided to bring to LTT after seeing their submission, which briefly knocked us out of the Top 10 for the Hall of Fame. As noted in this recap video, we're not skilled enough to compete with the likes of K|NGP|N, dancop, der8auer, or similar pro XOCers, but we can certainly compete with other media. We made a spectacle of the event and pushed our i9-7980XE, our RAM, and our GPU (a Titan V) as far as our components would allow under ambient cooling. Ambient, by the way, peaked at ~30C during the stream; after the stream ended and room ambient dropped ~10C to 20C, our scores improved to 8285 in Time Spy Extreme. This pushed us into 4th place on the 3DMark official Hall of Fame, and 3rd place in the HW Bot rankings.

The overclocking stream saw guest visits from Buildzoid of Actually Hardcore Overclocking, who assisted in tuning our memory timings for the final couple of points. We think there's more room to push here, but we'd like to keep some in the tank for a retaliation from Linus and team.

The Thermaltake View 37 is the latest addition to Thermaltake’s big-transparent-window-themed View series. It’s similar in appearance to the older View 27, but with a much larger acrylic window and less internal shrouding.

The acrylic window is impressive, and it’s about the best it can be without using tempered glass. Manufacturing curved glass panels is difficult and expensive, and using glass would probably bring the price closer to $200 (or above, for the RGB version). As it is, the acrylic is thick and well-tooled, so it’s basically indistinguishable from glass other than a tendency to collect dust and small scratches. Acrylic was the right choice to ship with this case, but if Thermaltake sticks to past patterns, it may offer a separate glass panel in the future.

Today, we’re reviewing the Thermaltake View 37 enclosure at $110, with some 2x 200mm fan testing for comparison. The RGB version runs at $170.

This hardware news update looks into our original CTS Labs story, adding to the research by attempting to communicate with CTS Labs via their PR firm, Bevel PR. We also talk about leaked specifications for the R5 2600X, accidentally posted early to Amazon, and some other leaks on ASUS ROG X470 motherboards.

Minor news items include the loss of power at a Samsung plant, killing 60,000 wafers in the process, and nVidia’s real-time ray-tracing (RTX) demo from GDC.

Show notes below the video.

We moderate comments on a ~24-48 hour cycle. There will be some delay after submitting a comment.

