At GTC 2018, we learned that SK Hynix’s GDDR6 memory is bound for mass production in three months and will be featured on several upcoming nVidia products. Some of these are autonomous vehicle components, but we should also expect GDDR6 on most, if not all, of nVidia’s upcoming gaming architecture cards.

Given a mass production timeline of June-July for GDDR6 from SK Hynix, and assuming Hynix is a launch-day memory provider, we can expect next-generation GPUs to become available only after this timeframe -- there still needs to be enough time to mount the memory to the boards, after all. We don’t have a hard date for when the next-generation GPU lineup will ship, but from this information, we can assume it’s at least three months away, and possibly more.

NVidia today announced what it calls “the world’s largest GPU,” the gold-painted and reflective Quadro GV100, undoubtedly a nod to its ray-tracing target market. The Quadro GV100 combines 2x V100 GPUs via NVLink2, running 32GB of HBM2 per GPU and 10,240 CUDA cores in total. NVidia advertises 236 TFLOPS of Tensor Core compute in addition to the power afforded by those CUDA cores.
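As a quick sanity check on that figure: each Volta Tensor Core performs a 4x4x4 matrix fused multiply-add per clock, which works out to 128 FLOPS per core per clock. Here’s a minimal sketch of the arithmetic, with the ~1.44GHz boost clock back-solved from the advertised number rather than taken from any published spec:

```python
# Back-of-envelope Tensor throughput estimate for 2x GV100.
# 640 Tensor Cores per GPU is the published V100 figure; the
# ~1.44GHz boost clock is our assumption, back-solved from the
# advertised 236 TFLOPS, not an official specification.
GPUS = 2
TENSOR_CORES_PER_GPU = 640
FLOPS_PER_CORE_PER_CLOCK = 128  # 4x4x4 FMA = 64 multiply-adds = 128 FLOPS
BOOST_CLOCK_HZ = 1.44e9         # assumed clock

tflops = (GPUS * TENSOR_CORES_PER_GPU * FLOPS_PER_CORE_PER_CLOCK
          * BOOST_CLOCK_HZ) / 1e12
print(f"~{tflops:.0f} Tensor TFLOPS")  # ~236, in line with the advertised figure
```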

Additionally, nVidia has upgraded its Tesla V100 products to 32GB, adding to the HBM2 stacks on the interposer. The V100 is nVidia’s accelerator card, primarily meant for scientific and machine learning workloads, and it later gave way to the Titan V(olta). The V100 was the first GPU to use nVidia’s Volta architecture, initially shipping at 16GB – just like the Titan V – but with more targeted use cases. NVidia's first big announcement for GTC was adding another 16GB of VRAM to the V100, bringing it to 32GB, alongside a new “NVSwitch” (no, not that one) to increase the coupling capabilities of Tesla V100 accelerators. Now, the V100 can be bridged with a 2-billion transistor switch offering 18 ports, scaling up the GPU count per system.

Analyst Christopher Rolland recently confirmed Bitmain’s completed development of a new ASIC miner for Ethereum (and similar cryptocurrencies), and accordingly reduced stock targets for both AMD and NVIDIA. According to Rolland, Bitmain’s ASIC may eat into GPU demand from cryptomining companies, as the ASIC will outperform GPUs in hashing efficiency.

Rolland noted that this may, obviously, reduce demand for GPUs in mining applications, highlighting that approximately 20% of AMD’s and 10% of NVIDIA’s recent sales revenue has come from miners.

Multi-Core Enhancement (MCE) is an important topic that we’ve discussed before – most recently, right after the launch of the 8700K. It’ll become even more important over the next few weeks, and that’s for a few reasons: For one, Intel is launching its new B- and H-series chipsets soon, and that’ll require some performance testing. For two, AMD is launching its Ryzen 2000-series chips on April 19th, and those will include XFR2. Some X470 motherboards, just like some X370 motherboards, have MCE-equivalent options. For Intel and AMD both, enabling MCE means running outside of power specification – and therefore outside the thermal spec of low-end coolers – while also running higher clocks than the stock configuration. The question is whether any motherboard vendors enable MCE by default, or silently, because that’s where results can become muddy for buyers.
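To make the clock difference concrete, below is a minimal sketch of stock Turbo versus MCE behavior on an i7-8700K. The per-core ratios are Intel’s published Turbo bins for that chip; the behavior model is our simplification of what board vendors actually implement:

```python
# Simplified model of stock Intel Turbo vs. Multi-Core Enhancement (MCE)
# on an i7-8700K. Stock Turbo bins (GHz) are keyed by active core count;
# MCE applies the single-core bin to all cores, which is why it runs
# higher clocks -- and higher power -- than stock specification.
STOCK_TURBO_GHZ = {1: 4.7, 2: 4.6, 3: 4.5, 4: 4.4, 5: 4.4, 6: 4.3}

def effective_clock_ghz(active_cores: int, mce: bool) -> float:
    if mce:
        return STOCK_TURBO_GHZ[1]  # max single-core bin, on every core
    return STOCK_TURBO_GHZ[active_cores]

for cores in (1, 6):
    print(f"{cores} active core(s): "
          f"stock {effective_clock_ghz(cores, False)}GHz, "
          f"MCE {effective_clock_ghz(cores, True)}GHz")
```

At six active cores, that 4.3GHz-to-4.7GHz jump is where the extra power and heat come from, and why low-end coolers can fall out of spec.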

As noted, this topic is most immediately relevant to the impending B- and H-series chipset testing – if recent leaks are to be believed, anyway. It’s also relevant to the upcoming Ryzen 2000-series CPUs, like the 2700X and its kin, for their inclusion of XFR2 and similar boosting features. In today’s content, we’re revisiting MCE and Core Performance Boost on AMD CPUs, demonstrating the differences between them (and an issue with BIOS revision F2 on the Ultra Gaming).

Sea of Thieves, the multiplayer-adventure-survival-pirate simulator from Rare, has finally been released after months of betas and stress tests. Judging by the difficulty Rare has had keeping the servers up after all that preparation, it seems to have been pretty popular. This comparison looks at Sea of Thieves Xbox One X vs. PC graphics quality, with equalized graphics settings, and framerate/frametime performance on the Xbox.
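As a refresher on how frametime captures get reduced to the numbers in charts, here’s a minimal sketch of one common convention for average FPS and 1% / 0.1% lows; the frametime values below are made-up placeholders, not captured Sea of Thieves data:

```python
# Minimal sketch: derive average FPS and 1% / 0.1% low FPS from a
# frametime capture (milliseconds). The sample values are hypothetical
# placeholders; real captures contain thousands of frames.
frametimes_ms = [16.7, 16.9, 17.1, 16.5, 33.4, 16.8, 17.0, 50.2, 16.6, 16.7]

def low_fps(frametimes, fraction):
    """FPS of the slowest `fraction` of frames (0.01 = 1% low)."""
    worst = sorted(frametimes, reverse=True)
    n = max(1, int(len(worst) * fraction))
    return 1000.0 / (sum(worst[:n]) / n)

avg_fps = 1000.0 / (sum(frametimes_ms) / len(frametimes_ms))
print(f"AVG: {avg_fps:.1f} FPS | "
      f"1% low: {low_fps(frametimes_ms, 0.01):.1f} | "
      f"0.1% low: {low_fps(frametimes_ms, 0.001):.1f}")
```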

SoT is also one of the first really big multiplayer titles added to the Xbox Play Anywhere program, meaning it’s playable on both Xbox One and Windows 10 with a single purchase (yes, it’s a Windows 10-exclusive DX11 game). Xbox and PC players are also free to encounter each other in-game, or even party up together, with the only obvious downside being forced interaction with the Windows 10 Store and Xbox app. Together, these two aspects make a PC vs. Xbox comparison very interesting, since any player who owns both a PC and an Xbox could easily switch.

A few days ago, we ran our most successful, highest-watched livestream in the history of GN. The stream peaked at >5300 concurrent viewers for around 2.5 hours, during which time we attempted to outmatch the LinusTechTips 3DMark score submitted to the 3DMark Hall of Fame. This was a friendly media battle that we decided to bring to LTT after seeing their submission, which had briefly knocked us out of the Top 10 in the Hall of Fame. As noted in this recap video, we're not skilled enough to compete with the likes of K|NGP|N, dancop, der8auer, or similar pro XOCers, but we can certainly compete with other media. We made a spectacle of the event and pushed our i9-7980XE, our RAM, and our GPU (a Titan V) as far as our components would allow under ambient cooling. Ambient, by the way, peaked at ~30C during the stream; after the stream ended and room ambient dropped ~10C to 20C, our scores improved to 8285 in Time Spy Extreme. This pushed us into 4th place on the official 3DMark Hall of Fame, and 3rd place in the HWBOT rankings.

The overclocking stream saw guest visits from Buildzoid of Actually Hardcore Overclocking, who assisted in tuning our memory timings for the final couple of points. We think there's more room to push here, but we'd like to keep some in the tank for a retaliation from Linus and team.

The Thermaltake View 37 is the latest addition to Thermaltake’s big-transparent-window-themed View series. It’s similar in appearance to the older View 27, but with a much larger acrylic window and less internal shrouding.

The acrylic window is impressive, and it’s about the best it can be without using tempered glass. Manufacturing curved glass panels is difficult and expensive, and using glass would probably bring the price closer to $200 (or above, for the RGB version). As it is, the acrylic is thick and well-tooled, so it’s basically indistinguishable from glass other than a tendency to collect dust and small scratches. Acrylic was the right choice to ship with this case, but if Thermaltake sticks to past patterns, it may offer a separate glass panel in the future.

Today, we’re reviewing the Thermaltake View 37 enclosure at $110, with some 2x 200mm fan testing for comparison. The RGB version runs at $170.

This hardware news update follows up on our original CTS Labs story, adding to the research by attempting to communicate with CTS Labs via its PR firm, Bevel PR. We also talk about leaked specifications for the R5 2600X, accidentally posted early to Amazon, and some other leaks concerning ASUS ROG X470 motherboards.

Minor news items include the loss of power at a Samsung plant, killing 60,000 wafers in the process, and nVidia’s real-time ray-tracing (RTX) demo from GDC.

Show notes below the video.

If you went through our original H500P review and addressed each complaint one by one, the result would be the H500P Mesh, Cooler Master’s new mesh-fronted variant of the (formerly) underwhelming HAF successor. We previously built our own Cooler Master mesh mod, and the performance results of that mod closely mirror what we found in Cooler Master’s actual H500P Mesh case.

In our Cooler Master H500P Mesh review, we’ll run through temperature testing (thermals), airflow testing with an anemometer, and noise testing. Additional quality analysis will be done to gauge whether the substantial issues with the original H500P front and top panels have been resolved.


As this case uses the same barebones chassis as the original H500P, we encourage you to read that review for more detailed notes on the build process. The focus here is on airflow, thermals, noise, and external build quality, along with whether the original panel issues have been resolved. As a reminder, the original marketing advertised “guaranteed high-volume airflow,” and suggested to reviewers that the case was a high-airflow enclosure with performance-oriented qualities. That clearly was not true, and it’s what earned the H500P the lashing it received. Honest marketing matters.

If, to you, the word "unpredictable" sounds like a positive attribute for a graphics card, ASRock has something you may want. ASRock used words like “unpredictable” and “mysterious” in the official trailer for its new Phantom Gaming series, two adjectives describing an upcoming line of AMD Radeon-equipped graphics cards. This is ASRock’s first time entering the graphics card space, where the company’s PCB designers will face new challenges with AMD RX Vega GPUs (and future architectures).

The branding is for “Phantom” graphics cards, and the first-teased card appears to use a fairly standard dual-axial fan design with a traditional aluminum finstack and ~6mm heatpipes. A single 8-pin header is shown on the teaser card but, as it’s only a render, we’re not sure what the actual product will look like.
