Steve Burke

Steve started GamersNexus back when it was just a cool name, and it has since grown into an expansive website with an overwhelming number of features. He recalls his first difficult decision about GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar - sigh. It made sense to just leave it as 'Gamers.'"

First world problems, Steve. First world problems.

NVidia today announced what it calls “the world’s largest GPU,” the gold-painted and reflective Quadro GV100, undoubtedly a nod to its ray-tracing target market. The Quadro GV100 combines two V100 GPUs via NVLink2, running 32GB of HBM2 per GPU and 10,240 CUDA cores in total. NVidia advertises 236 TFLOPS of Tensor Core compute in addition to the power afforded by those 10,240 CUDA cores.
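For quick back-of-the-envelope context, those combined figures line up with two full GV100 chips. A minimal sketch of the arithmetic, assuming (our assumption, for illustration) the per-GPU figures of 5,120 CUDA cores and roughly 118 TFLOPS of Tensor throughput:

```cpp
#include <cstdio>

// Back-of-the-envelope check on the dual-GPU Quadro GV100 figures.
// Per-GPU numbers are assumptions for illustration: 5,120 CUDA cores
// and ~118 TFLOPS of Tensor throughput per GV100.
int main() {
    const int gpus = 2;
    const int cudaCoresPerGpu = 5120;
    const double tensorTflopsPerGpu = 118.0;
    const int hbm2PerGpuGB = 32;

    std::printf("CUDA cores:    %d\n", gpus * cudaCoresPerGpu);       // 10,240
    std::printf("Tensor TFLOPS: %.0f\n", gpus * tensorTflopsPerGpu); // ~236
    std::printf("Total HBM2:    %d GB\n", gpus * hbm2PerGpuGB);      // 64 GB
    return 0;
}
```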

Additionally, nVidia has upgraded its Tesla V100 products to 32GB, adding to the HBM2 stacks on the interposer. The V100 is nVidia’s accelerator card, primarily meant for scientific and machine learning workloads, and later gave way to the Titan V(olta). The V100 was the first GPU to use nVidia’s Volta architecture, shipping initially at 16GB – just like the Titan V – but with more targeted use cases. NVidia's first big announcement for GTC was to add another 16GB of VRAM to the V100, doubling it to 32GB, and to introduce a new “NVSwitch” (no, not that one) to increase the coupling capabilities of Tesla V100 accelerators. Now, the V100 can be bridged with a 2-billion transistor switch, offering 18 ports to scale up the GPU count per system.
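For developers, the practical effect of NVLink/NVSwitch bridging surfaces through the CUDA runtime's peer-access API. A minimal host-side sketch (ours, not nVidia's sample code) that enumerates devices and enables peer access wherever the topology allows it:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: enumerate GPUs and enable peer-to-peer access where the
// interconnect (PCIe, NVLink, or NVSwitch) supports it. Link with -lcudart.
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);

    for (int a = 0; a < count; ++a) {
        for (int b = 0; b < count; ++b) {
            if (a == b) continue;
            int canAccess = 0;
            cudaDeviceCanAccessPeer(&canAccess, a, b);
            if (canAccess) {
                cudaSetDevice(a);
                cudaDeviceEnablePeerAccess(b, 0);  // flags must be 0
                std::printf("GPU %d can directly access GPU %d\n", a, b);
            }
        }
    }
    return 0;
}
```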

Analyst Christopher Rolland recently confirmed that Bitmain has completed development of a new ASIC miner for Ethereum (and similar cryptocurrencies), and has thus reduced stock price targets for both AMD and NVIDIA. According to Rolland, Bitmain’s ASIC may eat into GPU demand from cryptomining companies, as the ASIC will outperform GPUs in hashing efficiency.
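To put “efficiency” in concrete terms: what matters to miners is hashes per watt, not raw hash rate. A quick sketch of that comparison, with purely illustrative numbers (confirmed figures for Bitmain's ASIC and for any particular tuned GPU were not public at this writing):

```cpp
#include <cstdio>

// Illustrative only: compare Ethash efficiency as MH/s per watt.
// These numbers are assumptions for the sake of the arithmetic, not
// confirmed specs for Bitmain's ASIC or for any particular GPU.
int main() {
    const double asicMhs = 180.0, asicWatts = 800.0;  // hypothetical ASIC
    const double gpuMhs  = 30.0,  gpuWatts  = 150.0;  // hypothetical tuned GPU

    std::printf("ASIC: %.3f MH/s per watt\n", asicMhs / asicWatts);  // 0.225
    std::printf("GPU:  %.3f MH/s per watt\n", gpuMhs / gpuWatts);    // 0.200
    return 0;
}
```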

Rolland noted that this may, obviously, reduce demand for GPUs in mining applications, highlighting that approximately 20% of AMD’s and 10% of NVIDIA’s sales revenue has recently come from mining partners.

Multi-core enhancement is an important topic that we’ve discussed before – most recently, right after the launch of the 8700K. It’ll become even more important over the next few weeks, for a few reasons: For one, Intel is launching its new B- and H-series chipsets soon, and those will require some performance testing. For two, AMD is launching its Ryzen 2000-series chips on April 19th, and those will include XFR2. Some X470 motherboards, just like some X370 motherboards, have MCE-equivalent options. For both Intel and AMD, enabling MCE means running outside of power specification (and therefore outside the thermal spec of low-end coolers), while also running higher clocks than the stock configuration. The question is whether any motherboard vendors enable MCE by default, or silently, because that’s where results can become muddy for buyers. A quick way to check for this is sketched below.
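As a practical illustration of how silent MCE can be spotted: under an all-core load, a stock Intel configuration should settle at the rated all-core turbo, while an MCE-enabled board holds the higher single-core turbo on every core. A minimal Linux sketch (ours, assuming the standard sysfs cpufreq interface is present) that logs per-core clocks so the two cases can be told apart:

```cpp
#include <cstdio>
#include <fstream>
#include <string>

// Minimal Linux sketch: read each logical CPU's current frequency from sysfs.
// Run this under an all-core load; if every core sits at the single-core
// turbo limit, the board is likely applying an MCE-style override.
int main() {
    for (int cpu = 0; ; ++cpu) {
        std::string path = "/sys/devices/system/cpu/cpu" + std::to_string(cpu)
                         + "/cpufreq/scaling_cur_freq";
        std::ifstream f(path);
        if (!f) break;  // no more logical CPUs

        long khz = 0;
        f >> khz;  // sysfs reports kHz
        std::printf("cpu%d: %.2f GHz\n", cpu, khz / 1e6);
    }
    return 0;
}
```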

As noted, this topic is most immediately relevant for impending B- and H-series chipset testing – if recent leaks are to be believed, anyway. It is also relevant to the upcoming Ryzen 2000-series CPUs, like the 2700X and kin, given their inclusion of XFR2 and similar boosting features. In today’s content, we’re revisiting MCE and Core Performance Boost on AMD CPUs, demonstrating the differences between them (and an issue with BIOS revision F2 on the Ultra Gaming).

A few days ago, we ran our most successful, highest-watched livestream in GN history. The stream peaked at over 5,300 concurrent viewers for around 2.5 hours, during which we attempted to outmatch the LinusTechTips 3DMark score submitted to the 3DMark Hall of Fame. This was a friendly media battle that we decided to bring to LTT after seeing their submission, which briefly knocked us out of the Top 10 in the Hall of Fame. As noted in this recap video, we're not skilled enough to compete with the likes of K|NGP|N, dancop, der8auer, or similar professional extreme overclockers, but we can certainly compete with other media outlets. We made a spectacle of the event and pushed our i9-7980XE, our RAM, and our GPU (a Titan V) as far as the components would allow under ambient cooling. Ambient, by the way, peaked at ~30C during the stream; after the stream ended and room ambient dropped ~10C to 20C, our score improved to 8285 in Time Spy Extreme. This pushed us into 4th place on the official 3DMark Hall of Fame, and 3rd place in the HWBOT rankings.

The overclocking stream saw guest visits from Buildzoid of Actually Hardcore Overclocking, who assisted in tuning our memory timings for the final couple of points. We think there's more room to push here, but we'd like to keep some in the tank for a retaliation from Linus and team.

If, to you, the word "unpredictable" sounds like a positive attribute for a graphics card, ASRock has something you may want. ASRock used words like “unpredictable” and “mysterious” in its official Phantom Gaming trailer, two adjectives describing an upcoming series of AMD Radeon-equipped graphics cards. This is ASRock’s first time entering the graphics card space, where the company’s PCB designers will face new challenges with AMD RX Vega GPUs (and future architectures).

The branding is for “Phantom” graphics cards, and the first card teased appears to use a fairly standard dual-axial fan design with a traditional aluminum fin stack and ~6mm heatpipes. A single 8-pin header is shown on the rendered teaser card but, as it is a render, we’re not sure what the actual product will look like.
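That single 8-pin connector at least hints at the board power class. A quick sketch using the standard PCIe power-delivery limits (75W from the slot, 75W per 6-pin, 150W per 8-pin):

```cpp
#include <cstdio>

// Board power ceiling implied by connector configuration, using the
// standard PCIe limits: 75W slot, 75W per 6-pin, 150W per 8-pin.
int main() {
    const int slotW = 75, sixPinW = 75, eightPinW = 150;
    const int sixPins = 0, eightPins = 1;  // as shown on the teaser render

    int budget = slotW + sixPins * sixPinW + eightPins * eightPinW;
    std::printf("In-spec board power budget: %dW\n", budget);  // 225W
    return 0;
}
```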
