Our AMD Radeon VII review is one of our most in-depth in a while. The new $700 AMD flagship is a repurposed Instinct card, down-costed for gaming and some productivity tasks and positioned to battle the RTX 2080 head-to-head. In today’s benchmarks, we’ll look at Radeon VII cooler mounting pressure, graphite thermal pad versus paste performance, gaming benchmarks, overclocking, noise, power consumption, Luxmark OpenCL performance, and more.
We already took apart AMD’s Radeon VII card, remarking on its interesting Hitachi HM03 graphite thermal pad and vapor chamber. We also analyzed its VRM and PCB, showing impressive build quality from AMD. These are only part of the story, though – the more important aspect is the silicon, which we’re looking at today. At $700, Radeon VII is positioned against the RTX 2080 and now-discontinued GTX 1080 Ti (the two perform identically in our testing). Radeon VII has some interesting use cases in “content creation” (or Adobe Premiere, mostly), where GPU memory becomes a limiting factor. Due to time constraints following significant driver-related setbacks in testing, we will be revisiting the card with a heavier focus on these “content creator” tests. For now, we are focusing primarily on the following:
The AMD Radeon VII embargo for “unboxings” has lifted and, although we don’t participate in the marketing that is a content-filtered “unboxing,” a regular part of our box-opening process involves taking the product apart. For today, restrictions remain on performance discussion and product review, but we are free to show the product and handle it physically. You’ll have to check back for the review, which will likely coincide with the February 7 release date.
This content is primarily video, as our tear-downs show the experience of taking the product apart (and discoveries as we go), but we’ll recap the main point of interest here. Text continues after the embedded video:
GPU manufacturer Visiontek is old enough to have accumulated a warehouse of unsold, refurbished cards. Once in a while, they’ll clear stock by selling them off in cheap mystery boxes. It’s been a long time since we last reported on these boxes, and GPU development has moved forward quite a bit, so we wanted to see what we could get for our money. PCIe cards were $10 for higher-end and $5 for lower, and AGP and PCI cards were both $5. On the off chance that Visiontek would recognize Steve’s name and send him better-than-average cards, we placed two identical orders, one in Steve’s name and one in mine (Patrick). Each order was for one better PCIe card, one worse, one PCI, and one AGP.
News for this week primarily focused on the industry, as opposed to products, and so highlighted AMD earnings, Microsoft earnings, and NVIDIA earnings. There are interesting stories within each of these topics: For Microsoft, the company indirectly blamed Intel's CPU shortage for impacting its growth projections for Windows 10; for NVIDIA, GPU sales slow-downs are still impacting the bottom line, and the company has adjusted its revenue projections accordingly; for AMD, the company saw an uptick for 4Q18, but is facing a slow quarter for 1Q19.
Beyond these stories, areas of interest include an AI white-hat hacking machine (named "Mayhem," a water-cooled supercomputer), Intel expansions and investments, and Intel's sort-of-new CEO.
Show notes below the embedded video, as always.
The Intel Xeon W-3175X CPU is a 28-core, fully unlocked CPU capable of overclocking, a rarity among Xeon parts. The CPU’s final price ended up at $3000, with motherboards TBD. As of launch day – that’s today – the CPU and motherboards will be going out to system integrator partners first, with DIY channels to follow at a yet-to-be-determined date. This makes reviewing the 3175X difficult, seeing as we don’t yet know pricing of the rest of the parts in the ecosystem (like the X599 motherboards), and seeing as availability will be scarce for the DIY market. Still, the 3175X is first a production CPU and second an enthusiast CPU, so we set forth with overclocking, Adobe Premiere renders, Blender tests, Photoshop benchmarking, gaming, and power consumption tests.
Hardware news coverage largely focuses on silicon fabrication this week, with TSMC boasting revenue growth from 7nm production, Intel planning its own 7nm and EUV renovations in US facilities, and other manufacturers getting on board the 7nm and EUV production train. Beyond this news, we cover a class action lawsuit against AMD over Bulldozer, Samsung's new 970 SSDs, and Backblaze's hard drive reliability report. Note further that GN is in the news, as we're planning a liquid nitrogen (LN2) overclocking livestream for Sunday, 1/27 at 1PM EST. We will have a special guest present.
Show notes below the embedded video, as always.
In a post-Linus Tech Tips world, it’s likely that a lot of you look at system integrators a little differently – or, more likely, exactly the same. After we began our Walmart system review, we put in a last-minute, rushed order for an iBUYPOWER RDY system with significantly better parts than what we could get in the Walmart build. This was before Linus had begun his series, too, and so all we knew was that the parts list included a 9700K instead of an 8700 and an RTX 2080 instead of a GTX 1080 Ti – clear improvements – and that iBUYPOWER offered this at a lower price. The question was whether the assembly was any good and whether any other mistakes were made along the way.
Before starting on this one, we need a trip down memory lane: We had just ordered the Walmart system, originally meant to be an i7-8700 non-K CPU with a GTX 1080 Ti, and had paid over $2000 to get it. Of course, that fateful order was accidentally shipped as the $1500 SKU instead – an 8700 with a GTX 1070 – but close enough. The motherboard was an H310 platform that runs a slower DMI link and only one DIMM per channel, the case had literally 3-4mm of space between the glass and the front panel, and the USB3 cable was held in with glue. Off to a good start.
The AMD R9 290X, a 2013 release, was the once-flagship of the 200 series, later superseded by the 390X refresh, (sort of) the Fury X, and eventually the RX-series cards. The R9 290X typically ran with 4GB of memory, although the 390X made 8GB somewhat commonplace, and was a strong performer for early 1440p gaming and high-quality 1080p gaming. The goal posts have moved, of course, as time has mandated that games get more difficult to render, but the 290X is still a strong enough card to warrant a revisit in 2019.
The R9 290X still has some impressive traits today, and those influence results to the point of being clearly visible at certain resolutions. One of its most noteworthy features is its 64 ROPs – the units where shader output is converted into a bitmapped image – alongside its 176 TMUs. The ROPs help performance scale as resolution increases, something that also correlates with higher anti-aliasing values (same idea: sampling more times per pixel or drawing more pixels). For this reason, we’ll want to pay careful attention to performance scaling at 1080p, 1440p, and 4K versus another card, like the RX 580. The RX 580 is a capable card for its price point, often managing comparable performance to the 290X while running half the ROPs (32) and 144 TMUs, but the 290X can mildly close the gap at higher resolutions. This isn’t particularly useful to know, but it is interesting, and it illustrates how specific parts of the GPU can change the performance stack under different rendering conditions.
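To make the ROP comparison concrete, theoretical pixel fill rate is simply ROP count multiplied by core clock. The sketch below uses the ROP and TMU counts mentioned above; the clock speeds are approximate reference/boost values and are our assumption here, not figures from this article:

```python
# Illustrative sketch: theoretical fill rates from ROP/TMU counts.
# Clock speeds below are approximate reference values (assumptions).

def pixel_fill_rate(rops: int, clock_ghz: float) -> float:
    """Theoretical pixel fill rate in Gpixels/s (ROPs x core clock)."""
    return rops * clock_ghz

def texture_fill_rate(tmus: int, clock_ghz: float) -> float:
    """Theoretical texture fill rate in Gtexels/s (TMUs x core clock)."""
    return tmus * clock_ghz

# R9 290X: 64 ROPs, 176 TMUs, ~1.0 GHz reference clock (assumed)
r9_290x_pix = pixel_fill_rate(64, 1.0)
# RX 580: 32 ROPs, 144 TMUs, ~1.34 GHz boost clock (assumed)
rx_580_pix = pixel_fill_rate(32, 1.34)

print(f"R9 290X: {r9_290x_pix:.1f} Gpix/s, RX 580: {rx_580_pix:.1f} Gpix/s")
```

Even with the RX 580's clock advantage, the 290X's doubled ROP count gives it a notably higher theoretical pixel throughput, which is why its relative standing can improve as resolution (pixel count) climbs.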
Today, we’re testing with a reference R9 290X run both stock and overclocked, giving us a look at bottom-end performance and at average partner-model or OC performance. This should cover most of the spectrum of R9 290X cards.
For this hardware news episode, we compiled more information gathered at CES, where we tried to validate or invalidate swirling rumors about Ryzen 3000, GTX 1660 parts, and Ice Lake. The show gave us a good opportunity, as always, to talk with people in the know and learn more about the goings-on in the industry. There was plenty of "normal" news, too, like DRAM price declines, surges in AMD notebook interest, and more.
The show notes are below the video. This time, we have a few stories in the notes below that didn't make the cut for the video.
CES posed the unique opportunity to speak with engineers at various board manufacturers and system integrators, allowing us to get first-hand information as to AMD’s plans for the X570 chipset launch. We already spoke of the basics of X570 in our initial AMD CES news coverage, primarily talking about the launch timing challenges and PCIe 4.0 considerations, but can now expand on our coverage with new information about the upcoming Ryzen 3000-series chipset for Zen2 architecture desktop CPUs.
Thus far, the information we have obtained regarding Ryzen 3000 points toward a likely June launch month, probably right around Computex, with multiple manufacturers confirming the target. AMD is officially stating “mid-year” launch, allowing some leniency for changes in scheduling, but either way, Ryzen 3000 will launch in about 5 months.
The biggest point of consideration for launch has been whether AMD wants to align its new CPUs with an X570 release, which is presently the bigger hold-up of the two. It seems likely that AMD would want to launch both X570 motherboards and Ryzen 3000 CPUs simultaneously, despite the fact that the new CPUs will work with existing motherboards provided they’ve received a BIOS update.