The Intel Xeon W-3175X is a 28-core, fully unlocked CPU capable of overclocking, a rarity among Xeon parts. The CPU’s final price ended up at $3000, with motherboard pricing still TBD. As of launch day – that’s today – the CPU and motherboards will go out to system integrator partners first, with DIY channels to follow at a yet-to-be-determined date. This makes reviewing the 3175X difficult, since we don’t yet know pricing for the rest of the parts in the ecosystem (like the C621 motherboards) and availability will be scarce in the DIY market. Still, the 3175X is a production CPU first and an enthusiast CPU second, so we set forth with overclocking, Adobe Premiere renders, Blender tests, Photoshop benchmarking, gaming, and power consumption tests.
Hardware news coverage largely focuses on silicon fabrication this week, with TSMC boasting revenue growth from 7nm production, Intel planning its own 7nm and EUV renovations in US facilities, and other manufacturers getting on board the 7nm and EUV production train. Beyond this news, we cover a class action lawsuit against AMD over Bulldozer, Samsung's new 970 SSDs, and Backblaze's hard drive reliability report. GN also has some news of its own: we're planning a liquid nitrogen (LN2) overclocking livestream for Sunday, 1/27, at 1PM EST, with a special guest present.
Show notes below the embedded video, as always.
In a post-Linus Tech Tips world, it’s likely that a lot of you look at system integrators a little differently – or, more likely, exactly the same. After we began our Walmart system review, we put in a last-minute, rushed order for an iBUYPOWER RDY system with significantly better parts than what we could get in the Walmart build. This was before Linus had begun his series, too, so all we knew was that the parts listing included a 9700K instead of an 8700 and an RTX 2080 instead of a GTX 1080 Ti – clear improvements – and that iBUYPOWER did this at a lower price. The question was whether the assembly was any good and whether any other mistakes were made along the way.
Before starting on this one, we need a trip down memory lane: We had just ordered the Walmart system, originally meant to be an i7-8700 non-K CPU with a GTX 1080 Ti, and had paid over $2000 for it. Of course, that fateful order ended up accidentally shipping as an 8700 with a GTX 1070 – actually the $1500 SKU, but close enough. The motherboard was an H310 platform that runs a slower DMI and allows only one DIMM per channel, the case had literally 3-4mm of space between the glass and the front panel, and the USB3 cable was held in with glue. Off to a good start.
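For context on that “slower DMI” remark: H310 links to the CPU over DMI 2.0 (effectively a PCIe 2.0 x4 link) rather than the DMI 3.0 (PCIe 3.0 x4) used by the rest of the 300-series chipsets. A quick sketch of the theoretical gap, using standard PCIe line-code math (our numbers, not figures from the original review):

```python
# Theoretical DMI bandwidth: DMI is effectively a PCIe x4 link, so peak
# bandwidth per lane is the transfer rate scaled by line-code efficiency.
# DMI 2.0 uses PCIe 2.0 signaling (5 GT/s, 8b/10b encoding); DMI 3.0 uses
# PCIe 3.0 signaling (8 GT/s, 128b/130b encoding).

def lane_bandwidth_gbs(gt_per_s: float, payload_bits: int, total_bits: int) -> float:
    """Per-lane peak bandwidth in GB/s after line-code overhead."""
    return gt_per_s * payload_bits / total_bits / 8  # 8 bits per byte

LANES = 4  # DMI is a x4 link

dmi2_gbs = LANES * lane_bandwidth_gbs(5.0, 8, 10)     # H310: ~2.0 GB/s
dmi3_gbs = LANES * lane_bandwidth_gbs(8.0, 128, 130)  # Z370 et al.: ~3.94 GB/s
```

In other words, everything hanging off the chipset – storage, USB, networking – shares roughly half the uplink bandwidth it would have on a better board.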
The AMD R9 290X, a 2013 release, was the once-flagship of the 200 series, later superseded by the 390X refresh, (sort of) the Fury X, and eventually the RX-series cards. The R9 290X typically ran with 4GB of memory, although the 390X made 8GB somewhat commonplace, and was a strong performer for early 1440p gaming and high-quality 1080p gaming. The goal posts have moved, of course, as time has mandated that games get more difficult to render, but the 290X is still a strong enough card to warrant a revisit in 2019.
The R9 290X still has some impressive traits today, and those traits influence results visibly at certain resolutions. One of the most noteworthy is its 64 ROPs – the units where shader output is converted into a bitmapped image – alongside its 176 TMUs. The ROPs help performance scale as resolution increases, something that also correlates with higher anti-aliasing values (same idea – sampling more times per pixel, or drawing more pixels). For this reason, we’ll want to pay careful attention to performance scaling at 1080p, 1440p, and 4K versus some other device, like the RX 580. The RX 580 is a powerful card for its price-point, often managing comparable performance to the 290X while running half the ROPs and 144 TMUs, but the 290X can close the gap (mildly) at higher resolutions. This isn’t particularly useful to know, but it is interesting, and it illustrates how specific parts of the GPU can change the performance stack under different rendering conditions.
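To make the resolution-scaling point concrete, here's a back-of-the-envelope sketch of how raw pixel output per frame grows across the three test resolutions (function names and framing are ours, not from the review):

```python
# Pixel output per frame grows much faster than the resolution names
# suggest, which is why ROP throughput matters more at 1440p and 4K.
# This only counts raster dimensions; real GPU load also depends on
# shading work, memory bandwidth, and more.

RESOLUTIONS = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

def pixels(res: str) -> int:
    """Total pixels drawn per frame at a given resolution."""
    w, h = RESOLUTIONS[res]
    return w * h

def relative_pixel_load(res: str, baseline: str = "1080p") -> float:
    """Pixels per frame relative to a baseline resolution."""
    return pixels(res) / pixels(baseline)
```

Running the numbers: 1440p pushes about 1.78x the pixels of 1080p, and 4K pushes exactly 4x – the same multiplier logic applies to supersampled anti-aliasing, which is why the two correlate.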
Today, we’re testing with a reference R9 290X that’s been run both stock and overclocked, giving us a look at bottom-end performance and approximate partner-model or OC performance. This should cover most of the spectrum of R9 290X cards.
For this hardware news episode, we compiled more information gathered at CES, where we tried to validate or invalidate swirling rumors about Ryzen 3000, GTX 1660 parts, and Ice Lake. The show gave us a good opportunity, as always, to talk with people in the know and learn more about the goings-on in the industry. There was plenty of "normal" news, too, like DRAM price declines, surges in AMD notebook interest, and more.
The show notes are below the video. This time, we have a few stories in the notes below that didn't make the cut for the video.
CES posed a unique opportunity to speak with engineers at various board manufacturers and system integrators, allowing us to get first-hand information about AMD’s plans for the X570 chipset launch. We already covered the basics of X570 in our initial AMD CES news coverage, primarily discussing launch timing challenges and PCIe 4.0 considerations, but we can now expand on that coverage with new information about the upcoming chipset for Ryzen 3000-series, Zen 2 architecture desktop CPUs.
Thus far, the information we have obtained regarding Ryzen 3000 points toward a likely June launch, probably right around Computex, with multiple manufacturers confirming the target. AMD is officially stating a “mid-year” launch, allowing some leniency for scheduling changes, but either way, Ryzen 3000 should launch in about five months.
The biggest point of consideration for launch has been whether AMD wants to align its new CPUs with an X570 release, which is presently the bigger hold-up of the two. It seems likely that AMD would want to launch both X570 motherboards and Ryzen 3000 CPUs simultaneously, despite the fact that the new CPUs will work with existing motherboards provided they’ve received a BIOS update.
Today we’re reviewing the RTX 2060, with additional tests of whether an RTX 2060 has enough performance to really run games with ray tracing – basically just Battlefield V, at this point – on the TU106 GPU. We have a separate tear-down going live showing the even more insane cooler assembly of the RTX 2060, besting the previous complexity of the RTX 2080 Ti, but today’s focus is on gaming performance, thermals, RTX performance, power consumption, and acoustics of the Founders Edition cooler.
The RTX 2060 Founders Edition card is priced at $350 and, unlike previous FE launches in this generation, it is also the price floor. Cards will start at $350 – no more special FE pricing – and scale based upon partner cost. We will primarily be judging price-to-performance based upon the $350 point, so more expensive cards would need to be judged independently.
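Since we’ll be judging price-to-performance from that $350 floor, here’s a minimal sketch of the math involved (the function names and framing are ours, and any FPS figures plugged in would be hypothetical, not measured data):

```python
# Price-to-performance, normalized to the $350 baseline card: a relative
# value above 1.0 means more FPS per dollar than the baseline. Any FPS
# numbers used with this are illustrative placeholders, not benchmarks.

def perf_per_dollar(avg_fps: float, price: float) -> float:
    """Average FPS delivered per dollar spent."""
    return avg_fps / price

def relative_value(avg_fps: float, price: float,
                   baseline_fps: float, baseline_price: float = 350.0) -> float:
    """Value relative to the baseline card; >1.0 is a better deal."""
    return perf_per_dollar(avg_fps, price) / perf_per_dollar(baseline_fps, baseline_price)
```

The practical upshot: a partner card priced at, say, $420 would need roughly 20% more performance than the $350 card just to break even on value, which is why we judge more expensive cards independently.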
Our content outline for this RTX 2060 review looks like this:
- Games: DX12, DX11
- RTX in BF V
We’re putting more effort into the written conclusion for this one than usual, so be sure to check that as well. Note that we have a separate video upload on the YouTube channel for a tear-down of the card. The PCB, for the record, is an RTX 2070 FE PCB. Same thing.
The XFX RX 590 Fatboy is a card we tore down a few months ago, at which point we complained about its thermal solution and noted inefficiencies in the design. That solution proved deficient in today’s testing, as expected, but the silicon itself – AMD’s GPU – remained a bit of a variable for us. The RX 590 GPU, momentarily ignoring XFX and its component of the review, potentially makes a stronger argument for the space between the GTX 1060 and GTX 1070. It’s a pre-pre-overclocked RX 480 – or a pre-overclocked RX 580 – and, to AMD’s credit, it has pushed this silicon about as far as it can go.
Today, we’re benchmarking the RX 590 (the “Fatboy” model, specifically) against the GTX 1060, RX 580 overclocked, GTX 1070, and more.
CES is next week, beginning roughly on Monday (with some Sunday press conferences), and so it's next week that will really be abuzz with hardware news. That'll be true to the extent that most of our coverage will be news, not reviews (some exceptions), and so we'd encourage checking back regularly to stay updated on 2019's biggest planned product launches. Most of our news coverage will go up on the YouTube channel, but we are still working on revamping the site here to improve our ability to post news quickly and in written format.
Anyway, the past two weeks still deserve some catching-up. Of major note, NVIDIA is dealing with a class action complaint, Intel is dropping its IGP for some SKUs, and OLED gaming monitors are coming.
Today’s benchmark is a case study in the truest sense of the phrase: We are benchmarking a single-sample, overweight video card to test the performance impact of its severe sag. The Gigabyte GTX 1080 Ti Xtreme was poorly received by our outlet when we reviewed it in 2017, primarily for its needlessly large size that amounted to worse thermal and acoustic performance than smaller, cheaper competitors. The card is heavy and constructed using through-bolts and complicated assortments of hardware, whereas the competition achieved smaller, more effective designs that didn’t sag.
As is tradition, we put the GTX 1080 Ti Xtreme in one of our production machines alongside some of the worst hardware we’ve worked with, and so the 1080 Ti Xtreme was in use in a “real” system for about a year. That amount of time has allowed nature – mostly gravity – to take its course, and so the passage of time has slowly pulled the 1080 Ti Xtreme apart. Now, after a year of forced labor in our oldest rendering rig, we get to see the real side-effects of a needlessly heavy card that’s poorly reinforced internally. We’ll be testing the impact of GPU sag in today’s content.