After our exhaustive in-person interview with Principled Technologies, published on our YouTube channel, we followed up via email to clarify some questions that were left unanswered during the initial video. As we noted in the initial interview, we give credit to Principled Technologies for endeavoring to sit down with us for these discussions. We recognize that it is not an easy decision to make – choosing between ignoring the problem (our showing up unannounced, in this instance) and confronting it – and we appreciate PT’s willingness to partake in a rational discussion about test methodology.
For full details of the interview, check the embedded video below. This written accompaniment aims to address follow-up questions where PT technicians closer to the testing had to be consulted. We are not going to transcribe the 40-minute interview and encourage you to watch the content to gain full perspective on both primary sides of the debate. We have timestamped key points in the video (timestamps are rendered within the video).
Following Intel’s 28C CPU announcement from earlier today, the company also announced its i9-9900K CPU and “9th Gen” desktop CPUs. On stage, Intel dubbed its 9900K “the best gaming processor in the world – period,” before holding up the CPU in new packaging that clearly targets the Threadripper 2 packaging. Intel also declared that “we are breaking the laws of physics to bring you these parts,” which is vehemently false, but we get the point: they’re fast.
Intel today announced its Xeon W-3175X 28C/56T CPU, not to be confused with the previously demonstrated 28C HEDT Skylake-X CPU from Computex. The CPU targets workstation users on Xeon platforms. Its intended use is for production, like Blender (a tool we use for our own animations) and other heavily multithreaded render applications. As these are heavily core-dependent, the use case is more pronounced than in production software like Premiere, which is frequency-dependent.
For frequency, the 28C/56T Xeon part operates at a native boost frequency of 4.3GHz – but Intel did not specify whether this is single-core or all-core. It’s almost certainly single-core Turbo, leaving the all-core boost frequency unknown.
Fractal’s newest case officially released under the name of “Define S2,” but our review has been slightly delayed by the office turning into an overclocking war zone. Fractal has hit a comfortable stride with their cases. The S2 is a successor to the Define S, but to all appearances it’s almost exactly the same as the Define R6, which we reviewed about a year ago. That’s not necessarily a bad thing, though: the R6 is a good case and received praise from us for its high build quality and stout form factor.
The Fractal Define S2 case is the R6, ultimately, just with a lot of parts removed. It’s a stripped-down version of the R6 with some optional reservoir mounts and a new front panel, with rough equivalence in MSRP and ~$10 to ~$50 differences in street price. The R6 and S2 are the most direct competitors for each other, so if choosing specifically between these two, Fractal can’t lose. There are, of course, many good cases in the $150 price range, but the R6 and S2 most immediately contend with one another.
We reviewed the behemoth Cooler Master Cosmos C700P almost exactly a year ago, and now CM is back with the even heavier 51.6lb C700M. Like the H500M versus the H500P, this is a higher-end and more expensive model being added to a family of cases rather than replacing them. The new flagship has a few upgrades over the original, but it retains the same basic look with pairs of big aluminum rails at the top and bottom and dual-curved side panels.
Cooler Master’s C700M is very much a halo product, but our review of the C700M will focus on build quality, thermals, acoustics, and cable management. Ultimately, this is a showpiece – it’s something one might buy because they can afford it, and that’s good enough reason. We will still be reviewing the Cooler Master C700M on its practical merits as an enclosure, as always, but are also taking into consideration its status as a halo product – that is, something from which features will be pulled to the low-end later.
We've been working hard at building our second iteration of the RIPJAY bench, last featured in a livestream where we beat JayzTwoCents' score in TimeSpy Extreme, taking first place worldwide for a two-GPU system. Since then, Jay has beaten our score – primarily with water and direct AC cooling – and we have been revamping our setup to fire back at his score. More on that later this week.
In actual news, though, it's still been busy: RAM prices are behaving in a bipolar fashion, bouncing around based on a mix of supply, demand, and manufacturers trying to maintain high per-unit margins. Intel, meanwhile, is still combating limited supply of its now-strained 14nm process, resulting in some chipsets getting stepped back to 22nm. AMD is also facing shortages for its A320 and B450 chipsets, though this primarily affects China retail. We also received word of several upcoming launches from Intel, AMD, and NVIDIA – the RTX 2070 and Polaris 30 news (the latter is presently a rumor) being the most interesting.
You may have heard about the new tariffs impacting PC component prices by now, with increases upwards of 10% to 25% by January 1st of 2019. We’ve spoken with several companies and individuals in the industry to better understand how PC builders can expect prices to increase. On our list of those providing insight are EVGA CEO Andrew Han, NZXT, SilverStone, and Alphacool, alongside off-record insight from others. At the very least, North American buyers can anticipate price increases as a result of the current administration’s new tariffs – it’s just a question of how much of that is passed on to the consumer.
Here’s the “TLDR” of the tariffs: Nearly every computer component is affected in North America, and those price increases can ripple out to other regions as companies try to stabilize for a downtrend in overall revenue. The tariffs were pushed into law by the US Federal Government, with the first 10% taking effect on October 1st of 2018. After this, an additional 15% tariff will be mandated by the US government on January 1st of 2019.
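To make the two tariff tiers concrete, here is a minimal sketch of the math, assuming a hypothetical $100 component and assuming the full tariff is passed through to the consumer (real pass-through varies by company, as the interviews above make clear; the dollar figures are illustrative, not quoted prices):

```python
# Hedged illustration of the two announced tariff tiers on a hypothetical
# $100 component, assuming full pass-through to the consumer.
def price_with_tariff(base_price: float, tariff_rate: float) -> float:
    """Return the shelf price after applying a tariff given as a fraction (0.10 = 10%)."""
    return base_price * (1 + tariff_rate)

base = 100.00  # hypothetical pre-tariff price in USD

# 10% tier effective October 1, 2018
oct_2018 = price_with_tariff(base, 0.10)
# Rate rises to 25% total (10% + an additional 15%) on January 1, 2019
jan_2019 = price_with_tariff(base, 0.25)

print(f"Oct 2018: ${oct_2018:.2f}")  # $110.00
print(f"Jan 2019: ${jan_2019:.2f}")  # $125.00
```

In practice, companies may absorb part of the increase, shift assembly out of affected countries, or spread the cost across regions, so shelf prices will not necessarily track these numbers exactly.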
We always like to modify the reference cards – or “Founders Edition,” by nVidia’s new naming – to determine to what extent a cooler might be holding it back. In this instance, we suspected that the power limitations may be a harder limit than cooling, which is rather sad, as the power delivery on nVidia’s RTX 2080 Ti reference board is world-class.
We recently published a video showing the process, step-by-step, for disassembling the Founders Edition cards (in preparation for water blocks). Following this, we posted another piece wherein we built up a “Hybrid” cooling version of the card, using a mix of high-RPM fans and a be quiet! Silent Loop 280 CLC for cooling the GPU core on a 2080 Ti FE card. Today, we’re summarizing the results of the mod.
NVidia’s support of its multi-GPU technology has followed a tumultuous course over the years. Following a heavy push for adoption (that landed flat with developers), the company sidelined its own SLI tech with Pascal, where multi-GPU support was cut down to two devices concurrently. Even in press briefings, the company acknowledged waning interest and support in multi-GPU, and so the marketing efforts died entirely with Pascal. Come Turing, a renewed interest in selling multiple cards per customer has spurred development effort to coincide with NVLink, a 100GB/s symmetrical interface for the 2080 Ti. On the 2080, the link maintains a 50GB/s bus. It seems that nVidia may be pushing again for multi-GPU, and NVLink could further enable actual performance scaling with 2x RTX 2080 Tis or RTX 2080s (conclusions notwithstanding). Today, we're benchmarking the RTX 2080 Ti with NVLink (two-way), including tests for PCIe 3.0 bandwidth limitations when using x16/x8 or x8/x8 vs. x16/x16. The GTX 1080 Ti in SLI is also featured.
Note that we most recently visited the topic of PCIe bandwidth limitations in this post, featuring two Titan Vs, and must again revisit this topic. We have to determine whether an 8086K and Z370 platform will be sufficient for benchmarking with multi-GPU, i.e. in x8/x8, and so that requires another platform – the 7980XE and X299 DARK that we used to take a top-three world record previously.
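For context on why lane allocation matters here, a back-of-envelope comparison of theoretical peak bandwidth is useful. This sketch assumes PCIe 3.0's 8 GT/s per-lane rate with 128b/130b encoding; the NVLink figures are the totals cited above, and all numbers are theoretical peaks, not measured throughput:

```python
# Theoretical per-direction PCIe 3.0 bandwidth vs. the NVLink totals cited
# above. PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, leaving
# roughly 0.985 GB/s of usable bandwidth per lane per direction.
PCIE3_GBPS_PER_LANE = 8 * (128 / 130) / 8  # ~0.985 GB/s per lane

def pcie3_bandwidth_gbps(lanes: int) -> float:
    """Theoretical per-direction PCIe 3.0 bandwidth for a given lane count."""
    return lanes * PCIE3_GBPS_PER_LANE

for lanes in (8, 16):
    print(f"PCIe 3.0 x{lanes}: {pcie3_bandwidth_gbps(lanes):.1f} GB/s")
# x8 is ~7.9 GB/s and x16 is ~15.8 GB/s, versus the 50GB/s (2080) and
# 100GB/s (2080 Ti) NVLink figures cited above.
```

The gap between x8 and x16 is what the x8/x8 vs. x16/x16 testing probes: whether halving each card's host bandwidth actually costs frames in practice.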
It’s more “RTX OFF” than “RTX ON,” at the moment. The number of games that include RTX-ready features at launch is zero. The number of tech demos is growing by the hour – the final hours – but tech demos don’t count. It’s impressive to see what nVidia is doing in its “Asteroids” mesh shading and LOD demonstration. It is also impressive to see the Star Wars demo in real-time (although we have no camera manipulation, oddly, which is suspect). Neither of these, unfortunately, is a playable game, and the users for whom the RTX cards are presumably made are gamers. You could then argue that nVidia’s Final Fantasy XV benchmark demo, which does feature RTX options, is a “real game” with the technology – except that the demo is utterly, completely untrustworthy, even though it had some of its issues resolved previously (but not all – culling is still dismal).
And so we’re left with RTX OFF at present, which leaves us with a focus primarily upon “normal” games, thermals, noise, overclocking on the RTX 2080 Founders Edition, and rasterization.
We don’t review products based on promises. It’s cool that nVidia wants to push for new features. It was also cool that AMD did with Vega, but we don’t cut slack for features that are unusable by the consumer.
The new nVidia RTX 2080 and RTX 2080 Ti reviews launch today, with cards launching tomorrow, and we have standalone benchmarks going live for both the RTX 2080 Founders Edition and RTX 2080 Ti Founders Edition. Additional reviews of EVGA’s XC Ultra and ASUS’ Strix will go live this week, with an overclocking livestream starting tonight (9/19) at around 6-7PM EST. In the meantime, we’re here to start our review series with the RTX 2080 FE card.
We moderate comments on a ~24-48 hour cycle. There will be some delay after submitting a comment.