“Indecision” isn’t something we’ve ever titled a review, or felt in general about hardware. The thing is, though, that Vega is launching in the midst of a market that behaves completely unpredictably. We review products as a value proposition, looking at performance per dollar and coming to some sort of unwavering conclusion. Turns out, that’s hard to do when the price is “who knows” and availability is uncertain. Mining drives all of this, of course; AMD is launching a card in the middle of boosted demand, and so prices won’t stick for long. The question is whether the inevitable price hike will match or exceed the prices of competing cards. NVIDIA’s GTX 1070 should be selling below $400 (a few months ago, it did), the GTX 1080 should be ~$500, and the RX Vega 56 should be $400.
Conclusiveness would be easier with at least one unchanging value.
The launch of Threadripper marks a move closer to AMD’s starting point for the Zen architecture. Contrary to popular belief, AMD did not start its plans with desktop Ryzen and then glue modules together until Epyc was created; instead, the company started with an MCM CPU more similar to Epyc, then worked its way down to Ryzen desktop CPUs. Threadripper is the fruition of this MCM design on the HEDT side, and benefits from months of maturation for both the platform and AMD’s support teams. Ryzen was rushed in the weeks leading to launch, which showed in both communication clarity and platform support in the early days. As things smoothed over and AMD resolved many of its communication and platform issues, Threadripper became the beneficiary of those improvements.
“Everything we learned with AM4 went into Threadripper,” one of AMD’s representatives told us, and that became clear as we continued to work on the platform. During the test process for Threadripper, work felt considerably more streamlined and remarkably free of the validation issues that had once plagued Ryzen. The fact that we were able to instantly boot to 3200MHz (and 3600MHz) memory gave hope that Threadripper would, in fact, be the beneficiary of Ryzen’s learning pains.
Threadripper will ship in three immediate SKUs:
Respectively, these units are targeted at price points of $1000, $800, and $550, making them direct competitors to Intel’s new Skylake-X family of CPUs. The i9-7900X would be the flagship – for now, anyway – and it is the part most heavily challenged by AMD’s Threadripper HEDT CPUs. Today’s review looks at the AMD Threadripper 1950X and 1920X CPUs in livestreaming benchmarks, Blender, Premiere, power consumption, temperatures, gaming, and more.
AMD’s Ryzen lineup mirrors traits at both the R3 and R7 ranges: within each series, the CPUs are effectively identical aside from clock speed. The R7 CPUs all clock to about the same range (+/-200MHz) and share the same feature set. The same can be said for the two R3 SKUs – the R3 1200 and R3 1300X – where the CPUs are functionally identical outside of frequency. This means that, like the R7 1700, the R3 1200 has the potential to challenge and replace the 1300X for users willing to overclock. Remember: a basic overclock on this platform is trivial and something we strongly encourage for our audience. The cost savings are noteworthy when driving an R7 1700 up to 1700X or 1800X levels, and the same can likely be said of the R3 1200.
That’s what we’re finding out today, after all. Our R3 1200 review follows the review of the 1300X and aims to dive into gaming performance, overclocking performance, production applications, and power consumption. Nearby CPUs of note include the 1300X, the Pentium G4560, the R5 series CPUs, and the i3 CPUs.
AMD’s R3 1200 is a $110 part, making it $20 cheaper than the R3 1300X and significantly cheaper than both the i5 and R5 CPUs. Frequency is also down: The R3 1200 clocks at 3.1GHz base / 3.4GHz boost on its 4C/4T design, lower than the R3 1300X that we just reviewed.
The Ryzen 3 CPUs round-out AMD’s initial Ryzen offering, with the last remaining sector covered by an impending Threadripper roll-out. Even before digging into the numbers of these benchmarks, AMD’s R3 & R5 families seem to have at least partly influenced competitive pricing: The Intel i3-7350K is now $150, down from its $180 perch. We liked the 7350K as a CPU and were excited about its overclocking headroom, but found its higher price untenable for an i3 CPU given then-neighboring i5 alternatives.
Things have changed significantly since the i3-7350K review. For one, Ryzen now exists on market – and we’ve awarded the R5 1600X with an Editor’s Choice award, deferring to the 1600X over the i5-7600K in most cases. The R3 CPUs are next on the block, and stand to challenge Intel’s freshly price-reduced i3-7350K in budget gaming configurations.
Thermaltake has released its Core G21 TG (tempered glass) Edition case, and it’s only $70 – more proof that glass panels don’t need to be expensive. Despite the name, there’s no product listing for a non-TG Edition G21, although the View 21 TG that was displayed alongside it at Computex shares the same tooling with a different front panel.
Today’s review looks at the Thermaltake Core G21 TG case for build quality, thermals, and acoustics, with additional testing on optimal fan placement and fan configurations.
Zotac's GTX 1080 Ti AMP! Extreme is one of the largest GTX 1080 Ti cards on the market, rivaling the Gigabyte Aorus Xtreme card in form factor. The card occupies nearly three expansion slots, runs a long PCB and cooler, and hosts a dense aluminum heatsink with a three-fan cooler. This card runs $750 to $770, depending on whether the “Core” edition is purchased. The only difference between the two is the out-of-box clock, but all of these 1080 Tis perform mostly the same in games (once solving for thermals).
For its VRM, Zotac takes a brute-force approach to the 1080 Ti, using a doubled-up 8-phase design (16 phases total) with rebranded QN3107 and QN3103 MOSFETs, driven by a uP9511 controller operating in 8-phase mode. The VRM is the reason for the tall card, with two phases tucked off to the side under a small aluminum heatsink that's isolated from the denser primary fin array. This theoretically helps distribute the heat load across a larger surface area. Above the VRM's isolated heatsink rests a rubber damper, which doesn't fully make contact (and is presumably there to prevent scratching in the event of over-flex during installation, as it otherwise does nothing), and then the three fans.
Above: Contactless rubber bumper above the MOSFET heatsink.
The card is one of the heaviest, largest cards we've looked at this generation. To give some perspective, Zotac's AMP Extreme is about 1” thicker than a 2-slot card (like the reference card), is longer than the Aorus Xtreme, and is heavy from the mass of aluminum resting atop the GPU. Learn more about the inner workings of this card in our tear-down.
For today, we're focusing on thermals, power, and noise, as that's the biggest difference between any of these 1080 Ti cards. The gaming performance and overclocking performance, sans Kingpin/Lightning cards, is not notably different.
“Disillusioned and confused” could describe much of the response to initial AMD Vega: Frontier Edition testing and reviews. The card’s market positioning is somewhat confusing: it possesses neither the professional-level driver certification nor the gaming-level price positioning. This makes Vega: FE ($1000) a very specifically placed card that, like the Titan Xp, doesn’t exactly look like the best price-to-performance argument for a large portion of the market. But that’s OK – it doesn’t have to be, and it’s not trying to be. The thing is, though, that AMD’s Vega architecture has been so long hyped, so long overdue, that users in our segment are looking for any sign of competition with NVIDIA’s high-end. It just so happens that, largely thanks to AMD’s decision to go with “Vega” as the name of its first Vega arch card, the same users saw Vega: FE as an inbound do-all flagship.
But it wasn’t really meant to compete under those expectations, it turns out.
Today, we’re focusing our review efforts most heavily on power, thermals, and noise. Some of this includes power draw vs. time charts, like when Blender is engaged in long render cycles, and other tests include noise-normalized temperature testing. We’ve also got gaming benchmarks, synthetics (FireStrike, TimeSpy), and production benchmarks (Maya, 3DS Max, Blender, Creo, Catia), but those all receive less focus than our primary thermal/power analysis. This focus is because the thermal and power behavior can be extrapolated most directly to future Vega products, and we figure it’s a way to offer a unique set of data for a review.
The SilverStone Kublai KL07 (a.k.a. the SST-KL07B, in keeping with SilverStone’s difficult-to-Google naming conventions) is a relatively inexpensive competitor in the silent mid-tower category typically occupied by Fractal and Be Quiet! cases.
Today, the SilverStone Kublai KL07 is on the bench for review versus the Fractal Define C, Be Quiet! Pure Base 600, NZXT S340 Elite, and several other recent case launches. We’ll be looking at noise and thermals primarily, with some additional focus on ease-of-installation and build quality.
Whenever a new keyboard enters the lab, we always make an effort to ignore its price. Completely. Instead, we simply sit down and type. This helps us first see the flaws and strengths of the keyboard without subconsciously comparing them to some price point. We then get to decide what the keyboard should cost, how that compares to its real price, and how that compares to its competition.
After using the Patriot Viper V770, we were overall mildly impressed, but a bit disappointed. It’s a decent keyboard with unique features, but those coupled with some flaws and a mediocre price of $120 result in it falling flat in comparison to competition below, at, and above its price point.
We asked Intel why Kaby Lake-X exists at its recent press day, challenging that the refreshed 7700 & 7600 CPUs can’t be used in LGA1151 sockets, that they aren’t significantly different from their predecessors, and that LGA2066 boards are far more expensive. The socket and chipset alone carry a higher BOM cost for manufacturers than 200-series boards, and that cost is passed on to consumers. That’s not free. The consumer also pays for components that go unused, like the trace routing for half of the DIMMs (and the physical slots themselves).
But Intel gave us an answer to that query.