We’ve praised the R7 1700 ($330) for its mixed workload performance and overclocking capabilities, and we’ve criticized the 1800X for its insignificant performance improvements (over the 1700) at $500. That leaves the R7 1700X ($400), positioned precariously between the two with a base clock of 3.4GHz, but with the full 95W TDP of its 1800X sibling.
The 1700X performs as expected, given its flanks, landing between the R7 1700 and R7 1800X. All three are 8C/16T chips with the same CCX layout; refer back to our 1800X review for a more thorough description of the R7 CPU & Ryzen architecture. A quick comparison of basic stats reveals that the major advantage of the 1700X is a moderate increase in frequency, with additional XFR headroom as demarcated by the ‘X’ suffix. That said, our R7 1700 easily overclocked to a higher frequency than the native 1700X frequency, with no manual adjustment to voltage or EFI beyond the multiplier. The 1700X has a base clock of 3.4GHz and a boost clock of 3.8GHz, which theoretically means it could come close to the performance of our 3.9GHz 1700 straight out of the box while retaining the benefits of XFR (circumvented by overclocking).
The GTX 1080 Ti posed a fun opportunity to roll out our new GPU test bench, something we’ve been working on since the end of last year. The updated bench puts a new emphasis on thermal testing, borrowing methodology from our EVGA ICX review, and now analyzes cooler efficacy as it pertains to non-GPU components (read: MOSFETs, backplate, VRAM).
In addition to this, of course, we’ll be conducting a new suite of game FPS benchmarks, running synthetics, and preparing for overclocking and noise testing. The last two items won’t make it into today’s content, given that PAX is only hours away, but they’re coming. For fans of our Hybrid series, we’ll be starting that today as well – check back shortly.
If it’s not obvious, we’re reviewing nVidia’s GTX 1080 Ti Founders Edition card today, the follow-up to the GTX 1080 and gen-old 980 Ti. Included on the benches are the 1080, 1080 Ti, 1070, 980 Ti, and in some, an RX 480 to represent the $250 market. We’re still adding cards to this brand new bench, but that’s where we’re starting. Please exercise patience as we continue to iterate on this platform and build a new dataset. Last year’s was built up over an entire launch cycle.
AMD’s R7 1700 CPU ($330) immediately positions itself in a more advantageous segment than its $500 1800X companion, which proved poor value for pure gaming machines in our tests. Of course, as we said previously (page 5, 8), the 1800X makes more sense for our tested production tasks than the $1000 6900K when considering price:performance. For gaming, both are poor choices; the 1800X performs on par with i5 CPUs in game benchmarks, and the 6900K is $1000. It’s about value, not raw performance: multiplicative increases in price to achieve gaming performance equivalence with cheaper chips are not good value. Before venturing into the 1440p/4K argument, we’d encourage you to read this review. The R7 1700 – by nature of that very argument, but also by nature of a trivial overclock – effectively invalidates the 1800X for gaming machines, finally granting AMD its champion for Ryzen.
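The value argument above boils down to simple arithmetic. Here is a minimal sketch of a price-to-performance comparison; the FPS and price figures are hypothetical placeholders, not measured results from this review:

```python
# Illustrative price-to-performance comparison.
# All numbers below are hypothetical, not benchmark results.

def fps_per_dollar(avg_fps: float, price_usd: float) -> float:
    """Return average FPS delivered per dollar of CPU cost."""
    return avg_fps / price_usd

# Two hypothetical chips with near-identical gaming FPS at very
# different prices: the cheaper chip wins on value.
cheap_chip = fps_per_dollar(120.0, 250.0)      # 0.480 FPS/$
expensive_chip = fps_per_dollar(125.0, 500.0)  # 0.250 FPS/$

print(f"cheap chip:     {cheap_chip:.3f} FPS/$")
print(f"expensive chip: {expensive_chip:.3f} FPS/$")
```

A ~4% FPS lead at double the price cuts the value metric nearly in half, which is the core of the “multiplicative price for performance parity” criticism.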
We are also restricting this review to one page, as a significant portion of readers had unfortunately skipped straight to the gaming results page without context. It’s not as good for formatting or page load times, but it’ll hopefully ensure the other content is at least scrolled past, even if still ignored altogether.
Enough of that.
In this AMD R7 1700 review, we look at the price-to-performance of AMD’s new $330 CPU, which was explicitly marketed as an i7-7700K counter in price/performance when presented at AMD’s tech day. We’re benchmarking the R7 1700 in our usual suite of gaming, synthetic, and render tasks, quickly validating average auto voltages and temperatures along the way. Overclocks and SMT toggling further complicate testing, but provide a look at how the R7 1700 can eliminate the gap between AMD’s own flagship and its more affordable SKU.
Intel has enjoyed relatively unchallenged occupancy of the enthusiast CPU market for several years now. If you count the FX-8350 as the last major play prior to subsequent refreshes (like the FX-8370), the last major AMD CPU launch dates back to 2012. Of course, later launches in the FX-9000 series and FX-8000 series updates have occurred, but there has not been an architectural push since the Bulldozer/Piledriver/Steamroller series.
AMD Ryzen, then, has understandably generated an impregnable wall of excitement from the enthusiast community. This is AMD’s chance to recover a market it once dominated, back in the Athlon 64 days, and reestablish itself in a position that, at minimum, targets price-to-performance parity. That’s all AMD needs: Parity. Or close to it, anyway, while maintaining comparable pricing to Intel. With Intel’s stranglehold lasting as long as it has, builders are ready to support an alternative in the market. It’s nice to claim “best” on some charts, like AMD has done with Cinebench, but AMD doesn’t have to win: they have to tie. The momentum to shift is there.
Even RTG competitor nVidia will benefit from this upgrade cycle. That’s not something you hear a lot – nVidia wanting AMD to do well with a launch – but here, it makes sense. A dump of new systems into the ecosystem means everyone experiences revenue growth. People need to buy new GPUs, new cases, new coolers, and new RAM to accompany any moves to Ryzen. Staggering Vega and Ryzen makes sense as a way to avoid smothering one announcement with the other, but it does mean that AMD is now rapidly moving toward Vega’s launch. Those R7 CPUs don’t necessarily fit best with an RX 480; it’s a fine card, just not something you stick with a $400-$500 CPU. Two major launches in short order, then, one of which potentially drives system refreshes.
AMD must feel the weight borne by Atlas at this moment.
In this ~11,000 word review of AMD’s Ryzen R7 1800X, we’ll look at FPS benchmarking, Premiere & Blender workloads, thermals and voltage, and logistical challenges. (Update: 1700 review here).
The original Sandia & Coolchip style coolers piqued interest in a market segment that’s otherwise relatively stagnant. With a whirling aluminum block serving as both the fan and the heatsink, the cooling concept seemed novel, dangerous, and potentially efficient. That’s a mix bound to cause some excitement in CPU coolers, which are otherwise the expected mix of metal and air or, if you wanted to get really crazy, liquid, metal, and air.
That concept largely vanished. We haven’t heard much about the use of Sandia-inspired designs since 2014, and certainly haven’t seen any majorly successful executions of either Sandia or Coolchip coolers in the CPU cooling space. Nothing that took the market by force and demanded eyeballs beyond initial tech demos and CES showcases.
Thermaltake decided to take its own stab at this type of cooler, working with Coolchip on the technology implementation and execution of the Engine 27 unit shown at CES last month.
Thermaltake’s Engine 27 is $50. It’s a 27mm form factor cooler, meaning it’s one of a select few that could fit in something like a SilverStone PT13 with its 30mm requirement. The direct competition to the Engine 27 is SilverStone’s NT07 and NT08-115XP, the latter of which we’re also testing. This Thermaltake Engine 27 review looks at noise and temperatures versus the SilverStone NT08-115XP & Cryorig C7.
BitFenix’s new flagship case is the Shogun, a “super mid-tower” compatible with up to E-ATX boards, and a thematic successor to the similarly simplistic Shinobi mid-tower. We haven’t covered a BitFenix enclosure since we named the Pandora one of the best mid-range cases of 2015, and we were curious to see how the company has changed since its LED-laden efforts.
BitFenix made a name for itself with the Prodigy small form factor case a few years ago, and has been trying to recreate that success ever since. The new BitFenix Shogun case is what we’re reviewing today, priced at $160 and targeting the (“super”) mid-tower market with its mix of aluminum, steel, and glass. The case primarily differentiates itself with a slightly user-customizable layout internally, something we’ll talk about in this review.
The Pure Base 600 is the newest, cheapest, and smallest of Be Quiet!’s silent enclosures, but it manages to hold its own in the current lineup. It’s a stark contrast to previous Be Quiet! cases like the Silent Base 800, a chunky enclosure that we found pleasant to work with but fairly expensive at $140.
As a $90 mid-tower, the Pure Base 600 fits into the same category as the S340 Elite (reviewed) and other high-end cases, although notably without tempered glass or indeed any side window at all. In fact, Be Quiet! also manages to dodge the entirety of the RGB LED craze, making the Pure Base 600 oddly unique in its “older” approach to case features. Granted, there will be a tempered glass variant in March for an extra $10.
Instead of all these extras, the 600 derives its value from Be Quiet!’s hallmark blend of acoustic foam, rubber grommets, and case fans intended to deaden noise as much as possible. We’ll cover acoustic testing later on, but for now, our first impressions:
The GPU diode is a bad means of controlling fan RPM at this point; it’s not an indicator of total board thermal performance by any stretch. GPUs have become efficient enough that GPU-governed PWM means lower fan RPMs, which means less noise – a good thing – but also worsened performance on the still-hot VRMs. We have been talking about this for a while now, most recently in our in-depth EVGA VRM analysis during the Great Thermal Pad Fracas of 2016. That analysis showed that the thermals were largely a non-issue, though not entirely beyond criticism. EVGA’s subsequent VBIOS update and thermal pad mods were sufficient to resolve any lingering concern; if you’re curious to learn more about that, it’s really worth checking out the original post.
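The underserved-VRM problem can be sketched in a few lines. This is a hedged illustration only – the sensor names, readings, and the linear fan curve below are hypothetical, and do not reflect EVGA’s actual firmware:

```python
# Hypothetical sketch: why a GPU-diode-only fan curve can
# underserve VRM/VRAM thermals. Not actual GPU firmware logic.

def fan_duty(temp_c: float) -> int:
    """Map a temperature to a PWM duty cycle via a simple linear curve."""
    if temp_c <= 40.0:
        return 30   # idle floor: 30% duty
    if temp_c >= 85.0:
        return 100  # full speed
    return int(30 + (temp_c - 40.0) / (85.0 - 40.0) * 70)

# Hypothetical board readings: cool GPU die, hot MOSFETs.
sensors = {"gpu": 62.0, "mosfet": 88.0, "vram": 74.0}

gpu_only = fan_duty(sensors["gpu"])           # governed by GPU diode alone
hottest = fan_duty(max(sensors.values()))     # governed by hottest component

print(f"GPU-diode fan duty:   {gpu_only}%")
print(f"Hottest-sensor duty:  {hottest}%")
```

With a cool die and hot MOSFETs, diode-only control holds the fans well below full speed while the VRM is already past its comfort zone – which is exactly the failure mode multi-sensor designs like ICX aim to address.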
VBIOS updates and thermal pad mods were not EVGA’s only response to this. Internally, the company set out to design a new PCB+cooler combination that would better detect high heat operation on non-GPU components, and would further protect said components with a 10A fuse.
In our testing today, we’ll be fully analyzing the efficacy of EVGA’s new “ICX” cooler design, which will coexist with the long-standing ACX cooler. In our thermal analysis and review of the EVGA GTX 1080 FTW2 (~$630) & SC2 ICX cards (~$590), we’ll compare ACX vs. ICX coolers on the same card, MOSFET & VRAM temperatures with thermocouples and NTC thermistors, and individual cooler component performance. This includes analysis down to the impact the new backplate makes, among other tests.
Of note: There will be no FPS benchmarks for this review. All ICX cards with SC2 and FTW2 suffixes ship at the exact same base/boost clock-rates as their preceding SC & FTW counterparts. This means that FPS will only be governed by GPU Boost 3.0; that is to say, any FPS difference seen between an EVGA GTX 1080 FTW & EVGA GTX 1080 FTW2 will be entirely the result of uncontrollable (in test) manufacturing differences at the GPU level. Such differences will be within a percentage point or two, and are, again, not a result of the ICX cooler. Our efforts are therefore better spent on the only thing that matters with this redesign: Cooling performance and noise. Gaming performance remains the same, barring any thermal throttle scenarios – and those aren’t a concern here, as you’ll see.
The first unlocked i3 CPU, upon its pre-release disclosure to GN, sounded like one of Intel’s most interesting moves for the Kaby Lake generation. Expanding overclocking down to a low/mid-tier SKU could eat away at low-end i5 CPUs, if done properly, and might mark a reprise of the G3258’s brief era of adoration. The G3258 didn’t hold for long, but its overclocking prowess made the CPU an easy $60-$70 bargain pickup with a small window of high-performance gaming; granted, it did have issues in more multi-threaded games. The idea with the G3258 was to purchase the chip with a Z-series platform, then upgrade a year later with something higher-end.
The i3-7350K doesn’t quite lend itself to that same mindset, seeing as it’s ~$180 and leaves little room between neighboring i5 CPUs. This is something that you buy more permanently than those burner Pentium chips. The i3-7350K is also something that should absolutely only be purchased under the premise of overclocking; this is not something that should be bought “just in case.” Do or do not – if you’re not overclocking, do not bother to consider a purchase. It’s not uncommon for non-overclockers to purchase K-SKU Core i7 CPUs, generally out of a desire to “have the best,” but the 7350K isn’t good enough on its own to purchase for that same reason. Without overclocking, it’s immediately a waste.
The question is whether overclocking makes the Intel i3-7350K worthwhile, and that’s what we’ll be exploring in this review’s set of benchmarks. We test Blender rendering, gaming FPS, thermals, and synthetics in today’s review.
For comparison, neighboring non-K Intel products would include the Intel i5-7500 (3.4GHz) for $205, the i3-7100 for $120, and Intel i3-7320 (4.1GHz) for $165. These sandwich the 7350K into a brutal price category, but overclocking might save the chip – we’ll find out shortly.
EVGA’s CLC 120 cooler landed on our bench shortly after the EVGA CLC 280 ($130), which we reviewed last week against the NZXT X62 & Corsair H115i. The EVGA CLC 120 prices itself at $90, making it competitive with other RGB-illuminated coolers, but perhaps a bit steep in comparison to the cheaper 120mm AIOs on the market. Regardless, 120mm territory is where air coolers start to claw back their value in performance-to-dollar; EVGA’s chosen a tough market in which to debut a low-end cooler, despite the exceptionally strong positioning of its CLC 280 (as stated in our review).