German manufacturer be quiet! has launched an update to its Pure Power series of entry-level PSUs: the Pure Power 10 and Pure Power 10 CM models (CM for “Cable Management”). We previously covered the new Pure Power PSUs at CES last year, where it was revealed that the series would be moving to 80 Plus Silver certification (for models at and above 400W) and that the cables would be changed to solid black by popular demand. In speaking with be quiet! at that CES meeting, we also learned that Silver-rated PSUs are a tough sell on Newegg, since there are so few of them; shoppers sorting by 80 Plus rating often skip right over Silver.

The 300W and 350W supplies have one PCIe connector, the 400-500W models have two, and 600W+ models have four. Modular cables are low-profile, and fixed cables are sleeved in black.

RAM Prices on the Rise as Fabs Transition to 20nm

Published February 01, 2017 at 8:22 pm

As predicted, DRAM-dependent components continue to grow more expensive as demand outpaces supply. Nanya Technology president Pei-Ing Lee confirmed that the company’s average DRAM price will increase across the first and second quarters of 2017.

When we published our “Why Are RAM Prices So High” article in 2014, DRAM was transitioning to 25nm wafers; now it’s transitioning again, this time to 20nm. Prices are expected to stabilize in the second half of 2017, but that depends largely on how quickly manufacturers gear up for the move to smaller dies. Nanya Technology, for its part, will be increasing 20nm production while simultaneously cutting 30nm output going into 2018.
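To illustrate why the node transition matters for supply, consider an idealized die-shrink calculation (a rough sketch only; the quadratic-scaling assumption ignores yield, cell architecture, and edge effects, so real-world gains are smaller):

```python
# Idealized estimate of how a 25nm -> 20nm transition changes dies per wafer.
# Assumes die area scales with the square of the feature size and ignores
# yield losses, edge effects, and design changes -- real gains are smaller.

def relative_die_count(old_node_nm: float, new_node_nm: float) -> float:
    """Multiplier on dies per wafer for an idealized linear shrink."""
    return (old_node_nm / new_node_nm) ** 2

print(relative_die_count(25, 20))  # ~1.56x more dies per wafer, in theory
```

In theory, then, each wafer eventually yields roughly half again as many dies, which is part of why prices are expected to stabilize in the second half of 2017 despite current demand.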

EVGA CLC 280mm Review vs. NZXT X62, Corsair H115i

Published February 01, 2017 at 9:00 am

EVGA’s closed-loop liquid cooler, named “Closed-Loop Liquid Cooler,” begins shipping this month in 280mm and 120mm variants. We’ve fully benchmarked the new EVGA CLC 280mm against NZXT’s Kraken X62 and Corsair’s H115i 280mm coolers, including temperature and noise testing. The EVGA CLC 280, like both of these primary competitors, is built atop Asetek’s Gen5 pump technology and differentiates itself in the usual ways: fan design and pump plate/LED design. We first discussed the new EVGA CLCs at CES last month (where we also detailed the new ICX coolers), including some early criticism of the software’s functionality, but EVGA made several improvements prior to our receipt of the review product.

The EVGA CLC 280 enters the market at $130 MSRP, partnered with the EVGA CLC 120 at $90 MSRP. For frame of reference, the competing-sized NZXT Kraken X62 is priced at ~$160, with the Corsair H115i priced at ~$120. Note that we also have A/B cowling tests toward the bottom for performance analysis of the unique fan design.

Relatedly, we would strongly recommend reading our Kraken X42, X52, & X62 review for further background on the competition. 
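As background on how cooler temperature results are typically compared, readings are normalized to delta-T over ambient so that runs logged at different room temperatures remain comparable. A minimal sketch of that normalization (the sample readings below are hypothetical, not our test data):

```python
# Normalize steady-state CPU temperature readings to delta-T over ambient,
# so results logged at different room temperatures stay comparable.
# Sample values are hypothetical.

def delta_t_over_ambient(cpu_temps_c: list, ambient_c: float) -> float:
    """Average steady-state temperature minus ambient, degrees Celsius."""
    return sum(cpu_temps_c) / len(cpu_temps_c) - ambient_c

readings = [62.1, 62.4, 61.9, 62.3]  # steady-state samples, degrees C
print(delta_t_over_ambient(readings, ambient_c=26.0))  # ~36.2C over ambient
```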

SK Hynix has been busy as of late. We most recently covered the company’s expansion plans, which offered an early look at what 2017 might hold for the semiconductor supplier. SK Hynix has since laid out its 2017 plans in more detail, following its still-fresh announcement of the industry’s first 8GB LPDDR4X-4266 DRAM packages aimed at next-generation mobile devices.

Per those plans, SK Hynix intends to move new types of memory into volume production, which is not altogether unexpected. Most noteworthy is the company’s focus on NAND production and expansion over DRAM, at least for the immediate future. As such, SK Hynix intends to start volume production of 72-layer 3D TLC NAND (3D-V4). For reference, SK Hynix’s 36-layer and 48-layer NAND were 3D-V2 and 3D-V3, respectively. Notable about this fourth version of 3D NAND is that it moves to 13.5 MB block sizes, up from the 9 MB blocks of its second- and third-generation predecessors. Furthermore, SK Hynix intends to roll out 256 Gb 3D TLC ICs by Q2 2017, with 512 Gb 3D TLC ICs coming in Q4 2017. The new 72-layer 3D NAND should allow for higher-capacity SSDs in smaller form factors, and should increase performance on a per-IC basis.
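The density bump matters most for drive design. As a quick worked example of how IC density translates to package count (simple arithmetic, ignoring over-provisioning and ECC overhead):

```python
# How many NAND ICs a given raw SSD capacity requires at each IC density.
# Ignores over-provisioning, ECC, and spare-area overhead.

GBIT_PER_GB = 8

def ics_needed(capacity_gb: int, ic_density_gbit: int) -> float:
    """Raw NAND package count for a target capacity."""
    return capacity_gb * GBIT_PER_GB / ic_density_gbit

print(ics_needed(1024, 256))  # 32 ICs for ~1TB at 256Gb per IC
print(ics_needed(1024, 512))  # 16 ICs at 512Gb -- half the packages
```

Halving the package count for a given capacity is what enables higher-capacity drives in smaller form factors, as noted above.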

New PAX Unplugged Announced for November

Published January 30, 2017 at 6:09 pm

PAX really couldn’t stop growing if it tried, at this point. In 2016, there were five PAX events: PAX East, West, South, Australia, and Dev. But that apparently wasn’t enough for Gabe and Tycho, who evidently felt November had a little too much free time in it. The downside to all the other PAX events, of course, is the massive power bill, so the new event fixes that problem by going the way of Nirvana in ’94 -- Unplugged.

While some of that isn’t entirely true, what is true is that the popularity of board and card games is on the rise, a fact to which attendees of PAX West can attest. Every year, the designated unplugged gaming areas get more and more difficult to navigate and Magic: the Gathering’s displays get more elaborate. From the 17th to the 19th of November in Philadelphia, PAX will bring together those gamers who don’t need cables to get their kicks.

One interesting aspect of the Watch Dogs 2 benchmarking we did for our 2500K revisit was the difference in performance between i5s and i7s. At stock speeds, the i7-2600K was easily outpacing the i5-2500K by roughly 15 FPS. Even more interestingly, the i7-6700K managed to hit our GTX 1080’s ceiling of 110-115 FPS, while the i5-6600K only managed 78.7 FPS with the same settings. Watch Dogs 2 is clearly a game where the additional threads are beneficial, making it an exciting test opportunity, as that’s not a common occurrence. We decided to look into settings optimization for CPUs with Watch Dogs 2, and have tested a few of the most obvious graphics settings to see which ones can really help.

This Watch Dogs 2 graphics optimization guide focuses on CPU performance, figuring out which settings can be increased (where GPU headroom allows) and which should be decreased (where CPU limits bind).
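For reference on the numbers used throughout, FPS metrics are derived from frametime logs. A minimal sketch of the math behind average FPS and 1% lows (the sample frametimes are made up for illustration, not our capture data):

```python
# Compute average FPS and 1% low FPS from a frametime log (milliseconds).
# The sample data below is hypothetical, standing in for a capture-tool log.

def average_fps(frametimes_ms: list) -> float:
    """Overall average framerate across the capture."""
    return 1000.0 * len(frametimes_ms) / sum(frametimes_ms)

def one_percent_low_fps(frametimes_ms: list) -> float:
    """Average framerate of the slowest 1% of frames."""
    worst_first = sorted(frametimes_ms, reverse=True)
    count = max(1, len(worst_first) // 100)
    slowest = worst_first[:count]
    return 1000.0 * len(slowest) / sum(slowest)

frametimes = [8.9, 9.1, 9.0, 9.2, 14.5, 9.0, 9.1, 8.8, 9.3, 9.0]
print(average_fps(frametimes))          # ~104 FPS average
print(one_percent_low_fps(frametimes))  # ~69 FPS for the worst frames
```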

Obsidian Entertainment has been more than just a little busy, and with good reason. In 2015, the company released Pillars of Eternity, the isometric RPG that harkened back to classic Black Isle games such as Icewind Dale, Baldur’s Gate, and Planescape: Torment. Pillars proved to be both critically and commercially successful. In addition to a two-part expansion for that game, Obsidian released another isometric RPG just over a year later, the evil-sided “Tyranny” that showed how the other half lives.

It’s barely been three months since Tyranny launched, and Obsidian is already gearing up for its next game. On January 26th, Obsidian launched a Fig crowdfunding campaign for Pillars of Eternity II: Deadfire. In under 24 hours, Obsidian met its goal of $1.1 million.

DDR4 has seen some price increases as of late. This could be due in part to Intel’s recent release of Kaby Lake and AMD’s soon-to-be-released Ryzen, both of which use DDR4, but shortages in supply are not uncommon (especially with SMT lines ramping up for SSDs). Graphics card prices across both nVidia and AMD have finally stabilized, and some savings can be had on a few GPUs from both vendors.

Before even getting started here, let’s put out the obvious disclaimer. This GPU benchmark is for the beta version of For Honor, which means a few things: (1) the game’s not final yet and, despite launch being just two weeks away, some graphics settings are still missing from the menu; (2) nVidia’s current drivers are optimized for the beta, but the company plans another update at some point soon for further optimizations; (3) AMD has not yet released drivers for the game; we asked for early access and were told the company won’t be ready until launch day, though day-0 drivers are planned.

Regardless, we tested to see how the beta performs and to get a baseline understanding of what to expect overall from the new multiplayer brawler. For Honor has thus far proven impressively detailed in geometry and texturing (especially texturing), and deserves high marks for the art department. Granted, that level of detail generally means more abuse on the video card or CPU (the latter for the complex geometric draw calls), so we’ve got some For Honor graphics settings scaling tests as well.

This graphics card benchmark tests For Honor’s performance at 4K, 1440p, and 1080p with Extreme settings. We tested using a real, in-game benchmark course rather than the built-in benchmark, as the built-in tool generally makes performance look a lot worse than it is in reality (we have a chart demonstrating this). Settings scaling was tested from low to extreme, as was multiplayer versus ‘singleplayer’ (bot match). We primarily ran For Honor benchmarks with the AMD RX 480 8GB & 4GB, RX 470 4GB, RX 460 2GB, & 390X cards versus the GTX 1080, 1070, 1060 6GB & 3GB, 1050 & 1050 Ti, and 970 AIB partner cards.
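As a quick sanity check on why the resolution steps hit these cards so hard, pixel-count arithmetic alone tells most of the story (a rough proxy only, since draw-call and other CPU-side costs don’t scale with resolution):

```python
# Relative pixel counts across tested resolutions -- a rough proxy for GPU
# shading load. Ignores resolution-independent costs like draw calls.

resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
base_pixels = 1920 * 1080

for name, (width, height) in resolutions.items():
    print(f"{name}: {width * height / base_pixels:.2f}x the pixels of 1080p")
# 1080p: 1.00x, 1440p: 1.78x, 4K: 4.00x
```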

Every now and then, a new marketing gimmick comes along that feels a little untested. MSI’s latest M.2 heat shield has always struck us as high on the list of potentially untested marketing claims. The idea that the “shield” can perform two opposing functions (shielding an SSD from external heat while somehow simultaneously sinking heat from within) seems like it was written by marketing, not engineering.

From a “shielding” standpoint, it might make sense: if you’ve got a second video card socketed above the M.2 SSD and dumping heat onto it, a shield could in fact help keep heat off of SMT components, including flash modules and controllers that may otherwise sit in a direct heat path. From a heat sinking standpoint, a separate M.2 heatsink would also make sense. M.2 SSDs are notoriously hot as a result of their low surface area and general lack of housing (ignoring the M8Pe and similar devices), and running high temperatures in a case with unfavorable ambient will result in throttled performance. MSI thought that adding this “shield” to the M.2 slot would solve the issue of hot M.2 SSDs, but it’s got a few problems that don’t even require testing to understand: (1) the “shield” (or sink, whatever) doesn’t cover the underside of the M.2 device, where SMDs will likely be present; (2) the cover is designed more like a shield than a sink (despite MSI’s marketing language; see below), which means we’ve got limited surface area with zero dissipation potential.
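The surface-area complaint can be made concrete with a first-order convection model (a rough sketch; the power draw, heat transfer coefficient, and area multiplier below are assumed values for illustration, not measurements of MSI’s part):

```python
# First-order convection model: temperature rise above ambient for a plate,
# delta_T = P / (h * A). All values below are illustrative assumptions, and
# the model ignores conduction into the PCB and slot.

def temp_rise_c(power_w: float, h_w_per_m2k: float, area_m2: float) -> float:
    """Steady-state rise above ambient for a convectively cooled surface."""
    return power_w / (h_w_per_m2k * area_m2)

M2_TOP_AREA = 0.022 * 0.080     # one side of a 2280 M.2 stick, m^2
COVER_AREA = M2_TOP_AREA * 1.5  # assumed modest area gain from a flat cover

print(temp_rise_c(5.0, 20.0, M2_TOP_AREA))  # ~142C rise: tiny area, big heat
print(temp_rise_c(5.0, 20.0, COVER_AREA))   # ~95C: a flat plate barely helps
```

The point of the exercise: without fins to multiply surface area, a flat cover can’t meaningfully lower the dissipation-side resistance, which is the crux of the shield-versus-sink distinction above.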
