We recently covered Intel’s DC P4800X data center drive, with takes on the technology from two editors in both video and article form. Those pieces served as a technology overview for 3D XPoint and Intel Optane (and should be referenced as primer material), but both noted a distinct lack of any consumer-focused launch for the new half-memory, half-storage amalgam.

Today, we’re back to discuss Intel’s Optane Memory modules, which will ship April 24 in the form of M.2 sticks.

As Intel’s platform for 3D XPoint (Micron also has one: QuantX), Optane will be deployed on standardized interfaces like PCIe AICs, M.2, and eventually DIMM form factors. This means no special “Optane port,” so to speak, and should make adoption at least somewhat more likely. There’s still a challenging road ahead for Intel, of course, as Optane aims to somewhat unify memory and storage by creating a device with storage-like capacities and memory-like latencies. For more of a technology overview, check out Patrick Stone’s article on the DC P4800X.

Intel’s latest memory technology has big aspirations. It could one day unify the DRAM and non-volatile memory structure, but we’re not there yet. Today, we get the Data Center Optane SSD (the DC P4800X) as a responsive, high-endurance drive specifically targeted at big data users. This is not a consumer product, but the architecture will not change in any significant way as Optane & 3D XPoint move to consumer devices. This information is applicable across the user space.

Upon initial release, the DC P4800X drive will be a 375GB PCIe 3.0 x4 NVMe HHHL device costing $1520 without Intel’s software, and $1951 with the Intel Memory Drive Technology software package. Later in the lifecycle, we should see 750GB and 1.5TB versions. The Optane SSD is one of three Optane technologies that Intel is marketing: Optane DIMM (fits into a DDR4 slot), Optane SSD (fits into a PCIe 3.0 x4 slot or U.2 connector), and Optane Memory (fits into an M.2 slot).

Intel has enjoyed relatively unchallenged occupancy of the enthusiast CPU market for several years now. Counting the FX-8350 as AMD’s last major play prior to subsequent refreshes (like the FX-8370), the company’s last major CPU launch dates to 2012. Later releases in the FX-9000 series and FX-8000 series updates did follow, but there has not been an architectural push since the Bulldozer/Piledriver/Steamroller series.

AMD Ryzen, then, has understandably generated a groundswell of excitement in the enthusiast community. This is AMD’s chance to recover a market it once dominated, back in the Athlon 64 days, and to reestablish itself in a position that at minimum targets parity in price-to-performance. That’s all AMD needs: parity, or close to it, while maintaining pricing comparable to Intel’s. With Intel’s stranglehold lasting as long as it has, builders are ready to support an alternative in the market. It’s nice to claim “best” on some charts, as AMD has done with Cinebench, but AMD doesn’t have to win: it has to tie. The momentum to shift is there.

Even RTG competitor nVidia will benefit from this upgrade cycle. That’s not something you hear a lot – nVidia wanting AMD to do well with a launch – but here, it makes sense. A dump of new systems into the ecosystem means everyone experiences revenue growth: people need to buy new GPUs, new cases, new coolers, and new RAM to accompany any move to Ryzen. Staggering Vega and Ryzen makes sense insofar as it avoids smothering one announcement with the other, but it does mean that AMD is now moving rapidly toward Vega’s launch. Those R7 CPUs don’t necessarily fit best with an RX 480; it’s a fine card, just not something you pair with a $400-$500 CPU. Two major launches in short order, then, one of which potentially drives system refreshes.

AMD must feel the weight borne by Atlas at this moment.

In this ~11,000 word review of AMD’s Ryzen R7 1800X, we’ll look at FPS benchmarking, Premiere & Blender workloads, thermals and voltage, and logistical challenges. (Update: 1700 review here).

Between its visit to the White House and Intel’s annual Investor Day, we’ve collected a fair bit of news regarding Intel’s future.

Beginning with the former, Intel CEO Brian Krzanich elected to use the White House Oval Office as the backdrop for announcing Intel’s plans to bring Fab 42 online, with the intention of preparing the fab for 7nm production. Based in Chandler, Arizona, Fab 42 was originally built between 2011 and 2013, but Intel shelved plans to finalize it in 2014. The rebirth of the Arizona-based factory is expected to create up to 10,000 jobs, with completion projected in 3-4 years. Additionally, Intel is prepared to invest as much as $7 billion to outfit the fab for its 7nm manufacturing process, although little is known about that process.

The first unlocked i3 CPU, upon its pre-release disclosure to GN, sounded like one of Intel’s most interesting moves for the Kaby Lake generation. Expanding overclocking down to a low/mid-tier SKU could eat away at low-end i5 CPUs, if done properly, and might mark a reprise of the G3258’s brief era of adoration. The G3258 didn’t hold for long, but its overclocking prowess made the CPU an easy $60-$70 bargain pickup with a small window of high-performance gaming; granted, it did have issues in more multi-threaded games. The idea with the G3258 was to purchase the chip with a Z-series platform, then upgrade a year later to something higher-end.

The i3-7350K doesn’t quite lend itself to that same mindset, seeing as it’s ~$180 and leaves little room between neighboring i5 CPUs. This is something that you buy more permanently than those burner Pentium chips. The i3-7350K is also something that should absolutely only be purchased with the intent of overclocking; this is not something that should be bought “just in case.” Do or do not – if you’re not overclocking, do not bother to consider a purchase. It’s not uncommon for non-overclockers to purchase K-SKU Core i7 CPUs, generally out of a desire for “having the best,” but the 7350K isn’t good enough on its own to purchase for that same reason. Without overclocking, it’s immediately a waste.

The question is whether overclocking makes the Intel i3-7350K worthwhile, and that’s what we’ll be exploring in this review’s set of benchmarks. We test Blender rendering, gaming FPS, thermals, and synthetics in today’s review.

For comparison, neighboring non-K Intel products would include the Intel i5-7500 (3.4GHz) for $205, the i3-7100 for $120, and Intel i3-7320 (4.1GHz) for $165. These sandwich the 7350K into a brutal price category, but overclocking might save the chip – we’ll find out shortly.

To catch everyone up, we’ve also already reviewed the Intel i7-7700K ($350) and Intel i5-7600K ($240), both of which can be found below:

The Kaby Lake i7-7700K launched to the usual review verdict for Intel CPUs: not particularly worthwhile for owners of recent Intel i7 CPUs, but perhaps worth consideration for owners of Sandy Bridge and (maybe) Ivy Bridge. The CPU gave an extra 1.5-3% gaming performance over the i7-6700K and roughly 7% more performance in render applications. We’d suspect the i5-7600K will show a similar generational step, but it’s worth properly benchmarking.

Our i5-7600K ($240) review and benchmark includes CPUs dating back to the i5-2500K (including OC) and i5-3570K, though we’ve also got a similar number of i7 CPUs on the bench. We’ve just finished re-benching some of our AMD CPUs for some near-future articles, too, but those won’t make it onto today’s charts.

To use any processing product for six years is a remarkable feat. GPUs struggle to hang on for that amount of time: you’d be reducing graphics settings heavily after the second or third year, and likely considering an upgrade around the same time. Intel’s CPUs are different – they don’t change much, and we almost always recommend skipping at least one generation between upgrades (for the gaming audience, anyway). The 7700K increased temperatures substantially without increasing performance in step, making it a pointless upgrade for owners of the i7-6700K or i7-4790K.

We did remark in the review that owners of the 2500K and 2600K may want to consider finally moving up to Kaby Lake, but if we think about that for a second, it almost seems ridiculous: Sandy Bridge is an architecture from 2011. The i5-2500K came out in 1Q11, making it about six years old as of 2017. That is some serious staying power. Intel’s generational gains almost never exceed 10%. We see double-digit jumps in Blender performance and some production workloads, but even that doesn’t happen with every architecture launch. With gaming, based on the 6700K-to-7700K jump, you’re lucky to get more than 1.5-3% extra performance – and that’s counting frametime performance, too.
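To put those single-digit figures in perspective, here’s a quick back-of-the-envelope sketch (the per-generation percentages are the rough figures discussed above, not measured data) showing what compounding them across the five architecture steps from Sandy Bridge to Kaby Lake would yield:

```python
# Hypothetical illustration: compound a per-generation uplift to estimate
# the cumulative gain from Sandy Bridge (2011) to Kaby Lake (2017).
def cumulative_uplift(per_gen_gain: float, generations: int) -> float:
    """Return the total performance multiplier after compounding."""
    return (1.0 + per_gen_gain) ** generations

# Five architecture steps: Sandy -> Ivy -> Haswell -> Broadwell -> Skylake -> Kaby
gaming = cumulative_uplift(0.03, 5)  # ~3% gaming gain per generation
render = cumulative_uplift(0.07, 5)  # ~7% render/Blender gain per generation
print(f"gaming: +{(gaming - 1) * 100:.0f}%")
print(f"render: +{(render - 1) * 100:.0f}%")
```

Even with generous assumptions, that’s only about +16% in gaming over six years, which is why a 2500K has held on as long as it has.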

AMD’s architectural jumps should be a different story, in theory, but that’s mostly because Zen is planted five years after the launch of the FX-8350. AMD did have subsequent clock-rate increases and various rebadges or power efficiency improvements (FX-8370, FX-8320E), but those weren’t really that exciting for existing owners of 8000-series CPUs. In that regard, it’s the same story as Intel. AMD’s Ryzen will certainly post large gains over AMD’s last architecture given the sizeable temporal gap between launches, but we still have no idea how the next iteration will scale. It could well be just as slow as Intel’s scaling, depending on what architectural and process walls AMD may run into.

That’s not really the point of this article, though; today, we’re looking at whether it’s finally time to upgrade the i5-2500K CPU. Owners of the i5-2500K did well to buy one, it turns out, because the only major desire to upgrade would likely stem from a want of more I/O options (like M.2, NVMe, and USB3.1 Gen2 support). Hard performance is finally becoming a reason to upgrade, as we’ll show, but we’d still rank changes to HSIO as the biggest driver in upgrade demand. In the time since 2011, PCIe Gen3 has proliferated across all relevant platforms, USB3.x ports have increased to double-digits on some boards, M.2 and NVMe have entered the field of SSDs, and SATA III is on its way out as a storage interface.

Optane is Intel’s latest memory technology. The long-term goal for Optane is for it to be used as supplemental system memory, caching storage, and primary storage inside PCs. Intel claims that Optane is faster than NAND flash, only slightly slower than DRAM, has higher endurance than NAND, and, due to its density, will be about half the cost of DRAM. The catch with all of these claims is that Intel has yet to release any concrete data on the product.

What we do know is that Lenovo announced that they will be using a 16GB M.2 Optane drive for caching in a couple of their new laptops during Q1 2017. Intel also announced that another 32GB caching drive should be available later in the year, something we’ve been looking into following CES 2017. This article will look into what Intel Optane actually is, how we think it works, and whether it's actually a viable device for the enthusiast market.

Intel’s i7-7700K Kaby Lake CPU follows up on Skylake with a microarchitecture that is largely identical, but with key improvements to the process technology. Through what Intel has dubbed “14nm+,” the new process technology has heightened fins and a widened gate pitch, both serving as key contributors to the increased frequency headroom on the 7th Generation Intel Core CPUs. Other key changes, like finer-grained frequency switching and AVX settings, theoretically offer better responsiveness to current demand on the CPU. As with most active frequency tuning, the idea is that there’s some power efficiency benefit coupled with better overall performance by way of reduced latency between changes.

Kaby Lake CPUs are capable of switching the clock speed at a 1000Hz rate (or once per millisecond), and though we’ve asked for the minimum frequency adjustment per change, we have not yet received a response. AMD recently made similar mentions of this sort of clock adjustment on Ryzen, using the upcoming Zen architecture. More on that later this week.

Today’s focus is on the Intel i7-7700K flagship Kaby Lake CPU, for which we’ve deployed the new MSI Z270 Gaming Pro Carbon ($165) and Gigabyte Z270 Gaming 7 ($240) motherboards. For this Intel i7-7700K review, we’ll be looking at thermal challenges, blender rendering performance, gaming performance, and synthetic applications. Among those, FireStrike, TimeSpy, and Cinebench are included.


The thermal results should be among the most interesting, for once, though we’ve also found Blender performance to be of noteworthy discussion.

Product availability should begin on January 5, with the official launch today (January 3) for the Intel 7th Gen Core CPU products. Note that some products will not be available until later, like the i3-7350K, which is expected in late January. The i7-7700K will be listed here once it’s available.

There are more than 40 SKUs for the 7th Generation Kaby Lake CPUs, when counting Y-, H-, S-, and U-class CPUs. Starting with the specifications for the 7700K, 7600K, and 7350K CPUs (i7, i5, i3, respectively):

Our full OCAT content piece is still pending publication, as we ran into some blocking issues when working with AMD’s OCAT benchmarking utility. In speaking with the AMD team, those are being worked out behind the scenes for this pre-release software, and are still being actively documented. For now, we decided to push a quick overview of OCAT: what it does, and how the tool should theoretically make it easier for all users to perform DX12 & Vulkan benchmarks going forward. We’ll revisit with a performance and overhead analysis once the tool works out some of its bugs.

The basics, then: AMD has only built the interface and overlay here, and uses the existing, open source Intel+Microsoft amalgam of PresentMon to perform the hooking and performance interception. We’ve already been detailing PresentMon in our benchmarking methods for a few months now, using PresentMon to monitor low-level API performance and using Python and Perl scripts built by GN for data analysis. That’s the thing, though – PresentMon isn’t necessarily easy to understand, and our model of usage revolves entirely around the command line. We’re using the preset commands established by the tool’s developers, then crunching data with spreadsheets and scripts. That’s not user-friendly for a casual audience.

Just to deploy the tool, Visual Studio package requirements and a rudimentary understanding of CMD – while not hard to figure out – mean that it’s not exactly fit for easy benchmarking by general users. And even for technical media, an out-of-box PresentMon isn’t exactly the fastest tool to work with.
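To give a sense of the post-capture crunching involved, here’s a minimal sketch of the kind of script we run against a PresentMon CSV. The `MsBetweenPresents` column is PresentMon’s per-frame frametime output; the file path is a placeholder for whatever your capture produced, and the 1%-low definition here (average of the slowest 1% of frames) is one common convention, not the only one:

```python
import csv
import statistics

def summarize(csv_path: str) -> dict:
    """Compute average FPS and 1% low FPS from a PresentMon capture CSV."""
    with open(csv_path, newline="") as f:
        # Each row carries one frame; MsBetweenPresents is the frametime in ms.
        frametimes = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]
    frametimes.sort()  # ascending: slowest (largest) frametimes at the end
    avg_ms = statistics.mean(frametimes)
    # 1% low: average FPS across the slowest 1% of frames (at least one frame)
    worst = frametimes[-max(1, len(frametimes) // 100):]
    return {
        "avg_fps": 1000.0 / avg_ms,
        "one_pct_low_fps": 1000.0 / statistics.mean(worst),
    }
```

This is exactly the sort of boilerplate OCAT aims to hide behind an overlay, which is why a GUI wrapper matters for a casual audience.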

