The Intel i7-2600K is arguably one of the most iconic products Intel has released in the last decade, following up the seminal Nehalem CPUs with major architectural improvements in Sandy Bridge. The 2600K was a soldered CPU with a significant performance uplift over Nehalem's i7-930, and it launched when AMD’s Phenom II X6 CPUs were already embattled with the previous Intel architecture. We revisited these CPUs last year, but wanted to come back around to the 2600K in 2018 to see if it’s finally time for hangers-on to the once-king CPU to upgrade.
Our original Intel i7-2600K revisit (2017) can be found here, but we had a completely different test methodology at the time. The data is not at all comparable.
The 2600K launched in January 2011 alongside the rest of Intel’s Sandy Bridge 2nd Gen Core i-Series of CPUs. This launch followed Nehalem, which challenged the Phenom II X6’s appeal in a heated market. Sandy Bridge has remained a well-respected, nearly idolized CPU since its launch. Intel made tremendous gains over Nehalem and hasn’t quite recaptured that level of per-core increase since. For everyone still on Sandy Bridge and the i7-2600K (or i7-2700K), we wanted to revisit the CPUs and benchmark them in 2018. These 2018 i7-2600K benchmarks compare against Ryzen (R7 2700 and others), the i7-8700K, and the i9-9900K, alongside several other CPUs. For anyone with a 2700K, it’s mostly the same thing, just 100MHz faster.
The AMD Athlon 200GE CPU enters our benchmarking charts today, but we’re reviewing this one with a twist: For this benchmark, we’re testing the CPU entirely as a CPU, ignoring its integrated graphics out of curiosity to see how the $55 part does when coupled with a discrete GPU. To further this, we overclocked this supposedly locked CPU to 3.9GHz using multiplier overclocking, which AMD disables on most boards, likely for segmentation of future 200-series parts. In this instance, the 200GE at 3.9GHz posts significantly improved numbers over stock, making it a candidate to fill the budget price position held by Intel’s Pentium CPUs up until the 14nm shortage.
In the past, the Intel G3258 and successor G4560 stood as affordable options for ultra-budget builds that were still respectable at gaming tasks. The Pentium G5000 series – including the G5400 and G5600 (in today’s benchmark) – has skyrocketed in price and dwindled in availability. The G5600 and G5400 alike are in the realm of $100, depending on when you check pricing, with the G5400 often ending up more expensive than the G5600. A lot of this is due to demand, but supply is also weak with the ongoing 14nm shortage. Intel is busy allocating that fab space to other products, minimizing the number of Pentium G CPUs on the market and giving retailers latitude to boost prices to whatever demand will pay. This has left a large hole in the market of low-end CPU + low-end dGPU solutions, and that’s a hole which AMD may be able to fill with its Athlon 200GE, which had a launch MSRP of $55.
Unlike Ryzen proper chips, the 200GE includes an IGP (Vega graphics) that enables it as a fully standalone part once popped into a motherboard; however, we think its IGP is too weak for most of our normal testing, and we know it’d underperform versus the R3 2200G. The G4560-style market is one we like to look at, so we decided to test the 200GE as an ultra-budget replacement for coupling alongside a low-end dGPU, e.g. a GTX 1050 or RX 550/560. If the CPU holds up against our standardized test battery, it’ll work when coupled with a low-end GPU.
Amazon made news this past week, and it wasn't just for Black Friday: The company has been working on producing an ARM CPU named "Graviton," offering an AWS solution that competes with existing AWS Intel and AMD offerings while driving prices significantly lower. This has undoubtedly been among the biggest news items of the past week, although Intel's Arctic Sound murmurings, the GTX 1060 GDDR5X, and the FTC v. Loot Box fight all deserve attention. That last item is particularly interesting, and marks a landmark battle as the US Government looks to regulate game content that may border on gambling.
As always, show notes are below.
Today we’re reviewing the Intel i5-9600K CPU, a 6-core 8th-Generation refresh part that’s been badged as a 9000-series CPU. The 9600K is similar to the 8600K, except soldered and boosted in frequency. The new part costs roughly $250 and runs at 3.7GHz base or 4.6GHz turbo, with an all-core closer to 4.3GHz, depending on turbo duration tables. When we last reviewed an i5 CPU, our conclusion was that the i7s made more sense for pure gaming builds, with the R5s undercutting Intel’s dominance in the mid-range. We’re revisiting the value proposition of Intel’s i5 lineup with the 9600K, having already reviewed the 9900K and, of course, the 8700K previously.
As a foreword, note that the R5 2600's sustained price-point of $160 makes it a less direct comparison. The 2600X, which performs about where an overclocked 2600 does, is about $220; still cheaper, but a closer comparison. Closer still is the R7 2700, which is $250-$270 depending on sales, holding at about $270 when sales aren't active. The fairest comparison by price, then, is the 2700, not the by-name comparison with the R5 2600(X) CPUs.
AMD launched its RX 580 2048 silently in China a few months ago, and in doing so damaged its brand credibility by rebranding the RX 570 as an RX 580. The point of having those two distinct names is that they represent different products. The RX 580 2048 has 2048 FPUs (or stream processors), which happens to be exactly what the RX 570 has. The RX 580 2048 is also a few MHz higher in clock, which is fully attainable with an overclocked RX 570. Working with GamersNexus contacts in Taiwan, who then worked with contacts in China, we managed to obtain this China-only product so we could take a closer look at why, exactly, AMD thinks an RX 570 Ti deserves the name “RX 580.”
Taking an existing product with a relatively good reputation and rebuilding it as a worse product isn’t new. Don’t get us wrong: The RX 570, which is what the RX 580 2048 is, is a reasonably good card, especially at its new prices of roughly $150 (Newegg) to $180 elsewhere. That said, an RX 580 2048 is, by definition, not an RX 580. That’s lying. It is an RX 570, or maybe an RX 575, if AMD thinks that a 40MHz clock difference deserves a new SKU. AMD is pulling the same deceitful trick that NVIDIA pulled with its GT 1030 DDR4 card. It’s misleading and predatory toward consumers who may otherwise not understand the significance of the suffix “2048.” If they’re looking for an RX 580, they think they’re finding one – except it isn’t one, and to brand the RX 580 2048 as an RX 580 is disgraceful.
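The naming complaint boils down to a spec comparison, sketched below in Python. The stream processor counts are AMD's published reference figures; the boost clocks are approximate reference numbers used here only to illustrate the roughly 40MHz gap described above, so treat the exact values as assumptions rather than measured data.

```python
# Approximate reference specs; clocks are reference boost figures,
# used only to illustrate the naming problem.
specs = {
    "RX 570":      {"stream_processors": 2048, "boost_mhz": 1244},
    "RX 580 2048": {"stream_processors": 2048, "boost_mhz": 1284},
    "RX 580":      {"stream_processors": 2304, "boost_mhz": 1340},
}

# The "580 2048" matches the RX 570's shader count exactly; only the
# boost clock differs, by an amount easily reached by overclocking an RX 570.
same_sp = (specs["RX 580 2048"]["stream_processors"]
           == specs["RX 570"]["stream_processors"])
clock_gap_mhz = (specs["RX 580 2048"]["boost_mhz"]
                 - specs["RX 570"]["boost_mhz"])
print(same_sp, clock_gap_mhz)  # True 40
```

By the spec sheet, the card shares its defining characteristic (the shader count) with the RX 570, not the RX 580, which is the entire basis of the objection.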
We have a separate video scheduled to hit our channel with a tear-down of the card, in case you’re curious about build quality. Today, we’re using the DATALAND RX 580 2048 as our vessel for testing AMD’s new GPU. Keep in mind that, for all our scorn toward the GPU, DATALAND is somewhat unfortunately the host. DATALAND didn’t make the GPU – they just put it on the PCB and under the cooler (which is actually not bad). It also appears that DATALAND (迪兰) works alongside TUL, the parent company to PowerColor.
We paid about $180 USD for this card, which puts it around where some RX 570s sell for (though others are available for ~$150). Keep in mind that pricing in China will be a bit higher than the US, on average.
Finding the “best" workstation GPU isn't as straight-forward as finding the best case, best gaming CPU, or best gaming GPU. While games typically scale reliably from one to the next, applications can deliver wildly varying performance. Those gains and losses could be chalked up to architecture, drivers, and also whether or not we're dealing with a true workstation GPU versus a gaming GPU trying to fill-in for workstation purposes.
In this content, we're going to be taking a look at current workstation GPU performance across a range of tests to figure out if there is such a thing as a champion among them all. Or, at the very least, we'll figure out how AMD differs from NVIDIA, and how the gaming cards differ from their workstation counterparts. Part of this will look at Quadro vs. RTX or GTX cards, for instance, and WX vs. RX cards for workstation applications. We have GPU benchmarks for video editing (Adobe Premiere), 3D modeling and rendering (Blender, V-Ray, 3ds Max, Maya), AutoCAD, SolidWorks, Redshift, Octane Bench, and more.
Though NVIDIA's Quadro RTX lineup has been available for a few months, review samples have been slow to escape NVIDIA's grasp, likely because few software solutions can yet take advantage of the new features. That excludes deep-learning tests, which can benefit from the Tensor cores, but for optimizations derived from the RT core, we're still waiting. It seems likely that Chaos Group's V-Ray will be one of the first renderers on the market to support NVIDIA's RTX, though Redshift, Octane, Arnold, Renderman, and many others have planned support.
The great thing for those planning to go with a gaming GPU for workstation use is that where rendering is concerned, the performance between gaming and workstation cards is going to be largely equivalent. Where performance can improve on workstation cards is with viewport performance optimizations; ultimately, the smoother the viewport, the less tedious it is to manipulate a scene.
Across all of the results ahead, you'll see that there are many angles from which to view workstation GPUs, and that there isn't really such a thing as a one-size-fits-all – not like there is on the gaming side. There is such a thing as an ultimate choice, though, so if you're not afraid of spending substantially above the gaming equivalents for the best performance, there are models vying for your attention.
As we continue our awards shows for end of year (see also: Best Cases of 2018), we’re now recapping some of the best and worst CPU launches of the year. The categories include best overall value, most well-rounded, best hobbyist production, best budget gaming, most fun to overclock, and biggest disappointment. We’ll be walking through a year of testing data as we recap the most memorable products leading into Black Friday and holiday sales. As always, links to the products are provided below, alongside our article for a written recap. The video is embedded for the more visual audience.
We’ll be mailing out GN Award Crystals to the companies for their most important products for the year. The award crystal is a 3D laser-engraved GN tear-down logo with extreme attention to detail and, although the products have to earn the award, you can buy one for yourself at store.gamersnexus.net.
As a reminder here, data isn’t the focus today. We’re recapping coverage, so we’re pulling charts sparingly and as needed from a year’s worth of CPU reviews. For those older pieces, keep in mind that some of the tests are using older data. For full detail on any CPU in this video, you’ll want to check our original reviews. Keep in mind that the most recent review – that’ll be the 9600K or 9980XE review – will contain the most up-to-date test data with the most up-to-date Windows and game versions.
This content stars our viewers and readers. We charted the most popular video cards over the launch period for NVIDIA’s RTX devices, as we were curious if GTX or RTX gained the most sales in this time. We’ve also got some AMD data toward the end, but the focus here is on a shifting momentum between Pascal and Turing architectures and what the consumers want.
We’re looking exclusively at what our viewers and readers have purchased over the two-month launch window since RTX was announced. This samples several hundred purchases, but is in no way a representative sample of the whole market. Keep in mind that we have a lot of sampling biases here, the primary one being that this is our audience – these are people who are more enthusiast-leaning, likely buy higher-end, and probably follow at least some of our suggestions. You can’t extrapolate this data market-wide, but it is an interesting cross-section for our audience.
Although the year is winding down, hardware announcements are still heavy through the mid-point of November: NVIDIA pushed a major driver update and has done well to address BSOD issues, the company has added new suppliers to its memory list (a good thing), and RTX should start getting support once Windows updates roll out. On the flip-side, AMD is pushing 7nm CPU and GPU discussion as high-end server parts hit the market.
Show notes below the embedded video.
Hardware news coverage has largely followed the RTX 2080 Ti story over the past week, and it's a story of dying cards in unknown quantities, reported online. We have been investigating the issue further and have a few leads on what's going on, but are waiting for some of the dead cards to arrive at our office before proceeding further. We also learned about Pascal stock depletion, something that's been a curious topic given the slow uptake on RTX.
Further news items include industry discussion on Intel's outsourcing to TSMC, its hiring of former AMD graphics staff, and dealings with 14nm shortages. Only one rumor is present this week, and that's of the nearly confirmed RX 590.