We’re comparing prices based on the CPU alone, but motherboards are an important hidden cost associated with CPUs. That isn’t the focus of this piece, but of the chips listed above, boards compatible with the 1920X are generally the most expensive. Brand new socket TR4 motherboards are all $200+ on Newegg, while AM4 and Coffee Lake-compatible LGA1151 boards are both cheaper and more common on the used market. New and used X79 boards are extremely overpriced, but Xeon-compatible boards with other chipsets tacked on are very cheap from Chinese sellers.
CPU Test Methodology
Our CPU testing methodology is split into two categories: gaming benchmarks and workstation workloads, and every sufficiently high-end CPU goes through both sets of tests. We are putting more effort into publicly documenting the exact versions of our tests, hoping this is helpful to readers. We are also detailing the unit of measurement more explicitly in text, although our charts typically do this as well. Our workstation benchmarks include the following tests:
- 7-ZIP Compression benchmark (version 1806 x64). Unit of measurement: MIPS (millions of instructions per second; higher is better)
- 7-ZIP Decompression benchmark (version 1806 x64). Unit of measurement: MIPS (millions of instructions per second; higher is better)
- 7-ZIP dictionary size is 2^22, 2^23, 2^24, and 2^25 bytes, 4 passes and then averaged. Thread count equals the CPU thread count.
- Blender 2.79 GN Logo render (frame from GN intro animation, heavy on ray-tracing). Unit of measurement: Render time in minutes (lower is better)
- Blender 2.79 GN Monkey Heads render (CPU-targeted workload with mixed assets, transparencies, and effects). Unit of measurement: Render time in minutes (lower is better).
- GNU Compiler Collection (GCC) version 7.4.0, compiling 8.2.0 on Windows 10. Unit of measurement: Compile time in minutes (lower is better). Run with Cygwin environment.
- Chaos Group V-Ray CPU Benchmark (1.0.8). Unit of measurement: Render time in seconds (lower is better)
- Cinebench R15 (used for internal validation). Unit of measurement: CB Marks (higher is better)
- TimeSpy Physics. Unit of measurement: 3DMark points & FPS (higher is better)
- Adobe Photoshop CC 2019 (Puget 18.10). Unit of measurement: Average score (higher is better)
- Adobe Premiere & AME CC 2019 (GN test suite: 1080p60 convention shot; H.264, 35Mbps, 5.2, High profile, AAC+Version 2, Audio 256K). Unit of measurement: Render time in AME (lower is better). CUDA enabled.
- Adobe Premiere & AME CC 2019 (GN test suite: 4K60 aroll+broll; H.264, 35Mbps, 5.2, High profile, AAC+Version 2, Audio 256K). Unit of measurement: Render time in AME (lower is better). CUDA enabled.
- Adobe Premiere & AME CC 2019 (GN test suite: 4K60 charts; H.264, 35Mbps, 5.2, High profile, AAC+Version 2, Audio 256K). Unit of measurement: Render time in AME (lower is better). CUDA enabled.
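As a sketch of the averaging step described in the 7-ZIP entries above, the per-pass MIPS results at each dictionary size can be flattened into a single mean. The function and the result figures below are illustrative stand-ins, not real bench data.

```python
# Hypothetical sketch of averaging per-pass 7-Zip MIPS results across
# dictionary sizes, as the methodology above describes. Numbers are
# illustrative, not real test data.

def average_mips(results):
    """Flatten {dictionary_size_bytes: [per-pass MIPS]} into one mean score."""
    all_passes = [mips for passes in results.values() for mips in passes]
    return sum(all_passes) / len(all_passes)

# Four passes at each of the four dictionary sizes (2^22 through 2^25 bytes).
compression_runs = {
    2**22: [61000, 61200, 60800, 61100],
    2**23: [60500, 60700, 60600, 60400],
    2**24: [60100, 60300, 60000, 60200],
    2**25: [59800, 59900, 59700, 60000],
}

print(round(average_mips(compression_runs)))  # → 60394
```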
All tests are conducted multiple times for parity and then averaged, with outliers closely and manually inspected. The number of times tested depends on the application and its completion time. We use an internal peer review process where one technician runs tests, then the other reviews the results (applying basic logic) to ensure everything looks accurate. Any stand-out results are reported back to the test technician and rerun after investigation. Error margins are also defined in our chart bars to help illustrate the limitations of statistical relevance when analyzing result differences. These are determined by taking thousands of test results per benchmark and determining standard deviation for each individual test and product. Any product that has significant excursions from the mean deviation will be highlighted in its respective review.
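A rough sketch of how an error margin like this can be derived from repeated runs and used to flag stand-out results; the two-sigma threshold and the sample run times below are illustrative assumptions, not our actual tooling.

```python
# Illustrative sketch of the error-margin process described above: compute
# a per-test standard deviation from repeated runs, then flag any result
# that sits outside the margin. Threshold and data are invented examples.
import statistics

def error_margin(samples, sigmas=2.0):
    """Return (mean, margin) where margin is sigmas * sample stdev."""
    mean = statistics.mean(samples)
    return mean, sigmas * statistics.stdev(samples)

def outliers(samples, sigmas=2.0):
    """Return samples farther than the margin from the mean."""
    mean, margin = error_margin(samples, sigmas)
    return [s for s in samples if abs(s - mean) > margin]

# Eight render times in minutes; 18.4 is a stand-out result to investigate.
runs = [16.8, 16.7, 16.9, 16.8, 16.7, 16.9, 16.8, 18.4]
mean, margin = error_margin(runs)
print(f"{mean:.2f} +/- {margin:.2f} min, outliers: {outliers(runs)}")
```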
GN CPU Test Bench 2019
|Part|Component|Courtesy of|
|---|---|---|
|CPU|This is what we're testing!|GN, Intel, & AMD|
|Motherboard|See article, changes per CPU|Various|
|RAM|GSkill Trident Z 4x8GB 3200 CL14|GamersNexus|
|Video Card|EVGA RTX 2080 Ti XC Ultra|EVGA|
|PSU|EVGA SuperNOVA T2 1600W|EVGA|
|CPU Cooler|NZXT Kraken X62 280mm|NZXT|
|SSD|Samsung 860 EVO 250GB|GN|
|Project/Game SSD|Samsung 860 PRO 1TB|GN|
Motherboards used are varied based upon platform. Where compatible, we used the following:
- Gigabyte X570 Master FC5
- ASUS Maximus XI Hero Z390
- ASUS Crosshair VII Hero X470
NVIDIA driver version 430.86 is used. Adaptive sync is not used in testing.
MCE is always disabled on test platforms, ensuring that turbo boost durations run within the specifications set by the CPU manufacturer. We also keep an eye out for other motherboard trickery, like MSI’s oft-boosted BCLK, and reset to stock settings when applicable. XMP is enabled on the GSkill memory in our test benches.
Production benchmarks are probably the most important here, so we’ll start there and get into the gaming benchmarks later. Production tests were what the Threadripper series initially performed best with, like tile-based renderer Blender, ray-tracing renderers like V-Ray, or compression and decompression workloads.
We’ll start with Blender and the GN in-house monkey head render, our long-standing stress test that leverages various graphics techniques to load the CPU. For Blender, the goal is to render the scene as fast as possible. Blender is animation software, so each frame is only a fraction of a second of what would be played back. Every second counts here, but our stress test runs long to really illustrate the importance of higher-end devices for render workloads.
For this one, the AMD Threadripper 1920X manages to finish in 16.8 minutes, which puts it about tied with an 8-core 9900K at 5.2GHz. Compared to a 12-core CPU from 2013, the Intel E5-2697v2, performance is significantly improved. We mention the 2697 v2 because it’s available in abundance when buying used, and is still selling for about $200, or about the same price as a 1920X. The 2697 v2 demonstrates one of the clear downsides of buying older, used hardware, and that’s the lack of newer instruction sets that accelerate processes, not to mention the slimmed-down I/O options. The 1920X outperforms the 2697 v2 with a render time decrease of about 32%, or about 8 minutes for one frame. That puts the 2697 v2 at rough equivalence with another $200 part -- the stock AMD R5 3600 from this year. The $200 1920X is still outperforming the $200 3600 for Blender, but there’s a lot more to get through. For perspective, the modern AMD R9 3900X finishes its render in about 12.8 minutes, but has the same core and thread count as the 1920X. It’s significantly more expensive, but completes the render in 23.8% less time.
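The percentage figures above follow a straightforward time-reduction calculation, sketched below. The 2697 v2's roughly 24.8-minute time is inferred from the 8-minute delta quoted above, not a directly reported figure.

```python
# Sketch of the render-time comparisons quoted above. A "render time
# reduction" is measured relative to the slower part's time.

def time_reduction_pct(faster_minutes, slower_minutes):
    """Percent reduction in render time of the faster part vs. the slower."""
    return (slower_minutes - faster_minutes) / slower_minutes * 100

# 1920X (16.8 min) vs. E5-2697 v2 (~24.8 min, inferred): ~32% reduction
print(round(time_reduction_pct(16.8, 24.8), 1))  # → 32.3
# 3900X (12.8 min) vs. 1920X (16.8 min): ~23.8% reduction
print(round(time_reduction_pct(12.8, 16.8), 1))  # → 23.8
```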
For our GN Logo render, we get a real-world look at a Blender animation we made for our own video intros. There are 120 frames to this animation, so if you take each of these render times and multiply it against 120, you’d get the full scope of how much render time matters. We used something like 5 GPUs to render this faster, but single CPUs are tested here.
This one is a little more intense and uses a lot of ray tracing, so instruction sets can come into play. The 1920X finishes this render in about 20.3 minutes, putting it about 4 minutes slower than the 1950X and 3900X alike, the two of which are equivalent in this test. The $200 E5-2697 v2 takes 35 minutes to complete the same render, showing its age in a serious way, while the R5 3600 at $200 finishes the same job in 31 minutes. Even though core count is among the most important factors in Blender, it’s clearly not all that matters: frequency and instruction compatibility also play a big role.
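To put the per-frame times in perspective against the full 120-frame animation mentioned above, here is a quick back-of-envelope sketch using the quoted per-frame figures.

```python
# Scaling per-frame logo render times up to the full 120-frame animation,
# to show how per-frame deltas compound over a real project.
FRAMES = 120

def total_hours(minutes_per_frame, frames=FRAMES):
    """Total render time in hours for the whole animation on one CPU."""
    return minutes_per_frame * frames / 60

print(round(total_hours(20.3), 1))  # 1920X: → 40.6 hours
print(round(total_hours(35.0), 1))  # E5-2697 v2: → 70.0 hours
```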
7-ZIP is next, testing both compression and decompression. We measure in millions of instructions per second for this test, so higher is better.
Starting with decompression, the 3900X and 1950X are similar in performance, and they’re also close in price. We’d probably favor the 3900X on average, unless a Threadripper 2000-series upgrade is in the future. The 1920X is what we’re really here for, and that one -- despite equivalent core count with the 3900X -- allows the stock 3900X a lead of 21%. This lead is partially attributable to clock speed and to cache: there’s less L1 on the 3900X than the 1920X, but twice as much L3, at 64MB versus 32MB. The 3900X also runs a higher base clock at 3.8GHz versus 3.5GHz, not to mention boost clock deltas. The other 12-core here, the 2697 v2, ends up at about 68K MIPS, roughly tied with an R7 1700 and a bit behind the AMD R5 3600 -- also $200 presently. For this workload, the 2697 v2 would need to fall further in price on the used market to be competitive (its price is partly propped up by its dual-socket capability). The 1920X actually does well here at $200. The 1950X is priced too close to the 3900X to make sense, but the 1920X at $200 may be worth considering for a cheap solution. X399 is becoming a dead platform, so keep that in mind, too.
Adobe Premiere is next. This one surprised us when the 3900X managed to outpace the 9900K for an AMD vs. Intel first in Premiere. Starting first with our 1080p show floor report example, the 1920X manages to complete the render in about 4.1 minutes, with the $200 R5 3600 behind at 4.8 minutes, allowing the 1920X a time reduction of 15%. The 2697 v2 takes 5.5 minutes, suffering from its frequency more than anything and demonstrating limited value in this scenario. The R9 3900X stock CPU completes the render in 3.4 minutes, allowing a reduction of 17% versus the 1920X, but costing significantly more at present. For comparison, the R7 2700X -- currently about $195 -- is roughly tied with the 1920X when overclocked.
The 4K60 render takes longer and slightly modifies the stack, but not much. The 3900X leads with a 9-minute render time. Although Intel HEDT isn’t yet represented, it’s doing better than the 9900K. The main point remains the 1920X, where we see a render time of 11 minutes, allowing the 3900X a 19% lead for about 2x the price. The 1920X again places itself in a position where it may be worth considering for a machine dedicated to this task, but mostly if you already have a motherboard or other components ready for use. It certainly shuffles the used Xeon market -- or it should, anyway -- because the $200 E5-2697 v2 has no value in this workload.
V-Ray by Chaos Group is next. This is another rendering application, but focused on ray tracing specifically. The V-Ray test positions the 1920X at a completion time of 54 seconds, with the 1950X at 45 seconds and tying the 3900X. The 1920X and 3900X have core parity, but deltas in clocks allow the 3900X a render time reduction of 17%. Not bad value for the 1920X. The 9900K and 3700X aren’t too distant from the 1920X, but its value at the new $200 price-point makes it a worthy consideration. The 3600, also $200, and the i5-8400, nearing $200, are both bad enough at this test to not even be considered for someone focusing on using applications like this. They’d be more worth considering in our later gaming tests, but get killed in some production workloads when compared to the 1920X. That said, the 3600 is still a fantastic launching point to production tasks, but at price parity, the 1920X takes the win in this test.
Adobe Photoshop is next, using a test suite that times various filter, warp, translation, scale, and photo effect operations, then converts the results into a higher-is-better score.
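As a hedged sketch of the kind of time-to-score conversion described above: one common approach compares each task's completion time against a reference time, so faster-than-reference runs score above 100. The task names and reference times below are invented for illustration; this is not necessarily Puget's actual formula.

```python
# Hypothetical time-to-score conversion: each task's measured time is
# compared against a reference time, and the per-task scores are averaged
# into one higher-is-better number. All figures here are invented.

def task_score(measured_s, reference_s):
    """Score a single task; 100 means exactly reference speed."""
    return 100 * reference_s / measured_s

def overall_score(results, references):
    """Average the per-task scores into one composite."""
    scores = [task_score(results[t], references[t]) for t in results]
    return sum(scores) / len(scores)

references = {"warp": 10.0, "scale": 4.0, "filter": 8.0}  # seconds, invented
results = {"warp": 9.0, "scale": 5.0, "filter": 8.0}      # seconds, invented

print(round(overall_score(results, references), 1))  # → 97.0
```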
Photoshop has a known preference for frequency, and in a big way. This is most easily demonstrated by pointing at the 9900K at 5.1GHz versus the 9700K at 5.1GHz, where both CPUs are within a few points of each other, despite a halving of threads for the 9700K. It should be no surprise, then, that the 1920X struggles to prove value in this workload. At 811 points, the 1920X is close to the R5 2600 and the overclocked R7 1700, and worse than the $200 i5-8400. The 1950X does worse than the 1920X here because of its base clock deficit, and the E5-2697 v2 isn’t even close to being worthwhile for this workload. If you’re a heavy Photoshop user and that’s most of what you do, skip the 1920X. Even at $200, an R5 3600 would be significantly better -- that one’s hitting 957 points, a lead of about 18% over the 1920X.
The utility of a 12C/24T CPU in gaming is still questionable, even as Intel and AMD both are pushing high core count CPUs in their main desktop lines. It’s up to game developers to code software that (at the bare minimum) doesn’t perform worse with 8+ threads, and they’re getting there slowly. The 1920X may have an advantage here in that AMD’s Ryzen Master can activate Game Mode on all Threadripper parts, which disables some cores in a way that should optimize for gaming performance. It’s a pain in the ass since it requires rebooting every time it’s enabled or disabled, but it’s a little easier from a user perspective than going into BIOS and manually toggling features.
We didn’t test overclocking on Threadripper parts this time around because it’s only become more pointless. Even on first gen Threadripper CPUs, overclocking really only offered an advantage in heavy all-core workloads, with the achievable all-core OC generally falling short of the stock single-core boost. The 8400 and the 2697 v2 are locked parts, leaving the 3600, which we managed to get up to 4.3GHz all-core.
Starting with the Civilization VI benchmark, the 1920X scored an average turn time of 38.5 seconds, or slightly worse at 41.7 seconds with Game Mode enabled. Game mode disables half the cores but leaves SMT enabled, making it a 6C/12T part, essentially a 1600X. The R5 3600 reduced average turn time by 6.5% stock versus stock at 36 seconds flat, with slight improvements from disabling SMT (35.2 seconds) or overclocking to 4.3GHz (35.3 seconds). The i5-8400 was also slightly faster than the stock 1920X at 37.5 seconds, while the old E5-2697 v2 is far and away the worst performer at 47.2 seconds average. Because of its age, the Xeon uses DDR3 2400MHz memory downclocked to 1866MHz rather than the DDR4 3200MHz memory used with the rest of our CPUs.
GTA V at 1080p had the 1920X at an average of 82.4FPS, largely unchanged by enabling Game Mode other than worse 1% and .1% lows. The R5 3600 has a strong 26.6% advantage here at 106.8FPS average, unaffected by overclocking but improved further by disabling SMT for a best-case average of 108.9FPS. Since having 12 threads instead of 6 clearly isn’t an advantage here, the 6C/6T i5-8400 also put up a good fight with an average of 102.6FPS. The old Xeon with no overclocking capability and without the advantage of disabled cores chugged along at just 65.5FPS average, with lows dipping noticeably below 60FPS. At 1440p the story is the same, with averages for all four CPUs remaining almost unchanged. None of them approach a CPU bottleneck in GTA V with our current settings.
F1 is a game that runs at extremely high framerates even on the oldest and slowest CPUs we test, but it’s still useful for comparative benchmarking. At 1080p, the 1920X ran at 204.4FPS average, with a very slight improvement up to 207.6FPS from enabling Game Mode, but with worse 1% lows. The 3600 scored much higher at about 272FPS average whether it ran stock, overclocked, or with SMT disabled, and the 8400 also did fairly well at 261.6FPS average. That’s 28% beyond the 1920X, and the 8400 isn’t even the newer 9400. The E5-2697 v2 scored 188.4FPS average, which would be great in any test except this one, and had .1% lows that dipped below 60FPS.
At 1440p this title was affected by GPU bottlenecking with the 8400 and the 3600, but the Xeon and the 1920X scored almost as well as they did at 1080. Even with the GPU limitation, the 3600 outscored the 1920X by 16.6% here.
Hitman 2 is the first of two DX12 titles that we currently test. The Threadripper 1920X averaged 100.2 FPS, with an especially bad reaction to Game Mode bringing it down to 86.1 FPS average. The 8400 still technically averaged higher than the 1920X, but it’s actually almost tied for once, with an average of 101.1 FPS. 1% and .1% lows are abysmal for every CPU in this test, but the averages are still comparable. The 3600 averaged 115 FPS, or 116.2FPS with the 4.3GHz overclock, and performed significantly worse with SMT disabled this time at 106.6FPS. The low-frequency 2697 v2 continued its streak of awful scores with a 76.6 FPS average. Hitman 2 exhibits almost exactly the same performance at 1440p as at 1080p with all of these CPUs -- that’s one of the reasons we like it as a benchmark.
SHADOW OF THE TOMB RAIDER
Shadow of the Tomb Raider is the second of the DX12 games, but the 1920X averaged 105.2FPS here with no reaction to Game Mode other than a dip in 1% and .1% lows. The 8400 performed much better again, with a 31.7% advantage in average FPS at 138.5FPS average. The 3600 also performed well at 137.6 FPS average, with a slight bump from overclocking but a more dramatic decrease down to 131.6FPS from disabling SMT. Perhaps Tomb Raider’s relatively positive reaction to high thread counts is why the 2697 v2 averaged 100.2 FPS, still the worst of our four $200 CPUs but not so far behind the 1920X this time.
ASSASSIN’S CREED: ORIGINS
In Assassin’s Creed: Origins the 1920X averaged a respectable 112.4 FPS. Finally we’ve come to a game where it outperforms the 6C/6T i5-8400, which averaged 103.5 FPS. Even the R5 3600 didn’t do much better than the 1920X, with a 114.5 FPS average stock and 116.2 FPS average with the 4.3GHz overclock. Disabling SMT hurt performance, but unfortunately that didn’t mean a good score for the 2697 v2, which only managed 79 FPS. The stack remained the same at 1440p, but compressed slightly by GPU limitations.
TOTAL WARHAMMER (Battle)
The Battle benchmark is the less CPU-constrained of our two Total War: Warhammer 2 benchmarks, but there’s still scaling. The 1920X averaged 119.7 FPS, improving slightly to 124.5 FPS average with Game Mode enabled. This is not a game that responds well to more threads. The i5-8400 doesn’t need to worry about that and therefore averaged a healthy 156.8 FPS, and the R5 3600 landed just above it at 158.9 FPS average. Disabling SMT pushed the 3600 even higher, up to 166.4 FPS. The 2697 v2 did extremely poorly here, just managing 90.6 FPS average -- it’s one of only a few CPUs on the chart so far to average less than 100FPS in this benchmark.
TOTAL WARHAMMER (Campaign)
The campaign benchmark demands less of the GPU and is therefore more CPU-dependent. The 1920X repeated its positive Game Mode scaling from the Battle benchmark, rising from a 110.5 FPS average to 124.6 FPS. The i5-8400 has less of an advantage over the Game Mode 1920X than it did in the Battle benchmark at 147.6FPS average, but that’s still an 18.5% uplift. The 3600 averaged 155.2 FPS, but again did best with SMT turned off, rising to 165.1 FPS. The 2697 v2 continues to plod along under the 100FPS threshold at 96.6 FPS average.
Conclusion: Is the 1920X Worth $200 in 2019?
This has been a pretty casual comparison of the $200 parts we have in-house, but there are some other things to take into account before seriously deciding whether the 1920X is worth it. First, as we mentioned earlier, TR4 is an uncommon socket type and the motherboards will only get rarer and more expensive. That’s because, to our present knowledge, TR4 isn’t going to be used in the immediate future, and future non-TR4 Threadripper motherboards won’t support the 1920X. Investing in the 1920X now narrowly limits the potential upgrade path, assuming that information is true.
The E5-2697 v2 is similarly limited, with the added downside that it’s a DDR3 platform with an old featureset, but compatible motherboards are cheap on Aliexpress and it’s a tempting purchase for tinkering around with. Part of the price of the 2697 is due to the fact that it’s dual-socket capable, so there are much better deals to be had on lower tier single-socket Xeons. It’s a novelty at this point, but it’s also not the worst purchase for rendering and multithreaded work on a budget (as long as the instruction set isn’t too outdated for you).
The i5-8400 and the i5-9400 are artificially limited CPUs without hyperthreading and without the ability to overclock. That’s nothing new, but it doesn’t make it any less annoying. The 8400 was fine for gaming back when we first reviewed it with only first-gen Ryzen to compare against, but the 9400 is just a refresh with a 100MHz bump and the same price as the 8400 was at launch. MSRP is lower, but still. Now that AMD is selling a higher performing, unlocked CPU that allows SMT for the same price, the i5s are a much tougher sell.
Which brings us to the R5 3600. It’s a modern CPU, it works with a vast array of AM4 motherboards, and it costs the same as the 1920X. The question “is the 1920X worth it” really boils down to whether a 1920X is better than an R5 3600, and it isn’t, except in thread-heavy workloads like tile-based rendering. For anyone who wants to game at all, the 3600 is a better deal and a better CPU in general. Even someone low on cash who only needs to render Blender files will have to add the cost of a TR4 motherboard, and that total quickly approaches the cost of an R9 3900X with a B450 or X470 motherboard.
Editorial, Testing: Patrick Lathan
Host, Test Lead: Steve Burke
Video: Josh Svoboda, Andrew Coleman