CPU: Threadripper 1950X
CPU (AMD Threadripper 1950X): The first part is the 1950X. We originally planned to do this with the Threadripper 1920X, but the $200 price drop on Amazon placed the 1950X right around where the 1920X was originally selling – so it didn’t increase our budget target, but it got us a better CPU for our intended render tasks. For Blender workloads on the CPU, all that really matters is thread count. Frequency helps, as always, but a high thread count is our first priority. The 1950X solves this. It’s also power efficient, which is a plus, and is pretty easy to cool.
At $800, the 1950X is an especially good deal for a high-end production PC. This isn’t something that’s built for games, of course, but that’s not why you buy a 32-thread processor.
(Above: Legacy 2.78 benchmark)
In terms of benchmarks, we previously tested the 1950X in Blender, with the performance results shown above. Not surprisingly, outside of the 7980XE at $2100, the 1950X is among the chart-topping CPUs. The high thread count is particularly useful for rendering simple orthographic images, where more tiles can be in flight simultaneously for faster renders.
Alternative: Threadripper 1920X
Video Card: ASUS GTX 1080 Ti Strix (& multi-GPU plans)
GPU (ASUS Strix 1080 Ti): We’re using the ASUS GTX 1080 Ti Strix card for the build, which just won our Best Overall award in our GTX 1080 Ti awards. To briefly recap, the Strix version of the 1080 Ti has the best noise-to-thermal performance that we’ve found on an air-cooled card this year, and manages to keep MOSFET and VRAM temperatures proportionally low relative to the GPU core.
Even at a noise-normalized output of 40dBA, the Strix card was only ever beaten by liquid-cooled cards, and it requires less internal real estate while still maintaining competitive VRM temperatures.
This means higher clocks, as Pascal drops clocks above 60C, and it also means we can keep the build’s operating noise a bit lower.
We’re planning to add another card to this temporary build, then run two workers for CPU+GPU rendering simultaneously.
Motherboard: Gigabyte X399 Designare
Motherboard (Gigabyte X399 Designare): We’re using the Gigabyte X399 Designare motherboard for this build. This is the first time we’ve used the Gigabyte board for Threadripper; up until now, it’s all been done on the $500 ASUS Zenith Extreme. Gigabyte’s board comes in $100 cheaper, but sticks with an 8-phase VRM that uses 50A IR3556 power stages with a familiar IR35201 voltage controller. Gigabyte is using a 3+2-phase memory VRM with 40A power stages for DDR4 voltage.
The heatsink is somewhat finned – way better than most of the other artsy heatsinks out right now, so we can give Gigabyte major credit for that. It’s not the ideal fin density, but it’s better than a fat block of aluminum. The airflow from the top-mounted CLC amply cools the VRM on this board.
Ignoring the more obvious VRM features, we also like the insane number of fan headers that Gigabyte put on this board, and we did put them to use. There are seven fan headers in total, all 4-pin PWM, and all seven sit on the border of the motherboard. This means no routing cables hidden behind video cards or crammed between EPS12V headers – a small, seemingly insignificant, but excellently thought-out design choice.
The Gigabyte BIOS needs serious improvement, but it’s mostly navigable. We’d like to see Gigabyte expose more options at the top level, rather than burying everything in sub-menus, and we’d also like to see them move away from the slide-in menus on the bottom and sides.
RAM: GSkill 32GB 3200MHz Trident Z RGB
RAM (GSkill 32GB 3200MHz Trident Z RGB): Memory’s rough right now. We want quad-channel for this one, and we need at least 32GB for the projects. Unfortunately, there’s no good way to get around spending $400 or more on a 32GB kit of memory from a retailer. We’re using stuff we already have, of course, but for a new build, you’re really getting screwed on the memory. Rather than thinking of the $200 discount on the 1950X as savings, think of it more as your extra budget for memory. That’ll help neutralize the current overpriced nature of DRAM. 3200MHz is enough for what we’re doing, and going beyond that will largely net diminishing returns given the tremendous price increases.
PSU: Seasonic Prime 1000W Platinum
PSU (SeaSonic 1000W Prime Platinum): OK, this is admittedly where we went a bit high-end on the system, but we’ve recently developed a serious appreciation for high-end power supplies. In our own production machines, high-end PSUs have saved us from short circuits, over-current events, and power surges, and have sustained long uptimes under abusive loads. For anyone who needs reliability, the investment is actually worth it. Sure, you can drop down to an $80-$100 PSU and be fine – yes, we know. But this is what we wanted to use specifically for a high-end production machine. Seasonic’s Prime series has good options between 80 Plus Gold and Titanium, depending on how crazy overkill you want to go, and has options in the 850W to 1000W range that we’d recommend. 850W would be enough for the 1950X and the 1080 Ti, but if you’re preparing for multi-GPU rendering configurations, you’ll want a higher-wattage PSU to accommodate multiple cards. We’ll let you make that call. It will particularly hinge on whether you’re using the 1950X and 1080 Tis simultaneously, or whether one is doing most of the work.
As for power consumption, using the Platinum-class Seasonic unit (more efficient than Gold, though perhaps needlessly so), we measured the following numbers.
With this build in high-performance mode and at stock clocks, we’re drawing 115W idle, 272W with a multithreaded Cinebench workload, 128W single-threaded, and 274W with a single CPU instance of Blender. Spinning off two Blender instances, one on the GPU and one on the CPU, gets us to 425W. We’ve done things like this in the past when we needed renders finished as quickly as possible, but couldn’t fit more cards in the machine: we would set the CPU to render the first quarter of the frames, then let the GPUs handle the rest.
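The quarter/three-quarters frame split described above can be sketched in a few lines. The frame-range flags (`-b`, `-s`, `-e`, `-a`) are Blender’s real CLI options; the scene filename, the 25% split ratio, and the Cycles device argument are illustrative assumptions, not our actual pipeline:

```python
# Sketch: split an animation's frames between a CPU worker and a GPU worker,
# as described above (CPU renders the first quarter, GPU the rest).

def split_frames(start, end, cpu_share=0.25):
    """Return (cpu_range, gpu_range) tuples covering start..end inclusive."""
    total = end - start + 1
    cpu_end = start + max(1, int(total * cpu_share)) - 1
    return (start, cpu_end), (cpu_end + 1, end)

def worker_cmd(scene, frames, device):
    """Build a headless Blender render command for one worker."""
    s, e = frames
    # -s/-e must precede -a; args after "--" go to the engine (device choice).
    return ["blender", "-b", scene, "-s", str(s), "-e", str(e), "-a",
            "--", "--cycles-device", device]

cpu, gpu = split_frames(1, 240)
print(cpu, gpu)                                   # (1, 60) (61, 240)
print(" ".join(worker_cmd("scene.blend", cpu, "CPU")))
print(" ".join(worker_cmd("scene.blend", gpu, "CUDA")))
```

In practice you would launch both commands in parallel (e.g. via `subprocess.Popen`) and let each worker write to its own output directory.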
At 425W for the peak load, we still have more room to work with for the overclock. We later overclocked the CPU to 4GHz all-core at 1.35V, then set the GPU to a +75MHz core offset and a +500MHz memory offset, with the power target maxed. That put us at 580W for the dual CPU+GPU render, and increased draw by 40W in the gaming test and by 130W in Cinebench. Keep in mind that a 4GHz overclock on the 1950X will sometimes deliver worse performance, as in gaming, because games can leverage XFR boosts beyond 4GHz. With 4GHz all-core, we’ll have worse single- and quad-thread performance, but better all-thread performance.
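As a sanity check on PSU sizing, even the overclocked 580W peak sits near the middle of the 1000W unit’s load curve, where efficiency is typically best. A rough calculation, using the 80 Plus Platinum 50%-load figure of 92% (115V) as an assumption to convert wall draw to DC-side load:

```python
# Rough PSU headroom check for the measured peak wall draws quoted above.
psu_rated_w = 1000          # SeaSonic Prime 1000W (DC output rating)
platinum_eff_50 = 0.92      # 80 Plus Platinum spec figure at 50% load, 115V

for label, wall_w in [("stock CPU+GPU render", 425),
                      ("overclocked CPU+GPU render", 580)]:
    dc_w = wall_w * platinum_eff_50            # approximate DC-side load
    load_pct = dc_w / psu_rated_w * 100
    print(f"{label}: ~{dc_w:.0f}W DC, ~{load_pct:.0f}% of rating")
```

Even fully overclocked with both workers running, the build loads the PSU to roughly half its rating, leaving room for the additional GPUs we plan to add.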
Case: Thermaltake View 71
Case (Thermaltake View 71): We chose the Thermaltake View 71 for the enclosure. We’ve worked with a few full towers this year, like the Be Quiet! Dark Base Pro 900 – White Edition, but we liked the View 71 for this build. The case is surprisingly well ventilated, thanks to large gaps around all the panels, and accommodates our Enermax Liqtech 360mm radiator well.
We relocated some things in this case. Although it has mostly sufficient airflow for our setup, particularly with the liquid cooler, we ultimately decided to relocate the rear 140mm Riing fan to the front, then install an average blackout NZXT fan in the back. This was for three primary reasons: One, it looks better, so that’s easy. Two, we wanted to feed more air straight into the GPUs, as we are planning to add a couple more cards to this build to accomplish some renders. Three, we’ve got three exhaust fans set up with the Liqtech unit, so we needed to make sure it was neither suffocating nor stealing all the air.
It’s a damn heavy build, thanks to all that tempered glass, but this thing shouldn’t be moving once it’s in place.
Cooler: Enermax Liqtech TR4 360
CPU Cooler (Enermax Liqtech TR4 360): For the cooler, we’re using that 360mm Liqtech unit. We could have gone with the NH-U14S, but we wanted two things: One, the ability to scale up our GPU configuration to fit as many cards as possible for rendering, which means needing slot clearance, and two, lower noise-normalized performance. The Liqtech cooler is expensive, though, so anyone who can get by on air, or is going for a single-card configuration, would do excellently with the much cheaper Noctua NH-U14S – a perfectly good cooler, as our thermal testing charts show. The Liqtech is a high-quality, well-built block that is easily refilled, so we’ve liked it thus far. We’ve been very impressed with the Liqtech TR4’s overall quality – honestly, speaking frankly here, we typically wouldn’t expect this level of quality from Enermax, but they’ve done well on this one.
Here’s a thermal vs. time chart for the Liqtech cooler under an intense load. For this workload, we were torturing the system with a simultaneous Prime95 AVX workload and a moderate GPU workload, meaning we’re throwing off nearly the maximum heat potential in the case. Even so, we’re at around 56.8 degrees Celsius Tdie, with the GPU also under load.
Alternative: Noctua NH-U14S TR4
Storage: Whatever you want
We’re actually going to be using network storage for this, but a RAID configuration would be well-suited to this build. Data redundancy and speed requirements vary heavily based on the user, so this is really an area where you should already have an idea of your needs if you plan to build a similar system. We have used WD Reds in RAID 5 for production systems with limited drive bays, though a RAID 10 setup might also be good for a machine like this. We’re just going to run an SSD for the OS and a 10-gigabit cable to our local server, which solves all of our storage concerns instantly.
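For reference when weighing the RAID levels mentioned above, usable capacity and fault tolerance work out differently per level; a quick sketch (drive counts and sizes here are hypothetical, not our actual array):

```python
# Usable capacity for the RAID levels discussed above.
# RAID 5: striping with distributed parity, loses one drive's worth of space,
#         survives any single drive failure.
# RAID 10: mirrored stripes, loses half the space, survives one failure per
#          mirror pair. Drive counts/sizes below are illustrative only.

def raid_usable_tb(level, drives, size_tb):
    if level == 5:
        return (drives - 1) * size_tb
    if level == 10:
        return drives // 2 * size_tb        # requires an even drive count
    raise ValueError("unsupported RAID level")

for level, drives in [(5, 4), (10, 4)]:
    usable = raid_usable_tb(level, drives, 4)
    print(f"RAID {level}, {drives}x 4TB drives: {usable}TB usable")
```

With four 4TB drives, RAID 5 yields 12TB usable versus 8TB for RAID 10; RAID 10 trades that capacity for better rebuild behavior and random-write performance.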
That’s it for this one. It’s doing well in our rendering and will soon be unbuilt, but this was a fun project for the weekend.
Editorial, Testing: Steve Burke
Build, Testing: Patrick Lathan
Video: Andrew Coleman