
GN's Lab: Our High-End Video Production & NAS Rig

Posted on August 11, 2015

I've never had GN's lab so fully equipped before. It's an exciting period of growth for us. For the first time in the site's seven-year history, it feels appropriate to slap the “lab” label on our multi-room testing setup. Following weeks of intensive cleaning and organization, we've now got shelving units installed (all grounded and ESD-compliant) that house motherboards, video cards, CPUs, and more – much of it bought out of pocket – and a complex network of systems.

The brain of the network is our rendering rig, which sits opposite my main production system. The rendering rig is used exclusively to render and edit videos for GamersNexus, with the primary user being GN's Keegan Gallick. The system hosts RAID HDDs that are utilized for all of our test data, video media, and photo media.

We put together a quick video showcasing the rig:

The specs are as follows:

Component | Part | Courtesy Of
CPU | Intel i7-4960X Six Core | iBUYPOWER
Motherboard | ASUS Rampage IV Black Edition | GamersNexus
Memory | 32GB HyperX Beast DDR3-2133 + 32GB HyperX Beast (other kits); memory tuning in BIOS, see below | Kingston
Video Card | MSI GTX 980 Gaming 4G | CyberPower (Syber)
SSDs (Primary) | 2x HyperX Savage SSD 240GB, RAID 0 (striped) | Kingston
HDDs (Archival) | 3x 2TB WD Red, RAID 5 (striped with parity) | GamersNexus
Case | NZXT H440 | NZXT
Cooling | Asetek 550LC | Asetek
Power | Be Quiet! Dark Power Pro 10 | Be Quiet!
Switch | Linksys Gigabit Switch | GamersNexus

We're doing in-house benchmarking on CPU overclocking, memory overclocking, and memory capacity (is it better to have 32GB of faster memory or 64GB of slower memory for editing? We want to find out). These will likely be published in shorter form out of pure curiosity. Until I can present hard data, I'm excited to note that the 4960X X79 6-core CPU, quad-channel RAM, and CUDA acceleration allow us to output videos at a 1m-to-1m rate. By this, I mean that it takes approximately one minute to render one minute of footage, processed using H.264 at 28Mbps (1080p60). Applying color correction and two-pass encoding (which eliminates ghosting) doubles the time, which really isn't bad. Post-processing effects bringing a 10-minute video to a 20-minute render is completely agreeable.
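For the curious, here's a quick back-of-the-envelope sketch of that render math. The 1:1 ratio and the 2x multiplier for color correction plus two-pass encoding are the approximate figures quoted above, not measured constants:

```python
# Rough render-time estimate for the rig described above.
# Assumption: ~1 minute of render time per minute of 1080p60 H.264 footage
# at 28Mbps, and roughly 2x that once color correction and two-pass
# encoding are applied. Both ratios come from the paragraph above.

def estimated_render_minutes(footage_minutes, color_and_two_pass=False):
    base_ratio = 1.0                 # ~1 render minute per footage minute
    multiplier = 2.0 if color_and_two_pass else 1.0
    return footage_minutes * base_ratio * multiplier

print(estimated_render_minutes(10))                           # ~10 minutes
print(estimated_render_minutes(10, color_and_two_pass=True))  # ~20 minutes
```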

We've also, very sadly, “upgraded” the internet to a 5Mbps upstream rate, which is among the fastest available in the area. Google Fiber is expected later this year and will help, but for now, we've got 5Mbps up. That means it takes approximately 5.5 seconds to upload 1 second of footage. Better not to think of it that way, really. The internet is now our biggest bottleneck.
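That "seconds per second of footage" figure is just bitrate division – a rough sketch that ignores protocol overhead and assumes the exported file stays at its 28Mbps target bitrate:

```python
# Upload time per second of footage = export bitrate / upstream bandwidth.
footage_bitrate_mbps = 28.0   # 1080p60 H.264 export bitrate from above
upstream_mbps = 5.0           # our upload rate

seconds_per_footage_second = footage_bitrate_mbps / upstream_mbps
print(seconds_per_footage_second)   # 5.6 -- in the same ballpark as the ~5.5s
                                    # above; the exact number depends on the
                                    # final export bitrate

# A 10-minute video at that bitrate:
print(10 * seconds_per_footage_second)  # ~56 minutes spent uploading
```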

The OS is installed on RAID 0 SSDs (HyperX Savage, which we granted an Editor's Choice award) at 240GB each, supplying somewhere in the range of 400-460GB of usable space. RAID 0 stripes the SSDs, so we don't have data redundancy on the primary drive configuration. This was a conscious decision. Because the SSDs will strictly house active media (media in the edit bay), the OS, and software, we'd rather opt for speed over redundancy. It is trivial to reinstall the platform in the event of a RAID or drive failure. We've created images on our in-house image distribution server, which allow us to redeploy the entire configuration in about an hour if a drive drops from the RAID.
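The "somewhere in the range" hedge is mostly a units artifact: SSDs are marketed in decimal gigabytes, the OS reports binary units, and RAID 0 simply sums its members. A rough sketch of where the number lands (filesystem overhead and any overprovisioning vary by setup):

```python
# RAID 0 capacity: member drives are summed; nothing is reserved for redundancy.
drives = 2
drive_size_gb = 240                       # decimal GB, as marketed

raw_gb = drives * drive_size_gb           # 480 GB (decimal)
as_reported_gib = raw_gb * 1e9 / 2**30    # ~447 "GB" as the OS reports it

print(raw_gb, round(as_reported_gib))     # 480 447 -- filesystem overhead trims
                                          # this further, hence the 400-460GB range
```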

Our media storage is handled via 3x 2TB WD Red HDDs, configured in RAID 5 (striped, with distributed parity). This combines into 4TB of usable space, with the remaining third of capacity allocated to parity for redundancy. We lose some speed by building a RAID 5 configuration, but this grants us two things we want: cost savings (by avoiding RAID 10, which requires a fourth drive) and redundant data (which we want for our media capture – we don't want to lose years of video). If a drive in our RAID 5 config fails, we can hot-swap in a replacement and rebuild the array live.
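Likewise, the RAID 5 numbers follow the usual (n-1) rule: one drive's worth of capacity goes to distributed parity, and the rest stays usable. A small sketch:

```python
# RAID 5 usable capacity = (number of drives - 1) * per-drive capacity.
# One drive's worth of space holds parity, distributed across all members.
drives = 3
drive_size_tb = 2

usable_tb = (drives - 1) * drive_size_tb   # 4 TB usable
parity_tb = drive_size_tb                  # 2 TB (one third) spent on parity

print(usable_tb, parity_tb)   # 4 2 -- the 4TB usable / one-third redundancy split above
```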

A spare GTX 980 (non-Ti) has been deployed for use in CUDA acceleration. It is our untested theory that some of our Kepler cards may be higher-performing for this task (much of the gaming optimization of Maxwell is lost on arithmetic render output), but we don't have spare Kepler units for permanent use in the render rig.

[Photo: the render rig]

The motherboard is an ASUS Rampage IV Black Edition that was purchased for use with the 4960X. We've already put the board's extensive BIOS settings to use, tuning per-module memory settings to ensure compatibility across our various kits of HyperX DDR3 Beast memory. We've got one kit of 4x8GB at 2133MHz with fast timings, plus two other kits with unmatched timings and speeds. Using the advanced BIOS tweaking settings, we were able to achieve relative stability across all memory kits and avoid crashing – which happened before fine-tuning, as expected when mixing and matching kits. I've got Patrick Stone, our hardware editor, to thank for his patience and technical work on manipulating the RAM and BIOS.

We are considering a CPU overclock in the near future, though we're holding off on that for now. The CPU is presently cooled by a mid-range Asetek 550LC liquid cooler, which was pulled from our Copper vs. Aluminum Coldplate benchmarking endeavors. We're certainly pushing the limits of the cooler when under load. In the event we opt to overclock the CPU, it is possible we will move to a higher-end Asetek unit once benchmarking is complete on those.

The PSU is a bit more than necessary, but it's something we had available. The unit installed is Be Quiet!'s Dark Power Pro 10 1200W. We only need about 650-700W (safely) for this ensemble, but an 80 Plus Gold PSU with an insane level of protections made good sense. During power flickers, the Be Quiet! unit has stayed up thanks to the long hold-up time of its capacitors. This warrants use in the render rig, wattage overkill or no.

And for the enclosure, it's all housed in NZXT's H440, which we also awarded an Editor's Choice badge. The enclosure was selected for its sturdy build and sleek design. A mid-tower also allows the build to be tucked away more easily so that it can comfortably live on the floor. We wanted something that would be easy to clean and maintain going forward. The case is at its limits for HDD support, though, so we may need a larger storage solution in the next 1-2 years.

Finally, the entire lab is networked through a new Linksys Gigabit switch that we bought. I'm very happy with the switch purchase and will be expanding our networking capabilities in the future. With Patrick Stone's assistance, we made a few Ethernet cables to wire two test benches, my main system, the render rig, and a couple other cycling machines. The switch allows near-instant transfer of large media and test files between test benches and production machines.
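To put "near-instant" in perspective, gigabit Ethernet tops out around 125MB/s before protocol overhead, so even a multi-gigabyte capture moves in a minute or two. A rough sketch with an illustrative 10GB file (real throughput will be somewhat lower once drives and overhead factor in):

```python
# Rough best-case transfer time over the gigabit switch.
link_mbps = 1000                       # gigabit Ethernet line rate
link_mb_per_s = link_mbps / 8.0        # ~125 MB/s before protocol overhead

file_size_gb = 10                      # illustrative capture/project file size
transfer_seconds = file_size_gb * 1000 / link_mb_per_s

print(round(transfer_seconds))         # ~80 seconds at line rate
```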

Just thought I'd share. We're very excited about the growth of the site and the increased efficiency granted by this high-end lab setup. I'll post more 'GN Lab Tour' items in the future, as we continue to clean up and find permanent homes for the machines.

- Steve “Lelldorianx” Burke.