The script, written in PowerShell (with help from Google and GN’s Patrick Lathan & Jim Vincent), calls the HandBrake CLI and crunches all *.MTS files in the /2016/ folder into smaller *-compress.mp4 files. Because it only targets .MTS camera files, the script skips our .mp4 final product (those are never compressed, in case of emergency), yet still reduces our storage consumption by upwards of 90% on some files, or ~70% on average across all files. The script executes recursively on its host folder, digging through everything for the current year before running on permitted files.
We prohibit execution on files newer than 90 days old (thanks, Patrick), which ensures that anything semi-recent remains fully available in the event the content is incomplete or the B-roll is still in active use. An example would be some late-2016 video card footage – we still want that fully available, since we’re still using it in regular videos. Even so, the compression is largely lossless to the human eye. We tested and worked out HandBrake settings that retain high-quality output while still shrinking the overall file size. For example, one of our tear-down videos went from ~1.23GB to ~380MB after compression.
Once the compression pass is done, our script runs a simple check on HandBrake’s exit code for the last task (0 for success, nonzero for failure), then decides what to do next. If HandBrake succeeded, the old file is deleted permanently and we are left with only the compressed file. If HandBrake failed – it hasn’t yet – the script halts the task, leaves the original file in place, and waits for human input to figure out what to do next.
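A minimal sketch of that loop might look like the following PowerShell. The HandBrakeCLI path, source root, and encoder quality setting here are placeholders for illustration – not our exact production values:

```powershell
# Sketch of the compression pass. Paths and encoder settings are assumptions.
$handbrake = "C:\HandBrake\HandBrakeCLI.exe"
$root      = "Z:\2016"
$cutoff    = (Get-Date).AddDays(-90)   # skip anything newer than 90 days

Get-ChildItem -Path $root -Recurse -Filter *.mts |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    ForEach-Object {
        # Name the output <basename>-compress.mp4 alongside the original
        $out = Join-Path $_.DirectoryName ($_.BaseName + "-compress.mp4")
        & $handbrake -i $_.FullName -o $out -e x264 -q 22
        if ($LASTEXITCODE -eq 0) {
            Remove-Item $_.FullName    # success: delete the original permanently
        }
        else {
            Write-Warning "HandBrake failed on $($_.FullName); leaving original."
            break                      # halt and wait for human input
        }
    }
```

The `$LASTEXITCODE` check is what gates deletion: the original is only removed once HandBrake reports success.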
We’ll next build in logging capabilities, which will be useful when processing thousands of files and auditing which jobs succeeded or failed.
In addition to all of this, we’re now backing up our final video product to ensure the safety of our work. It’d be poor business practice not to, since we invest so much time and money into producing the content – it makes sense to protect it on external storage devices in the event of local HDD failures.
This is a two-step process: first, we move all the files in our /finished/ folder to a local Synology DS1515+ NAS with 5x 4TB HDDs. This gives us network access across all devices, gets the files off the primary production machine, and prepares them for the second step. Second, we use Synology’s (impressive) DiskStation software to automatically upload our /finished/ files to a cloud storage service. This runs with a data-rate cap so as not to interfere with normal YouTube uploads, and is scheduled for 6AM every night.
The first step is completed by using a Robocopy script, which we’re happy to share:
- robocopy Z:\2016\Finished Y:\Finished\2016 /e /XA:SH /dcopy:T
This is saved as a batch file, then called with Windows Task Scheduler twice per month in the early morning. The /e switch copies subdirectories (including empty ones), /XA:SH excludes hidden and system files, and /dcopy:T preserves directory timestamps. Robocopy (“robust file copy”) is exceptionally fast and highly customizable, making it a better fit for our needs than a Synology solution (those do exist, but we wanted full control). Robocopy is what we use to automatically back up local folders to specific folders on our Synology NAS; Synology software is what we use to upload to a cloud solution.
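For reference, a task like ours can be registered from an elevated prompt with schtasks. The task names and batch-file path below are hypothetical, and running on the 1st and 15th is just one way to approximate a twice-monthly cadence:

```powershell
# Hypothetical task names and script path; adjust /ST for your preferred
# early-morning start time. schtasks takes one day-of-month per MONTHLY task,
# so two tasks cover a twice-monthly schedule.
schtasks /Create /TN "GN-Backup-1st"  /TR "C:\scripts\gn-backup.bat" /SC MONTHLY /D 1  /ST 05:00
schtasks /Create /TN "GN-Backup-15th" /TR "C:\scripts\gn-backup.bat" /SC MONTHLY /D 15 /ST 05:00
```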
That’s your behind-the-scenes for the week! Pretty fun stuff. The video has a bit more depth on these challenges and resolutions.
Editorial: Steve “Lelldorianx” Burke
Video: Andrew “ColossalCake” Coleman