Tearing open the RX Vega 56 card revealed more of what we expected: a Vega Frontier Edition card, which is the same as Vega 64, which is the same as Vega 56. It seems AMD ran the same PCB & VRM for all of these cards and increased volume accordingly, thereby ensuring MOQ is met and theoretically lowering cost across the product stack. That said, the price also increases unnecessarily for the likes of Vega 56, which has one of the most overkill VRMs a card of its class could possibly carry -- especially given the native current and power constraints enforced by the BIOS. Regardless, we're working on power table mods to bypass these constraints, despite the alleged Secure Boot compliance by AMD.
We posted a tear-down of the card earlier today, though it is much the same as the Vega Frontier Edition -- and by "much the same," we mean "exactly the same." Though, to be fair, V56 does lack the TR6 & TR5 security screws of FE.
Here's the tear-down:
“Indecision” isn’t something we’ve ever titled a review, or felt in general about hardware. The thing is, though, that Vega is launching in the midst of a market that behaves completely unpredictably. We review products as a value proposition, looking at performance per dollar and coming to some sort of unwavering conclusion. Turns out, that’s hard to do when the price is “who knows” and availability is uncertain. Mining is behind all of this, of course: AMD is launching a card in the middle of boosted demand, and so prices won’t stick for long. The question is whether the inevitable price hike will match or exceed the price of competing cards. NVIDIA's GTX 1070 should be selling below $400 (a few months ago, it did), the GTX 1080 should be ~$500, and the RX Vega 56 should be $400.
Conclusiveness would be easier with at least one unchanging value.
Storing multiple terabytes of video content monthly is, obviously, a drive-intensive business -- particularly when using RAID for local editing scratch disks, a NAS for internal server access, and remote web backup. Rather than buy more drives and build a data library that is both impossible to manage and impossible to search, we decided to use our disks more intelligently and begin compressing b-roll as it falls into disuse. Deletion is the final step, at some point, but the compressed footage is small enough to be a non-concern right now. We're able to compress our b-roll anywhere from 50-86%, depending on what kind of content is contained therein, and do so with nearly zero perceptible impact to content quality. All that's required is a processor with a lot of threads, as that's what we wrote our compression script to use, and some extra power each month.
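As a rough illustration -- not our actual script -- a thread-hungry batch compression job can be sketched in Python around ffmpeg. The codec choice (HEVC via libx265), CRF value, and function names here are placeholder assumptions for the sketch:

```python
# Hypothetical sketch of a batch b-roll compression script.
# Assumes ffmpeg with libx265 support is installed and on PATH.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

def build_cmd(src, dst, crf=23):
    """Build an ffmpeg command that re-encodes a clip to HEVC,
    copying the audio stream untouched."""
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx265", "-crf", str(crf),
            "-c:a", "copy", dst]

def space_saved(original_bytes, compressed_bytes):
    """Fraction of space saved, e.g. 0.86 means 86% smaller."""
    return 1 - compressed_bytes / original_bytes

def compress_all(clips, out_dir, workers=os.cpu_count()):
    # ffmpeg is internally threaded, but queuing several encodes in
    # parallel helps keep a high-core-count CPU saturated.
    os.makedirs(out_dir, exist_ok=True)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for clip in clips:
            dst = os.path.join(out_dir, os.path.basename(clip))
            pool.submit(subprocess.run, build_cmd(clip, dst), check=True)
```

The 50-86% range quoted above corresponds to a compressed file weighing anywhere from half down to roughly one-seventh of the original size, which is why archiving rather than deleting remains viable for a while.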
Threadripper saw use recently in a temporary compression rig for us, as we wanted to try the CPU out in a real-world use case for our day-to-day operations. The effort can be seen below:
Computers have come a long way since their inception. Some of the first computers (built by the military) used electromagnets to calculate torpedo trajectories. Since then, computers have become almost incomprehensibly more powerful and accessible, to the point at which virtual reality headsets aren’t even science fiction.
In gaming PCs, these power increases have often gone toward higher FPS, faster game mechanics, and more immersive graphics settings. The computational power in modern PCs can be put toward a variety of other applications, though. Many uses -- design, communication, servers, and so on -- are well known, but one lesser-known use is contributing to distributed computing programs such as BOINC and Folding@home.
BOINC (Berkeley Open Infrastructure for Network Computing) and Folding@home (also sometimes referred to as FAH or F@H) are research programs that use distributed computing to provide researchers with large amounts of computational power without the need for supercomputers. BOINC allows users to support a variety of projects (including searching for extraterrestrial life, running molecular simulations, predicting the climate, etc.). In contrast, Folding@home is run by Stanford and is a single program that simulates protein folding.
First we’ll discuss what distributed computing is (and its relation to traditional supercomputers), then we’ll cover some notable projects we’re fond of.
Visiting AMD during the Threadripper announcement event gave us access to a live LN2-overclocking demonstration, where one of the early Threadripper CPUs hit 5.2GHz on LN2 and scored north of 4000 points in Cinebench. Overclocking was performed on two systems, one using an internal engineering sample motherboard and the other using an early ASRock board. LN2 pots will be made available by Der8auer and KINGPIN, though the LN2 pots used by AMD were custom-made for the task, given that the socket is completely new.
The launch of Threadripper marks a move closer to AMD’s starting point for the Zen architecture. Contrary to popular belief, AMD did not start its plans with desktop Ryzen and then glue modules together until Epyc was created; instead, the company started with an MCM CPU more similar to Epyc, then worked its way down to Ryzen desktop CPUs. Threadripper is the fruition of this MCM design on the HEDT side, and benefits from months of maturation for both the platform and AMD’s support teams. Ryzen was rushed in the weeks leading to its launch, which showed in both communication clarity and platform support in the early days. As things smoothed over and AMD resolved many of its communication and platform issues, Threadripper arrived positioned to benefit from those improvements.
“Everything we learned with AM4 went into Threadripper,” one of AMD’s representatives told us, and that became clear as we continued to work on the platform. During the test process for Threadripper, work felt considerably more streamlined and remarkably free of the validation issues that had once plagued Ryzen. The fact that we were able to instantly boot with 3200MHz (and 3600MHz) memory gave hope that Threadripper would, in fact, be the beneficiary of Ryzen’s learning pains.
Threadripper will ship in three immediate SKUs: the 16-core/32-thread 1950X, the 12-core/24-thread 1920X, and the 8-core/16-thread 1900X.
Respectively, these units are targeted at price points of $1000, $800, and $550, making them direct competitors to Intel’s new Skylake-X family of CPUs. The i9-7900X is the flagship of that family – for now, anyway – and the one most heavily challenged by AMD’s Threadripper HEDT CPUs. Today's review looks at the AMD Threadripper 1950X and 1920X CPUs in livestreaming benchmarks, Blender, Premiere, power consumption, temperatures, gaming, and more.
This episode of Ask GN (#56) revisits the topic of AMD's Temperature Control (Tctl) offset on Ryzen CPUs, aiming to demystify why the company has elected to implement the feature on its consumer-grade CPUs. The topic was resurrected thanks to Threadripper's imminent launch, just hours away, as the new TR CPUs also include a 27C Tctl offset. Alongside this, we talk Threadripper CPU die layout diagrams and our use of dry erase marker (yes, really), sensationalism and clickbait on YouTube, Peltier coolers, Ivy Bridge, and more.
For a separate update on what's going on behind the scenes, our Patreon backers may be happy to hear that we've just posted an update on the Patreon page. The update discusses major impending changes to our CPU testing procedure, as Threadripper's launch will be the last major CPU we cover for a little while. Well, a few weeks, at least. That'll give us some time to rework our testing for next year, as our methods tend to remain in place for about a year at a time.
Following an initial look at thermal compound spread on AMD’s Threadripper 1950X, we immediately revisited an old, retired discussion: thermal paste application methods and which one is “best” for a larger IHS. With most of the relatively small CPUs, like the desktop-grade Intel and AMD CPUs, it’s more or less been determined that there’s no real, appreciable difference between application methods. Sure – you might get one degree Celsius here or there, but the vast majority of users will be just fine with the “blob” method. As long as there’s enough compound, it’ll spread fairly evenly across Intel i3/i5/i7 non-HEDT CPUs and across Ryzen or FX CPUs.
Threadripper feels different: it’s huge, with the top of the IHS measuring 68x51mm, and significantly wider on one axis. Threadripper also has a unique arrangement of silicon, with four “dies” spread across the substrate. AMD has told us that only two of the dies are active, and that it should be the same two on every Threadripper CPU, with the other two branded “silicon substrate interposers.” Speaking with Der8auer, we believe there may be more to this story than what we’re told. Der8auer is investigating further and will be posting coverage on his own channel as he learns more.
Anyway, we’re interested in how different thermal compound spreading methods may benefit Threadripper specifically. Testing will focus on the “blob” method, X-pattern, parallel lines pattern, Asetek’s stock pattern, and AMD’s recommended five-point pattern. Threadripper’s die layout looks like this, for a visual aid:
Because of the central spacing, we are most concerned with covering the two clusters of dies, not the center of the IHS; that said, it’s still a good idea to cover the center, as that is where the cooler’s copper is densest and most efficient.
Our video version of this content uses a sheet of Plexiglass to illustrate how compound spreads as it is applied. As we state later in the video, this is a nice, easy mode of visualization, but not really an accurate way to show how the compound spreads when under the real mounting force of a socketed cooler. For that, we later applied the same NZXT Kraken X62 cooler with each method, then took photos to show before/after cooler installation. Thermal testing was also performed. Seeing as AMD has permitted several other outlets to post their thermal results already, we figured we'd add ours to the growing pool of testing.
Modern humans used to hang undesirables in the town square and light “witches” aflame. For lack of a witch, PC hardware enthusiasts prefer to seek out companies that other internet users have suggested as wrongdoers. Legitimate or not, the requirement to stop and think about something need not apply here – we need more rage for the combustion engine; this thing doesn’t run on neutrality.
Posting a concern about a product, like Reddit user Kendalf did, cannot be praised enough. This type of alert gets attention from manufacturers and media alike, and means that we can all work together to determine (1) whether there is actually an issue, and (2) how we can fix it or work around it. The result, hopefully, is stronger products. As stated in the lengthy conclusion below, though, it’s an unfortunate side effect that other commenters then elect to blow things out of proportion out of a need to feel upset about something. There’s always room for a one-off defect, for misunderstandings of features, or just a bad batch. There’s also room for a manufacturer to really screw up, so it depends on the situation. Ideally, the mobs remain at bay until numerous people have actually verified something.
This week’s hardware news recap goes over some follow-up AMD coverage, closes the storyline on Corsair’s partial acquisition, and talks new products and industry news. We open with AMD RX Vega mining confirmations and talk about the “packs” – AMD’s discount bundling intended to help get cards into the hands of gamers.
The RX Vega discussion is mostly to confirm an industry rumor: we’ve received reports from contacts at AIB partners that RX Vega will be capable of mining at 70MH/s, which is around double current RX 580 numbers. This will lead to more limited supply of RX Vega cards, we’d suspect, but AMD has been trying to plan for this with its “bundle packs” – purchasers can spend an extra $100 to get discounts. Unfortunately, nothing says those discounts must be spent, and an extra $100 isn’t going to stop miners who are used to paying 2x prices, anyway.
Show notes below.
We moderate comments on a ~24-48 hour cycle. There will be some delay after submitting a comment.