TR Leaks to GN
GN earlier received a leak from an industry source pointing toward Threadripper 3000. The documentation, intended for motherboard heatsink design, has taught us a few things about support for workstation and HEDT users. Core and thread counts were not revealed, but the document, if it proves trustworthy, claims 512KB of L2 cache per core, omits L3 cache information, and lists 4-channel DDR4 with ECC at up to 3200MT/s. This unit is titled sTRX4 HEDT, while the sWRX8 workstation unit lists 8-channel DDR4 with ECC at up to 3200MT/s.
Leaked TR Data to GamersNexus
(data table re-arranged to protect source)

| | sTRX4 HEDT | sWRX8 Workstation |
|---|---|---|
| Compute | TBD cores/threads | TBD cores/threads |
| | 512KB L2 cache per core | 512KB L2 cache per core |
| | TBD L3 cache | TBD L3 cache |
| Memory | 4-channel DDR4, ECC 3200MT/s | 8-channel DDR4, ECC 3200MT/s |
| | UDIMM | UDIMM, RDIMM, LRDIMM |
| | 2 DIMMs/channel | 1 DIMM/channel |
| | 256GB/channel capacity | 256GB/channel capacity |
| | OC support | No OC support |
| Integrated IO | 64 lanes PCIe Gen4 (16 switchable to SATA) | 96-128 lanes PCIe Gen4 (32 switchable to SATA) |
| | UART, USB, eSPI, SPI, LPC, I2C | UART, USB, eSPI, SPI, LPC, I2C |
AMD’s document notes that sTRX4 and sWRX8 are single-socket client platforms using surface-mount LGA sockets that support Family 17h models 30h to 3Fh, which seems to include some previous-generation CPUs. The alleged document notes that sTRX4 should include 64 lanes of PCIe Gen4 with 16 lanes switchable to SATA, while sWRX8 indicates 96-128 lanes of PCIe Gen4 with 32 lanes switchable to SATA. As for overclocking support, sTRX4 is marked “yes,” while sWRX8 is marked “no.” It seems that sWRX8 is a budget version of Epyc.
sTRX4/sWRX8 Thermal Requirements Leaked to GamersNexus

| Group | TDP | Tambient | Tcase MAX | Tctl MAX | Thermal Resistance (C/W) |
|---|---|---|---|---|---|
| A (HEDT) | 280W | 32C | 60C | 100C | TBD |
| B (WS) | TBD | 42C | TBD | TBD | TBD |
The document also notes thermal design requirements, and states that the CPUs will have two infrastructure groups: A and B, with A for HEDT assuming a 32C ambient, and B for WS assuming a 42C ambient. Group A has a TDP presently noted as 280W, a Tcase max of 60 degrees Celsius, and a Tctl max of 100 degrees.
We also received voltage requirements and other information, but will require more time to crawl through the extensive documentation to better understand it. We’ll report back with more information in the next news video.
Separately, the HWiNFO page has added “preliminary support” for Threadripper 3000.
Intel’s Marketing Needs Work
Intel recently re-embarked on its “realistic workload benchmarks” discussion at IFA, calling out AMD for leaning so heavily on Cinebench in its testing. Intel previously remarked, roughly in the July timeframe, that its Core i9 CPUs perform better in “Windows desktop applications,” “web performance,” and “AAA PC gaming.” More recently, the IFA discussion focuses on notebooks, using 2-in-1 usage data on rendering applications to downplay the overall importance of said applications.
A few things here: First, we agree, and have shown, that Intel performs better in gaming applications when strictly comparing SKU-to-SKU, although the value proposition is not always in Intel’s favor. Intel, we think, has forfeited the i5 segment to R5. Although the 9600K is technically a leader in raw FPS, the gap is small enough, and the deficit elsewhere large enough, that we no longer recommend i5 CPUs and instead point toward R5 3600 CPUs. That stated, Intel’s 9700K is better positioned against the 3700X in gaming at a comparable price, and the 9900K is obviously still the chart leader in raw gaming performance, albeit an expensive one.
Now, all of that stated, we strongly disagree with some of Intel’s approach to this topic. For one, Intel downplays production performance overall where AMD does well -- to nobody’s surprise, just as AMD downplays Intel’s gaming performance -- but Intel does this primarily by beating the same dead horse: that Cinebench isn’t representative of all use. We agree, it’s not, and we no longer use Cinebench for reviews because we believe Blender to be a better, more widely used application; we use it internally for our own workloads. That said, AMD is also winning in Blender, and although we don’t like Cinebench, its results aren’t too dissimilar from Blender performance.
AMD does well in Cinebench because it can, for the most part, fit everything it needs for a tile into cache, so it rarely has to reach outside of local cache to grab data, which benefits it. But tile-based rendering is also used in other applications. Even if we look to Blender, which we find much more realistic for its longer render times, AMD holds the advantage. The funny thing here is that Intel actually performs relatively better in Cinebench than in Blender, and that’s because it can hold its stock boost durations for nearly the entirety of the short benchmark. A real application like Blender, with heavy render workloads, can take variable time to complete the total render. Our benchmark is nearing a 30-minute completion time for mid-range CPUs, for instance, and is above 10 minutes for nearly all CPUs on the chart. This is long past turbo duration, so relative to AMD, Intel will look better in Cinebench than in Blender as a result of the shorter benchmark period, which makes the whole thing sort of silly.
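To illustrate why benchmark length matters here, consider a toy model of average effective clock speed over a run. All numbers below are hypothetical, chosen only to show the shape of the effect, not measured values from any CPU:

```python
# Toy model: a CPU holds its boost clock for a fixed turbo window,
# then settles to a lower sustained clock. A short benchmark spends
# most of its time boosted; a long one spends most of it settled.
# Clocks and turbo window below are illustrative, not measured.

def avg_clock_ghz(benchmark_s, turbo_s, boost_ghz, sustained_ghz):
    """Time-weighted average clock over a benchmark of given length."""
    boosted = min(benchmark_s, turbo_s)
    settled = benchmark_s - boosted
    return (boosted * boost_ghz + settled * sustained_ghz) / benchmark_s

# Short Cinebench-style run (~60s) vs. long Blender-style render (~1800s),
# assuming a hypothetical 56-second turbo window:
short_run = avg_clock_ghz(60, 56, 5.0, 4.3)
long_run = avg_clock_ghz(1800, 56, 5.0, 4.3)
print(f"~{short_run:.2f} GHz over 60s vs. ~{long_run:.2f} GHz over 1800s")
```

With these made-up figures, the short run averages about 4.95GHz while the long run averages about 4.32GHz -- the same chip looks relatively better on the shorter benchmark, which is the dynamic described above.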
Also, the 3900X now does better in our Premiere benchmarks than comparable Intel CPUs, which is a massive blow to a segment where Intel was previously advantaged. Intel still holds a very strong lead in Photoshop, to be fair, but that’s the only application where we see frequency benefit so heavily.
Friend of the site Rob Williams of Techgage.com recently wrote an article about Intel’s new marketing, calling out that Intel has begun contradicting its previous marketing. Previously, Intel worked with Maxon on Cinebench and C4D benchmarks to show leading performance, but now that AMD is using the same trick on Intel, Intel has decided its previous marketing is no longer accurate.
USB4 Specification Published
The USB Implementers Forum (USB-IF) has officially published the new USB4 specification -- and yes, there's no longer a space between USB and 4. That’s a thing now. USB4 offers twice the bandwidth of USB 3.2, with a theoretical 40Gbps of throughput, assuming the use of certified cables.
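For a quick sense of scale, here is the arithmetic behind those line rates (theoretical ceilings only; real-world throughput is lower due to encoding and protocol overhead, which this sketch ignores):

```python
# Theoretical line rates from the specs; these are ceilings, not
# real-world throughput figures.
USB32_GEN2X2_GBPS = 20  # USB 3.2 Gen 2x2
USB4_GBPS = 40          # USB4, assuming certified cables

def gbps_to_gbytes_per_s(gbps):
    """Convert gigabits per second to gigabytes per second."""
    return gbps / 8

print(gbps_to_gbytes_per_s(USB4_GBPS))   # 5.0 GB/s theoretical ceiling
print(USB4_GBPS / USB32_GEN2X2_GBPS)     # 2.0, i.e. double USB 3.2
```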
The USB-IF announced USB4 earlier this year, along with some confusing new branding. In addition to the increased bandwidth, USB4 will be backwards compatible with previous USB specifications. There’s also “universal” Thunderbolt 3 compatibility. Intel offered up the Thunderbolt 3 spec to the USB-IF, meaning any vendor wanting to offer a Thunderbolt 3-compatible device doesn’t need to license the technology. The catch, however, is that Thunderbolt 3 isn’t technically required for USB4, so implementation is optional. Additionally, any device that is Thunderbolt 3-compatible will need to undergo validation, which isn’t free. So, it remains to be seen how widely adopted Thunderbolt 3 will become.
USB4 will also offer better allocation between data and video bandwidth, and retain the Type-C connector. However, USB-IF will announce new specifications for the Type-C connector in the future to handle the new USB4 interface, power and data delivery.
Der8auer Surveys Ryzen 3000 Boost Clocks
In the wake of Ryzen 3000’s issues with reaching advertised boost clocks, Der8auer conducted a survey to assess how pervasive the issue is. The results were seemingly worse than expected.
There’s been an abundance of confusion over Ryzen 3000’s boost clock behavior, enough that AMD walked back and updated its Ryzen product pages, confirming that only one core is expected to reach the highest boost clock. Still, many users haven’t achieved the specified boost frequencies on any core at all.
Der8auer’s survey polled 2,700 participants, who ran Cinebench R15’s single-threaded benchmark while recording the maximum clock speed with HWiNFO. AMD’s R5 3600 seemed to fare the best, with roughly half of users reporting boost clocks as advertised. Faring worse, however, was the R9 3900X, with only 5.6% of users seeing the correct maximum boost clocks. Many results were anywhere between 25-100MHz shy of the correct clock speeds, and some were further off than that.
AMD Will Push BIOS Update to Fix Boost Clock Problems
The aforementioned news leads us into AMD’s response to the Ryzen 3000 boost clock issues. AMD took to Twitter to acknowledge the problem and update the community.
“AMD is pleased with the strong momentum of 3rd Gen AMD Ryzen™ processors in the PC enthusiast and gaming communities. We closely monitor community feedback on our products and understand that some 3rd gen AMD Ryzen users are reporting boost clock speeds below the expected processor boost frequency. While processor boost frequency is dependent on many variables, including workload, system design, and cooling solution, we have closely reviewed the feedback from our customers and have identified an issue in our firmware that reduces boost frequency in some situations. We are in the process of preparing a BIOS update for our motherboard partners that addresses that issue and includes additional boost performance optimizations. We will provide an update on September 10 to the community regarding the availability of the BIOS.”
So, we’ll have to wait until next week to see what AMD has in store for a fix. Here’s hoping they can get a BIOS update seeded out quickly.
AMD Renoir APUs Come With Improved Memory Support
A pair of Linux patches have ostensibly confirmed that AMD’s upcoming Renoir APUs will come equipped with an improved memory controller, supporting LPDDR4X-4266 memory. This would be a welcome upgrade over Raven Ridge and Picasso, both of which support DDR4-2400. Renoir looks to be AMD’s first chip supporting LPDDR4X memory.
One could argue that AMD needed to introduce faster memory support to compete with Intel’s forthcoming Ice Lake mobile chips, which support DDR4-3200 and LPDDR4-3733 and will feature the improved Gen11 graphics. The higher memory specification should theoretically only be a good thing, considering how sensitive AMD’s APUs are to memory speed.
Intel i9-9900KS & Cascade Lake-X Ship In October
As we mentioned previously, October could be a crowded month for high-performance CPUs, and that seems to be what’s in store. Ryan Shrout, previously of PC Perspective, took the stage at Intel's IFA presentation to announce an October ship date for the company's Cascade Lake-X HEDT parts and the super-binned i9-9900KS.
Shrout was light on details, but noted that the i9-9900KS will come with a 5GHz boost clock across all cores, while also taking the opportunity to throw a bit of shade at AMD, pointing to the fact that AMD’s Ryzen 3000 chips haven't been delivering their promised boost clocks on any core. More interestingly, Shrout also noted the increased pressure AMD has placed on Intel, saying, “The point is, we’re not taking this sitting down, we see the competition, we see the landscape as it is. We’re adjusting because we take these customers very seriously. And we want to give them the best product possible.”
ASUS and Acer Out Gaming Laptops With 300Hz Displays
ASUS and Acer have proudly trotted out the first laptops boasting displays with 300Hz refresh rates. ASUS announced the ROG Zephyrus S GX701, while Acer announced the Predator Triton 500. ASUS, for its part, is pairing the ROG Zephyrus S GX701 with an RTX 2080 clocked at 1,230MHz. ASUS says the benefits of the higher refresh rate extend beyond just raw frames per second, also improving frame times.
“Higher frequencies can also improve the experience when the refresh rate is fixed and the FPS drops below the maximum. The delay between refresh cycles is shorter, allowing the display to respond faster to fresh frames produced by the GPU. When the system is running Vsync to eliminate visual tearing, the higher frequency mitigates visible stuttering that can occur when the display is forced to wait until the next cycle to show a new frame. At 300Hz, it's ready to draw a complete new frame every 3.3ms, which matches the 3ms response time of the pixels," ASUS says.
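The 3.3ms figure in ASUS's quote falls directly out of the refresh rate. A quick sketch of the arithmetic (our own illustration, not from ASUS):

```python
# Refresh interval is simply 1000ms divided by the refresh rate,
# so a 300Hz panel is ready to draw a new frame every ~3.3ms.

def refresh_interval_ms(refresh_hz):
    """Time between display refresh cycles, in milliseconds."""
    return 1000 / refresh_hz

for hz in (60, 144, 240, 300):
    print(f"{hz}Hz -> {refresh_interval_ms(hz):.2f}ms per refresh")
```

At 60Hz, a frame that misses its refresh window waits up to ~16.7ms for the next cycle; at 300Hz, that worst-case wait shrinks to ~3.3ms, which is the stutter-mitigation argument ASUS is making.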
Acer didn’t share any configuration details, but we can assume Acer will ship the Triton 500 with an RTX 2080 as well. We can also speculate that both companies are sourcing the panel from the same place. ASUS is expected to ship the ROG Zephyrus S GX701 in October (price TBA), with Acer pegging a December launch for the Triton 500, priced at $2,800.
August Steam Hardware Survey
Valve has released its Steam Hardware Survey for August, and the biggest highlight is the trend of AMD increasing its CPU share among survey participants -- a trend that's been ongoing since July. AMD now accounts for 19% of CPU share among Steam users, while Intel has dropped to 81%.
The GTX 1060, 1050 Ti, and 1050 are still the most popular GPUs, though their grasp is slipping. Meanwhile, the RTX series continues to make steady progress, and Nvidia’s “Super” refresh has likely contributed here. The RTX 2070 is still the most popular RTX model among Steam users, up 0.19% over last month. The RTX 2060 comes in a close second, up 0.24% over last month.
The RX 580 is AMD’s most popular card, still seeing growth with a 0.07% increase in share over last month.
Editorial: Eric Hamilton
Host, Additional Reporting: Steve Burke
Video: Josh Svoboda