02:27 | AMD Confronted By Own Past Bullsh#@
AMD’s RX 6500 XT is easily one of the worst video cards we’ve ever reviewed, in part because it cuts the PCIe interface down to 4 lanes and the memory down to 4GB. As we said in our review, either of these cuts alone might have been survivable, but making both was a major blow to performance.
Kitguru noticed that AMD’s pompous, chest-beating 2020 blog post about 4GB of VRAM being insufficient had suddenly disappeared. That’s right: When AMD had nothing to offer except pointless amounts of VRAM, like 16GB on cards from years ago, it leaned heavily on marketing against NVIDIA’s often lower-VRAM solutions.
Before the launch of the 4GB RX 6500 XT, AMD thought it would be a good idea to hide its old blog post stating that 4GB of VRAM is “not enough for today’s games.”
AMD reinstated the post after being called out, but there’s no way this was an accident: AMD’s other posts remained accessible the entire time, so it wasn’t a server outage.
How the turn tables: The RTX 3050, launching soon in direct opposition to the 6500 XT, will have 8GB of VRAM while AMD runs 4GB and a PCIe Gen4 x4 interface, functionally guaranteeing that the card will have to fall back to system memory as it exhausts its VRAM, tanking performance as shown in our benchmarks.
06:07 | Intel Makes Bitcoin Mining ASIC
Intel recently announced that it is developing an ultra-low-power Bitcoin-mining ASIC named BZM2. The nature of an ASIC, or Application-Specific Integrated Circuit, is that the hardware isn’t particularly capable of doing anything outside of its one task: mining Bitcoin. ASICs have existed for a long time and GPU mining has been dead on Bitcoin for years, so Intel will be competing primarily with established ASIC makers in the space, like Bitmain. Intel already has customers for its new chip.
Anandtech wrote that the new miner will use a 7nm process and run at 2.5W while producing 137GH/s. Those figures are for the chip alone, not a complete system, so the finished mining box built around BZM2 chips will draw considerably more power.
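Converting those claimed figures into the usual mining-efficiency metric (joules per terahash) is simple arithmetic; this is a rough sketch using only the chip-level 2.5W and 137GH/s numbers reported above, not any official Intel efficiency spec:

```python
# Back-of-envelope efficiency from the reported BZM2 figures (chip-level, not a full system).
power_w = 2.5              # reported chip power draw in watts
hashrate_ths = 137 / 1000  # 137 GH/s expressed in TH/s

# Joules per terahash = watts / (terahashes per second)
j_per_th = power_w / hashrate_ths
print(f"{j_per_th:.1f} J/TH")  # ~18.2 J/TH
```

For context, that chip-level figure would sit comfortably ahead of the roughly 30 J/TH class of current ASIC systems, though system-level overhead (controllers, PSU losses, cooling) will eat into that advantage.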
At its current numbers, Intel’s miner would be among the most efficient solutions on the market. Intel’s reasoning for entering the market is simple and should be predictable, as an amorphous publicly traded entity: It sees money here, and states that crypto mining will grow for at least the next 4 years.
We previously reported on Intel rejecting the prospect of hard-blocking mining applications on its consumer Arc GPUs; if those cards end up being capable miners, though, they’d more likely be deployed for Ethereum ahead of that network’s planned switch to proof-of-stake.
09:01 | AMD RAMP: An XMP Alternative
In a list of upcoming changes to the HWINFO software, it was spotted (via Wccftech) that the program is adding support for AMD’s yet-to-be-announced “AMD RAMP.” The text mentioning RAMP has since been replaced with “Enhanced support of future AMD AM5 platforms.” However, the developer behind HWINFO took to the Computer Base forums to confirm both the existence and function of RAMP, which is essentially a modern successor to the company’s previous memory acceleration and profile technology, AMP.
AMD has long toiled to bring an XMP counterpart to market, both in the form of AMP and A-XMP. However, neither one of those really took off, and they certainly did little to crack Intel’s iron-clad grip on memory overclocking and on-board memory profiles, which Intel uses as massive leverage both for marketing and to line its coffers, as there are royalty costs associated with manufacturers using XMP on their motherboards. That latter point is why we’ve seen vendors like Asus release their own memory overclocking tech for AMD-based boards, like DOCP (Direct Over Clock Profile).
With RAMP, AMD is looking to take another shot at memory profiles and acceleration, and to introduce a more official counterpart to Intel’s XMP. RAMP is set to be supported on AMD’s upcoming Socket AM5 and will work with DDR5, starting with Ryzen 7000/Zen 4.
11:02 | ASRock First to Offer Zen 3 Support on X370
Not long ago, AMD publicly stated that the company was actively looking into how best to implement support for Zen 3 CPUs on the older 300-series chipsets. At the time, AMD cited some of the unique challenges it was facing with adding support for newer CPUs on older chipsets, such as the well-documented 16MB SPI ROM capacity struggles, as well as more power-hungry CPUs being dropped into motherboards that aren’t up to the task of powering them.
At any rate, it seems that either ASRock has jumped the gun, or AMD has made a whole lot of progress in a short time, or some of both. Over at ASRock’s website, the company is now listing several Ryzen 5000 CPUs as supported on ASRock’s X370 Pro4 motherboard, including the Ryzen 9 5950X and Ryzen 9 5900X. There’s also support for several of AMD’s Renoir-based CPUs. ASRock notes that many of these models will require a BIOS update performed with an already-supported CPU.
This could have some interesting implications; we’ll wait and see which other motherboard makers follow ASRock’s lead.
12:29 | Nvidia Releases RTX 3080 12GB SKU
Nvidia has unceremoniously launched one of its worst-kept secrets in recent memory: The RTX 3080 12GB. Nvidia didn’t bother with any press release or marketing hype, instead relegating news of the new SKU to a passing mention in an article about Nvidia’s DLSS support and new Game Ready driver for the recent PC port of God of War. Nvidia did state that the RTX 3080 12GB is not intended as a successor to the vanilla RTX 3080 (10GB), but will exist as an additional SKU, further complicating Nvidia’s lineup.
As for the RTX 3080 12GB itself, the card of course brings an increased VRAM buffer, but there are also a few changes under the hood. The bandwidth gain doesn’t come from the capacity itself, but from the slightly wider 384-bit memory bus that accommodates it; the vanilla RTX 3080 comes equipped with a 320-bit bus. The GDDR6X appears to be the same 19Gbps variety found on the RTX 3080 10GB. That puts the RTX 3080 12GB at an on-paper memory bandwidth of around 912GB/s, versus 760GB/s for the RTX 3080 10GB.
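Those on-paper bandwidth figures fall straight out of bus width and per-pin data rate; a quick sanity check of the numbers above:

```python
# Peak memory bandwidth = (bus width in bits × per-pin data rate in Gbps) / 8 bits per byte
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gbs(384, 19))  # RTX 3080 12GB: 912.0 GB/s
print(bandwidth_gbs(320, 19))  # RTX 3080 10GB: 760.0 GB/s
```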
The RTX 3080 12GB also uses Nvidia’s GA102 silicon, albeit a more capable version: Nvidia has enabled two additional SMs over the GA102-200 variant found in the RTX 3080 10GB, bringing the SM count from 68 up to 70. This changes the rendering engine a bit, as the shader count increases to 8,960 CUDA cores and the RT core count rises to 70.
Elsewhere, the base clock has actually been dialed down a bit to 1.26GHz, from the 1.44GHz base clock of the RTX 3080 10GB, while the boost clock remains 1.71GHz for both SKUs. This is most likely a tradeoff for the increased 350W TDP and the extra power needed for the additional memory and enabled transistors.
Nvidia isn’t selling a Founders Edition of the RTX 3080 12GB, nor is the company offering any official MSRP or pricing guidance. That means customers are at the mercy of the inflated GPU market and Nvidia’s AIB partners when it comes to determining a baseline price. As the RTX 3080 12GB is effectively sandwiched between the RTX 3080 and RTX 3080 Ti, we’d expect an MSRP somewhere south of the $1,200 MSRP of the RTX 3080 Ti. However, given the current price and supply climate surrounding GPUs, we’re already seeing prices swell to $1,400 for certain RTX 3080 12GB models that have trickled onto the market.
15:13 | Puget Systems’ Most Reliable Hardware of 2021
Puget Systems, a company specializing in custom workstations and servers, has spent the last couple of weeks recapping some trends it has gleaned in sifting through its sales data. It’s through this lens that Puget Systems is now sharing a look into the company’s failure rates (be it in the field or in the shop) for the hardware used in its workstations over the last few years, with an end-goal of determining the most reliable hardware the company used in 2021.
Puget Systems defines “shop failures” as anything typically considered DOA (dead on arrival) or a failure caught during Puget Systems’ burn-in and stress-testing process, whereas “field failures” occur while the component is in the customer’s possession.
With CPUs, Puget noted that Intel’s Xeon W-3300 and 12th-Gen Core product lines are new enough that the company doesn’t have enough sales data to fairly compare them to other CPUs. Beyond that, Puget observed that Intel’s Xeon W-2200 series and Xeon Scalable family were among the most reliable CPUs in its workstations, while Intel’s 11th-Gen Core series saw the most recorded failures, both internally and in the field with customers, with an internal failure rate of 5.3%.
For memory, Puget split its chart into three categories: DDR4 3200MHz, DDR4 3200MHz ECC, and DDR4 3200MHz ECC Registered. Puget stated that all of the memory it uses for its systems is DDR4 clocked at 3200MHz, though the company expects that to change in 2022 with the arrival of DDR5. At any rate, regular non-ECC DDR4 saw the highest combined failure rate, while DDR4 ECC Registered proved the most reliable, perhaps to no one’s surprise.
With GPUs, Puget uses Nvidia’s cards and has broken the charts out as follows: NVIDIA GeForce RTX 30 Series Founders Edition; Asus, EVGA, Gigabyte, MSI, and PNY GeForce RTX 30 Series; NVIDIA Quadro RTX Series; and NVIDIA Professional RTX A Series. The RTX A series is Nvidia’s successor to the Quadro line, as Nvidia dropped the Quadro branding last year for upper-end RTX workstation cards. Interestingly, Puget’s data shows Nvidia FE cards faring better than those from Nvidia’s AIB partners, and the Quadro RTX series has the most shop failures of any card. Puget noted that this is due to a manufacturing problem with the USB-C VirtualLink port on Quadro RTX 4000 cards that affected a large portion of Puget’s inventory.
For storage, Puget’s data shows both WD Red and WD Ultrastar HDDs having the highest combined failure rates, and Puget noted the rates for both groups were similar despite the drives targeting very different customers. Conversely, Samsung’s 870 EVO and QVO SSDs had zero failures across more than 1,000 units sold in Puget’s machines.
17:49 | Sony to Mitigate PS5 Shortage with More PS4s
According to a report from Bloomberg, Sony will maintain production of its PS4 console throughout 2022, despite earlier unconfirmed reports that the company would cease production after 2021. Bloomberg, citing sources familiar with the matter, states that Sony told its assembly and manufacturing partners late last year that it would continue PS4 production into 2022.
By keeping the PS4 alive, Sony will add “about a million” PS4 consoles to the supply this year and offer prospective customers a cheaper alternative while offsetting the constrained supply of Sony’s PS5 consoles. Additionally, Bloomberg states that this strategy is “seen within the company as a means to fill the supply vacuum and keep gamers within the PlayStation ecosystem, according to a Sony official who is not authorized to speak publicly.”
18:43 | AMD Aims to Avoid Intel’s “ILM” Woes With AM5
In a new report from Igor’s Lab, it seems AMD’s transition to an LGA package with Ryzen 7000 and Socket AM5 will aim to avoid some of Intel’s missteps with its LGA1700 socket and Alder Lake. Specifically, AMD’s retention mechanism for Socket AM5 will diverge quite a bit from Intel’s.
In renders and CAD diagrams obtained by Igor, AMD’s retention mechanism for its LGA Ryzen CPUs features a separate retention plate that sits behind the motherboard, replete with four insulated, threaded standoffs that allow AMD’s Socket Actuation Mechanism (SAM) to be installed. There are another four threaded standoffs for heatsink/cooler mounting. All of this allows for a lot more rigidity, and the extra mounting points spread out the mechanical stress of both the cam lever and the CPU cooler, which should help avoid bending the IHS or the CPU package itself. In that regard, it’s more similar to Intel’s X99 socket. This is more expensive to implement, but it would solve real problems.
This is something that Intel has struggled with, and something both Igor and Buildzoid have tested thoroughly. With Intel’s LGA1700 socket and Alder Lake, Intel’s cam lever (clamp), or as Intel calls it, the “Independent Loading Mechanism,” is fixed to the motherboard alone. This leads to uneven pressure across the CPU, and has seen some Alder Lake CPUs suffer a slightly bent package, which in turn leaves a gap between the CPU IHS and the coldplate of the cooler.
As we said, AMD is looking to avoid this by spreading the clamping force across several points via a rigid backplate installed behind the motherboard, though the design may pose a problem for coolers that require their own backplates. We’ll see how AMD plans to address this as we approach the Ryzen 7000/Zen 4 launch later in 2022.
21:47 | Noctua Roadmap 2022: New Dual-Tower Cooler
Noctua has updated its product roadmap for 2022, and the upcoming products include a new 120mm dual tower CPU cooler, white fans, and the liberal use of “next-generation” as an adjective to describe upcoming products. That said, it seems Noctua does have several updates to existing product lines planned for this year, especially as it pertains to fans and CPU coolers.
Likely one of the most noteworthy pieces of news is the addition of a white colorway to Noctua’s existing fans – Noctua’s long-awaited black Chromax fans with interchangeable anti-vibration pads were well received, and a welcome change from the de facto Noctua brown color scheme. Noctua actually had white fans on display at Computex 2019 (presumably prototypes) and, at the time, was targeting a 2020 release. Noctua is now targeting a Q4 2022 release date for its white fans.
Additionally, Noctua is planning a next-generation NH-D15. The NH-D15 is one of Noctua’s flagship coolers and has been around since 2014; what exactly Noctua is planning for the new model, though, is anyone’s guess. Noctua is looking at a Q4 2022 launch window for the new NH-D15 as well.
Elsewhere, Noctua is looking to expand its catalog with a 24V-to-12V converter and an 8-way fan hub, and will expand its line of 24V fans to include 40mm options. Noctua also has new Intel Xeon coolers in the oven, and even a desk fan in the works.
Writer: Eric Hamilton
Additional Reporting: Steve Burke
Video: Keegan Gallick