AMD’s technical press event yielded information on both AMD Ryzen and AMD Navi, including Ryzen overclocking details, Navi base, boost, and average clocks, architectural information and block diagrams, product-level specifications, and extreme Ryzen overclocking under liquid nitrogen. We understand both lines better now than before and can brief you on what AMD is working on. We’ll start with Navi specs, die size, and top-level architectural information, then move on to Ryzen. AMD also talked about ray tracing during its tech day, throwing some casual shade at NVIDIA in the process, and we’ll cover that here as well.

First, note that AMD did not give pricing to the press ahead of its livestream at E3, so this content will be live right around when the prices are announced. We’ll try to update with pricing information as soon as we see it, although we anticipate our video’s comments section will have the information immediately. UPDATE: Prices are $450 for the RX 5700 XT, $380 for the RX 5700.

AMD’s press event yielded a ton of interesting, useful information, especially on the architecture side. There was some marketing screwery in there, but a surprisingly low amount for this type of event. The biggest example was a thermographic image of two heatsinks meant to show comparative CPU temperature, even though the range spanned just 23 to 27 degrees, which makes the delta look astronomically large despite falling within common measurement error. The heatsink should also be hot -- that means it’s working -- and taking a thermographic image of a shiny metal object mostly shows reflected room temperature or runs into emissivity issues; ultimately, AMD should just be showing junction temperature anyway. This was our only major gripe with the event -- otherwise, the information was technical, detailed, and generally free of marketing BS. Not completely free of it, but mostly. The worst part of the comparison was the 28-degree result that fell outside the already silly 23-27 degree range, making 28 degrees look like a massive overheat.


Let’s start with the GPU side.

As we board another plane, just five days since landing home from Taipei, we're recapping news leading into next week's E3 event, positioned exhaustingly close to Computex. This recap talks AMD and Samsung partnerships on GPUs, Apple's $1000 monitor stand and accompanying cheese grater, and the Radeon Vega II dual-GPUs located therein. We also talk tariff impact on pricing in PC hardware and, as an exclusive story for the video version, we talk about the fake "X499" motherboard at Computex 2019.

Show notes below the video embed.

As we’ve been inundated with Computex 2019 coverage, this HW News episode will focus on some of the smaller news items that have slipped through the cracks, so to speak. It’s mostly a helping of smaller hardware announcements from big vendors like Corsair, NZXT, and SteelSeries, with a side of the usual industry news.

Be sure to stay tuned to our YouTube channel for Computex 2019 news.

This content piece started with Buildzoid’s suggestion that we install a custom VBIOS on our RX 570 for timing tuning tests. Our card proved temperamental with the custom VBIOS, so we ended up instead – for now – testing AMD’s built-in timing level options in the drivers. AMD’s GPU drivers have a drop-down featuring “automatic,” “timing level 1,” and “timing level 2” settings for Radeon cards, all of which lack any formal definition within the drivers. We ran an RX 570 and a Vega 56 card through most of our tests with these timing options, using dozens of test passes across the 3DMark suite (for each line item) to minimize the error margins and help narrow the range of statistically significant results. We also ran “real” gaming workloads in addition to these 3DMark passes.

Were we to step it up, the next goal would be to use third-party tools to manually tune the memory timings, whether GDDR5 or HBM2, or custom VBIOSes on cards that are more stable. For now, we’ll focus on AMD’s built-in options.
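As a rough illustration of what “minimizing error margins” looks like in practice -- using made-up placeholder scores, not our actual test data -- the idea is to average many passes per configuration and only call a difference real when the means are separated by more than the run-to-run noise:

```python
import math
import statistics

def mean_ci(scores, z=1.96):
    """Return (mean, half-width of an approximate 95% confidence interval)
    for a list of repeated benchmark scores."""
    mean = statistics.mean(scores)
    # Standard error of the mean from the sample standard deviation
    sem = statistics.stdev(scores) / math.sqrt(len(scores))
    return mean, z * sem

def significantly_different(a, b):
    """Crude check: do the ~95% intervals of two score sets overlap?
    If they don't, the settings plausibly differ beyond noise."""
    mean_a, err_a = mean_ci(a)
    mean_b, err_b = mean_ci(b)
    return abs(mean_a - mean_b) > (err_a + err_b)

# Hypothetical 3DMark graphics scores for two timing levels (not real data)
auto_scores = [11980, 12011, 11995, 12002, 11988, 12007]
level1_scores = [12110, 12142, 12125, 12131, 12118, 12138]

print(significantly_different(auto_scores, level1_scores))  # prints True
```

This overlap test is deliberately conservative and simple; a proper t-test would be the next step up, but the principle is the same one our repeated-pass methodology relies on.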

This round-up is packed with news, although our leading two stories are based on rumors. After talking about Navi's potential reference or engineering design PCB and Intel's alleged Comet Lake plans, we'll dive into Super Micro's move away from China-based manufacturing, a global downtrend in chip sales, Ryzen and Epyc sales growth, Amazon AWS expansion to use more AMD instances, and more.

Show notes are below the embedded video, as always.

One of our most popular videos of yore talks about the GTX 960 4GB vs. GTX 960 2GB cards and the value of choosing one over the other. The discussion continues today, but is more focused on 3GB vs. 6GB comparisons, or 4GB vs. 8GB comparisons. Now, looking back at 2015’s GTX 960, we’re revisiting with locked frequencies to compare memory capacities. The goal is to look at both framerate and image quality to determine how well the 2GB card has aged versus how well the 4GB card has aged.

A lot of things have changed for us since our 2015 GTX 960 comparison, so these results will obviously not be directly comparable to the originals. We’re using different graphics settings, different test methods, a different OS, and much different test hardware. We’ve also improved our testing accuracy significantly, and so it’s time to take all of this new hardware and knowledge and re-apply it to the GTX 960 2GB vs. 4GB debate, looking into whether there was really a “longevity” argument to be made.

NVIDIA’s GTX 1650 was sworn to secrecy, with drivers held for “unification” reasons up until the actual launch date. The GTX 1650 comes in variants ranging from 75W to 90W and above, meaning some options will run without a power connector while others focus on boosted clocks and a higher power target and require a 6-pin connector. GTX 1650s start at $150; this model costs $170, runs a higher power target, offers more overclocking headroom, and potentially better challenges some of NVIDIA’s past-generation products. We’ll see how far we can push the 1650 in today’s benchmarks, including overclock testing to look at maximum potential versus a GTX 1660. We’re using the official, unmodified GTX 1650 430.39 public driver from NVIDIA for this review.

We got our card two hours before product launch and got the drivers at launch, but noticed that NVIDIA tried to push drivers heavily through GeForce Experience. We pulled them standalone instead.

EA's Origin launcher has recently gained attention for hosting Apex Legends, one of the top current Battle Royale shooters, but is getting renewed focus for being an easy attack vector for malware. Fortunately, an update has already resolved this issue, and so the pertinent action would be to update Origin (especially if you haven't opened it in a while). Further news this week features the GTX 1650's rumored specs and price, due out allegedly on April 23. We also follow-up on Sony PlayStation 5 news, now officially confirmed to be working with a new AMD Ryzen APU and customized Navi GPU solution.

Show notes below the embedded video, for those preferring reading.

We’re still in China for our factory and lab tours, but we managed to coordinate with home base to get enough testing on the GTX 1660 done that a review became possible. Patrick ran the tests this time, then we just put the charts and script together from Dongguan, China.

This is a partner launch, so no NVIDIA direct sampling was done and, to our knowledge, no Founders Edition board will exist. Reference PCBs will exist, as always, but partners have control over most of the cooler design for this launch.

Our review will look at the EVGA GTX 1660 dual-fan model, which has an MSRP of $250 and lands $30 cheaper than the baseline GTX 1660 Ti pricing. The cheapest GTX 1660s will sell for about $220, but our $250 unit has a higher power target allowance for overclocking and a better cooler. The higher power target is the most interesting feature, as it lets overclocked performance stretch upward toward the GTX 1660 Ti at its $280 price-point.

We’ll get straight to the review today. Our focus will be on games, with some additional thermal and power tests toward the end. Again, as a reminder, we’re doing this remotely, so we don’t have as many non-gaming charts as usual, but we still have a complete review.

Our initial AMD Radeon VII liquid cooling mod changed after the coverage went live. We ended up switching to a Thermaltake Floe 360 radiator (with different fans) due to uneven contact and manufacturing defects in the Alphacool GPX coldplate. Going with the Asetek cooler worked much better, dropping our thermals significantly and allowing increased overclocking and stock boosting headroom. The new drivers (19.2.3) also fixed most of the overclocking defects we originally found, making it possible to actually progress with this mod.

As an important foreword, note that overclocking with AMD’s drivers must be validated with performance testing at every step of the way. Configured frequencies are not the same as actual frequencies, so you might type “2030MHz” for the core and get, for instance, 1950-2000MHz out. For this reason, and because frequency regularly misreports (e.g. “16000MHz”), it is critical that any overclock be validated with performance. Without validation, some “overclocks” can actually bring performance below stock while appearing to boost frequency. This is very important for overclocking Radeon VII properly.
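To make that concrete -- with made-up placeholder scores, not our actual Radeon VII numbers -- a validation check should trust measured benchmark scores over whatever frequency the driver reports:

```python
def overclock_validated(stock_scores, oc_scores, noise_pct=1.0):
    """Treat an overclock as real only if the mean benchmark score
    improves by more than the assumed run-to-run noise margin.
    Reported frequency alone is not trusted, since it can misreport
    or quietly run below the configured clock."""
    stock_mean = sum(stock_scores) / len(stock_scores)
    oc_mean = sum(oc_scores) / len(oc_scores)
    gain_pct = (oc_mean - stock_mean) / stock_mean * 100
    return gain_pct > noise_pct

# Hypothetical scores (not real data): this "overclock" actually
# regresses performance despite a higher configured frequency.
stock = [9100, 9120, 9110]
overclocked = [9010, 9030, 9025]
print(overclock_validated(stock, overclocked))  # prints False
```

The 1% noise threshold is an assumption for illustration; pick it from your own run-to-run variance on the benchmark you use.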
