AMD issued a preemptive response to nVidia's new GTX 1050 and GTX 1050 Ti by dropping the RX 460 MSRP to $100 and the RX 470 MSRP to $170. The price cuts are meant to battle the GTX 1050, a $110 MSRP card, and the GTX 1050 Ti, a $140-$170 card. These new Pascal-family devices are targeted squarely at the 1080p crowd, whereas the GTX 1060 and up were capable performers for most 1440p gaming scenarios. AMD has held the sub-$200 market since the summer launches of its RX 480 4GB, RX 470, and RX 460, and is just now seeing its competition's gaze shift down from the high-end.
Today, we've got thermal, power, and overclocking benchmarks for the GTX 1050 and GTX 1050 Ti cards. Our FPS benchmarks look at the GTX 1050 OC and GTX 1050 Ti Gaming X cards versus the RX 460, RX 470, GTX 950, 750 Ti, and 1060 devices. Some of our charts include higher-end devices as well, though you'd be better off looking at our GTX 1060 or RX 480 content for more on that. Here's a list of recent and relevant articles:
Part 1 of our interview with AMD's RTG SVP & Chief Architect went live earlier this week, where Raja Koduri talked about shader intrinsic functions that eliminate abstraction layers between hardware and software. In this second and final part of our discussion, we continue on the subject of hardware advancements and limitations of Moore's law, the burden on software to optimize performance to meet hardware capabilities, and GPUOpen.
The conversation started with GPUOpen and new, low-level APIs – DirectX 12 and Vulkan, mainly – which were a key point of discussion during our recent Battlefield 1 benchmark. Koduri emphasized that these low-overhead APIs kick-started an internal effort to open the black box that is the GPU, and begin the process of removing “black magic” (read: abstraction layers) from the game-to-GPU pipeline. The effort was spearheaded by Mantle, now subsumed by Vulkan, and has continued through GPUOpen.
AMD sent us an email today that indicated a price reduction for the new-ish RX 460 2GB card and RX 470 4GB card, which we've reviewed here (RX 460) and here (RX 470). The company's price reduction comes in the face of the GTX 1050 and GTX 1050 Ti release, scheduled for October 25 for the 1050 Ti, and 2-3 weeks later for the GTX 1050. Our reviews will be live next week.
We've already looked extensively at the GTX 1060 3GB vs. GTX 1060 6GB buying options and covered the RX 480 4GB vs. 8GB options, but we haven't yet tested the 3GB and 4GB SKUs head-to-head. In this content, we're using the latest drivers to specifically benchmark the GTX 1060 3GB versus the RX 480 4GB to determine which card has the best framerate for the price.
Each of the lower VRAM spec SKUs has a few other tweaks in addition to its memory capacity reduction. The GTX 1060 3GB, for instance, also eliminates one of its SMs. In turn, that kills 128 CUDA cores and 8 TMUs, dragging the 1060 down from 1280 cores / 80 TMUs to 1152 cores / 72 TMUs on the GTX 1060 3GB model. AMD's RX 480 4GB card, meanwhile, has a lower minimum specification for memory to assist in cost management. The RX 480 4GB has a minimum memory speed of ~1750MHz (or ~7Gbps effective), whereas the RX 480 8GB model runs 2000MHz (8Gbps effective).
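The practical impact of that memory speed reduction is theoretical bandwidth. As a rough sketch (assuming the 256-bit GDDR5 bus that both RX 480 SKUs share), bandwidth works out to the effective per-pin data rate multiplied by bus width:

```python
# Theoretical memory bandwidth = effective data rate (Gbps/pin) * bus width (bits) / 8.
# Assumes the 256-bit GDDR5 bus shared by both RX 480 SKUs; real-world throughput
# will be lower than this peak figure.
BUS_WIDTH_BITS = 256

def bandwidth_gb_s(effective_rate_gbps, bus_width_bits=BUS_WIDTH_BITS):
    """Return peak theoretical bandwidth in GB/s."""
    return effective_rate_gbps * bus_width_bits / 8

rx480_4gb = bandwidth_gb_s(7.0)  # ~1750MHz actual, ~7Gbps effective
rx480_8gb = bandwidth_gb_s(8.0)  # 2000MHz actual, 8Gbps effective
print(rx480_4gb, rx480_8gb)      # 224.0 GB/s vs. 256.0 GB/s
```

At minimum spec, then, the 4GB card gives up roughly 12.5% of the 8GB model's peak memory bandwidth, on top of its capacity reduction.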
Abstraction layers that sit between the game code and hardware create transactional overhead that worsens software performance on CPUs and GPUs. This has been a major discussion point as DirectX 12 and Vulkan have rolled out to the market, particularly with DOOM's successful implementation. Long-standing API incumbent Dx 11 sits unmoving between the game engine and the hardware, preventing developers from leveraging specific system resources to efficiently execute game functions or rendering.
By contrast, it is possible, for example, to optimize tessellation performance by making explicit changes to how its execution is handled on Pascal, Polaris, Maxwell, or Hawaii architectures. A developer could accelerate performance by directly commanding the GPU to execute code on a reserved set of compute units, or could leverage asynchronous shaders to process render tasks without getting “stuck” behind other instructions in the pipeline. This can't be done with higher-level APIs like Dx 11, but DirectX 12 and Vulkan both allow this lower-level hardware access; you may have seen this referred to as “direct to metal,” or “programming to the metal.” These phrases reference that explicit hardware access, and have historically been used to describe what Xbox and PlayStation consoles enable for developers. It wasn't until recently that this level of support came to PC.
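The benefit of asynchronous shaders is easiest to see as a scheduling problem. Below is a purely illustrative sketch (not GPU code, and not any vendor's API): two stand-in workloads, a "graphics pass" and a "compute pass," run back-to-back as a single-queue API would serialize them, then concurrently as a low-level API's independent queues would allow. The sleep durations are arbitrary stand-ins for work:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Illustrative model only: these functions stand in for GPU work that a
# low-level API (DX12/Vulkan) can submit to independent hardware queues.
def graphics_pass():
    time.sleep(0.05)  # arbitrary stand-in for render work

def compute_pass():
    time.sleep(0.05)  # arbitrary stand-in for compute work

# Serial submission: compute waits behind graphics, as with a single queue.
start = time.perf_counter()
graphics_pass()
compute_pass()
serial = time.perf_counter() - start

# Asynchronous submission: both workloads in flight at once.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(graphics_pass), pool.submit(compute_pass)]
    for f in futures:
        f.result()
overlapped = time.perf_counter() - start

print(f"serial: {serial:.3f}s, overlapped: {overlapped:.3f}s")
```

When the two workloads stress different execution resources, the overlapped submission finishes in roughly the time of the longer task rather than the sum of both, which is the win async compute chases on real hardware.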
In our recent return trip to California (see also: Corsair validation lab tour), we visited AMD's offices to discuss shader intrinsic functions and performance acceleration on GPUs by leveraging low-level APIs.
This week's news announcements include AMD AM4 Zen chipset naming (rumors, technically), NZXT's new RGB LED 'Aer' fans, and a pair of cases from Rosewill and Cooler Master.
AMD's initial AM4 chipset announcement was made at PAX, where the B350, A320, and X/B/A300 chipsets were announced for mainstream and low-end Gen 7 APUs. The high-end Zen chipset for Summit Ridge was concealed during this announcement, but is now known to be the X370 platform.
X370 will ship alongside the Summit Ridge CPUs and will add to the lanes available for high-speed IO devices, mostly SATA and new-generation USB. Most of the IO with the Zen architecture will be piped through the CPU itself, with what remains of the chipset acting more as an IO controller than a full chipset.
The AMD Gen 7 APUs and AM4 platform have officially begun shipment in some OEM systems this weekend, primarily at physical retail locations. AMD's launch includes entry-level and mainstream AM4 chipsets, with the high-end Zen chipset (990FX equivalent) promised at a later date. AM4 platform shipment begins with the B350, A320, and X/B/A300 chipsets, accompanying the A12-9800 and down.
Let's run through the finalized Gen 7 APU specs first, then talk AM4 chipset specs. Note that the new AM4 motherboards are making major moves to unify the FM and AM platforms under AMD's banner, so Zen's FX-line equivalent and the Gen 7 APUs will both function on the same motherboard. The below table (following the embedded video) provides the specs for the A12-9800, X4 950, and other relevant chips:
We've been working with AMD since our RX 480 & RX 470 reviews to troubleshoot some driver-related screen hangs. In our July notice, we strongly encouraged readers with RX 400-series cards to avoid the 16.7.3 drivers, following instability and file corruption on one of our test benches. Since then, we've been in correspondence with AMD on the issue, and finally have some good news for folks who've encountered the green / black / blue / yellow screen hangs with AMD drivers.
Despite AMD’s FreeSync arriving later than nVidia’s G-Sync, FreeSync has seen fairly widespread adoption, especially among gaming monitors. The latest monitor – and the 101st – to officially support FreeSync is Lenovo’s Y27f. This also marks the announcement of Lenovo’s first FreeSync monitor.
While Intel's Developer Forum is underway in San Francisco, not far from AMD in Sunnyvale, the x64 creators held a press conference to demonstrate Zen CPU performance. Based strictly on the presentation, AMD shows a 40% IPC (instructions per clock) improvement over Vishera. The demonstration used a 16T processor, the “Summit Ridge” chip that's been discussed a few times, which runs 8 cores with simultaneous multi-threading (SMT) for 16 total threads. For the non-gaming market, CPU codename “Naples” was also present, a 32C/64T Zen server processor in a dual-CPU Windows server.
AMD detailed more of the Zen architecture in an official capacity, commenting on new caching routines and branch prediction, accompanied by the SMT changes that shift AMD away from its modular Bulldozer architecture. AMD made mention of “fanless 2-in-1s” in addition to high-performance CPUs and embedded systems.