Logitech G213 Prodigy Membrane Keyboard Review

Published October 27, 2016 at 4:46 pm

Whenever we get a new keyboard to review, we make a point to put away the keyboards we use every day. It’s easy to gravitate toward what’s familiar, so the usual boards have to be set aside for the review. Oftentimes that’s easy enough, since we’ve worked with a number of good releases lately, but sometimes it’s not so trivial.

Frankly, we expected the latter situation when unboxing the Logitech G213 Prodigy ($70). It’s a rubber dome keyboard, and those don’t get quite the fanfare that mechanical boards do. Setting the keyboard up revealed RGB lighting, fully functional media keys, and a tuned force profile on the switches. The G213 also positions itself at a $70 “budget” price-point for an RGB board, but we’ll talk more about that later.

The Titan X Hybrid mod we hand-crafted for a viewer allowed the card to stretch its boost an additional ~200MHz beyond the spec. This was done for Sam, the owner who loaned us the Titan XP, and was completed back in August. We also ran benchmarks before tearing the card down, albeit on drivers from mid-August, and never did publish a review of the card.

This content revisits the Titan XP for a review from a gaming standpoint. We'd generally recommend such a device for production workloads or CUDA-accelerated render/3D work, but that doesn't change the fact that the card is marketed as a top-of-the-line gaming device with GeForce branding. From that perspective, we're reviewing the GTX Titan X (Pascal) for its gaming performance versus the GTX 1080, hopefully providing a better understanding of value at each price-point. The Titan X (Pascal) card is priced at $1200 from nVidia directly.

Review content will focus on thermal, FPS, and overclocking performance of the GTX Titan X (Pascal) GP102 GPU. If you're curious to learn more about the card, our previous Titan XP Hybrid coverage can be found here:

AMD issued a preemptive response to nVidia's new GTX 1050 and GTX 1050 Ti, and they did it by dropping the RX 460 MSRP to $100 and the RX 470 MSRP to $170. The price reduction is meant to battle the GTX 1050, a $110 MSRP card, and the GTX 1050 Ti, a $140-$170 card. These new Pascal-family devices are targeted most appropriately at the 1080p crowd, whereas the GTX 1060 and up are capable performers for most 1440p gaming scenarios. AMD has held the sub-$200 market since the launch of its RX 480 4GB, RX 470, and RX 460 through the summer months, and is just now seeing its competition's gaze shift from the high-end.

Today, we've got thermal, power, and overclocking benchmarks for the GTX 1050 and GTX 1050 Ti cards. Our FPS benchmarks look at the GTX 1050 OC and GTX 1050 Ti Gaming X cards versus the RX 460, RX 470, GTX 950, 750 Ti, and 1060 devices. Some of our charts include higher-end devices as well, though you'd be better off looking at our GTX 1060 or RX 480 content for more on that. Here's a list of recent and relevant articles:

This episode of Ask GN focuses on some more technical topics – ones which we were happy to address in more episodic fashion. The first question asked us to address why a framerate in excess of the refresh rate appears to allow smoother gameplay. It's a good question, too; thinking only about a 60Hz target, it'd seem like the excess doesn't actually add anything, but there's a lot more to it than that.
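As a rough illustration of the sampling argument, here is a toy model of our own (not something from the episode): at each 60Hz scan-out, it asks how stale the most recently completed frame is. It assumes perfectly even frame pacing and ignores vsync and tearing entirely, so treat it as a sketch rather than a claim about any real renderer.

```python
from fractions import Fraction

REFRESH_HZ = 60  # hypothetical 60Hz display

def average_frame_age_ms(framerate_hz, refreshes=10000):
    """Average age of the newest completed frame at each scan-out,
    assuming perfectly even frame pacing (an idealization)."""
    refresh_interval = Fraction(1, REFRESH_HZ)
    frame_interval = Fraction(1, framerate_hz)
    total_age = Fraction(0)
    for i in range(1, refreshes + 1):
        t = i * refresh_interval            # time of this scan-out
        frames_done = t // frame_interval   # frames fully rendered by now
        total_age += t - frames_done * frame_interval
    return float(total_age) / refreshes * 1000

for fps in (60, 90, 144, 300):
    print(f"{fps:>3} fps -> avg frame age at scan-out: {average_frame_age_ms(fps):.2f} ms")
```

In this idealized model, the frame available at each scan-out gets fresher as the framerate climbs past the refresh rate, which is part of (though not all of) what the episode gets into.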

This topic leads to discussion on monitor overclocking and VRAM measurements and consumption. We've previously covered monitor overclocking, for the curious, and we've also previously talked about how GPU-Z doesn't accurately measure VRAM consumption. It's more a measurement of VRAM requested by the game, and because some games will just ask for all the memory, it's hard to know how much is actually utilized (rather than reserved). We talk about that more in the content below:
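In the meantime, the driver-side number that most tools surface can be queried directly. Below is a minimal sketch using NVIDIA's NVML bindings (pynvml, assumed to be installed); as discussed above, this reports memory the driver has allocated on the device, not how much a game is actually utilizing.

```python
# Minimal sketch using NVIDIA's NVML bindings (pip install pynvml).
# The "used" figure is device-wide allocation reported by the driver, which is
# closer to "VRAM requested/reserved" than to memory a game actively touches.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"Used: {mem.used / 1024**2:.0f} MiB of {mem.total / 1024**2:.0f} MiB")
pynvml.nvmlShutdown()
```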

It’s a great weekend for buying graphics cards, as AMD is dropping prices in preparation for the release of the GTX 1050. Today is the day those price drops take effect, although third-party manufacturers may drag their feet a bit. Some AIB partners are already offering discounts on the RX 460 and RX 470 cards, especially Sapphire.

SAPPHIRE Radeon RX 460 2GB ($100): MSRP for the RX 460 is now $100, and Sapphire Tech has wasted no time dropping their card’s price to match. This sale only lasts a few more hours, but it’s a safe bet that the price will become permanent in the near future, making this a solid buy for budget builds. We reviewed the RX 460 here.

Part 1 of our interview with AMD's RTG SVP & Chief Architect went live earlier this week, where Raja Koduri talked about shader intrinsic functions that eliminate abstraction layers between hardware and software. In this second and final part of our discussion, we continue on the subject of hardware advancements and limitations of Moore's law, the burden on software to optimize performance to meet hardware capabilities, and GPUOpen.

The conversation started with GPUOpen and new, low-level APIs – DirectX 12 and Vulkan, mainly – which were a key point of discussion during our recent Battlefield 1 benchmark. Koduri emphasized that these low-overhead APIs kick-started an internal effort to open the black box that is the GPU, and begin the process of removing “black magic” (read: abstraction layers) from the game-to-GPU pipeline. The effort was spearheaded by Mantle, now subsumed by Vulkan, and has continued through GPUOpen.

The goal of this content is to show that the choice between HBAO and SSAO has a negligible impact on Battlefield 1 performance. This benchmark arose following our Battlefield 1 GPU performance analysis, which demonstrated consistent frametimes and frame delivery on both AMD and nVidia devices when using DirectX 11. Two of our YouTube commenters asked if HBAO would create a performance swing that would favor nVidia over AMD and, although we've discussed this topic with several games in the past, we decided to revisit it for Battlefield 1. This time, we'll also spend a bit of time defining what ambient occlusion actually is, explaining how screen-space occlusion relies on information strictly within the z-buffer, and then looking at the performance cost of HBAO in BF1.

We'd also recommend our previous graphics technology deep-dive for folks who want a more technical explanation of what's going on in various AO technologies. Portions of this new article are drawn from that deep-dive.
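To make the “z-buffer only” point concrete before the benchmarks, here is a heavily simplified, hypothetical screen-space AO pass in Python/NumPy. It is not what BF1's SSAO or HBAO actually does; it only illustrates that occlusion can be estimated purely by comparing each pixel's depth against nearby depth samples, with no scene geometry involved.

```python
import numpy as np

def ssao_from_depth(depth, radius=4, samples=16, bias=0.002, seed=0):
    """Toy screen-space AO: darken pixels whose neighborhood in the z-buffer
    contains many samples closer to the camera (smaller depth) than the pixel."""
    rng = np.random.default_rng(seed)
    occlusion = np.zeros_like(depth)
    # Random pixel-space offsets within a square of +/- radius pixels.
    offsets = rng.integers(-radius, radius + 1, size=(samples, 2))
    for dy, dx in offsets:
        neighbor = np.roll(depth, (int(dy), int(dx)), axis=(0, 1))
        # A neighbor noticeably closer to the camera counts as an occluder.
        occlusion += (neighbor < depth - bias).astype(depth.dtype)
    return 1.0 - occlusion / samples   # 1 = fully open, 0 = fully occluded

# Example: a flat plane (depth 1.0) with a raised block (depth 0.5);
# plane pixels bordering the block pick up occlusion, like AO shading a crease.
depth = np.ones((64, 64), dtype=np.float32)
depth[24:40, 24:40] = 0.5
ao = ssao_from_depth(depth)
```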

AMD sent us an email today that indicated a price reduction for the new-ish RX 460 2GB card and RX 470 4GB card, which we've reviewed here (RX 460) and here (RX 470). The company's price reduction comes in the face of the GTX 1050 and GTX 1050 Ti release, scheduled for October 25 for the 1050 Ti, and 2-3 weeks later for the GTX 1050. Our reviews will be live next week.

Battlefield 1 marks the arrival of another title with DirectX 12 support – sort of. The game still supports DirectX 11, and thus Windows 7 and 8, but makes efforts to shift DICE and EA toward the new world of low-level APIs. This move comes at a bit of a cost, though; our testing of Battlefield 1 has uncovered some frametime variance issues on both nVidia and AMD devices, resolvable by reverting to DirectX 11. We'll explore that in this content.

In today's Battlefield 1 benchmark, we're strictly looking at GPU performance using DirectX 12 and DirectX 11, including the recent RX 400 series, GTX 10 series, GTX 9 series, and RX 300 series GPUs. Video cards tested include the RX 480, RX 470, RX 460, 390X, and Fury X from AMD and the GTX 1080, 1070, 1060, 970, and 960 from nVidia. We've got a couple others in there, too. We may separately look at CPU performance, but not today.

This BF1 benchmark bears with it extensive testing methodology, as always, and that's been fully detailed within the methodology section below. Please be sure that you check this section for any questions as to drivers, test tools, measurement methodology, or GPU choices. Note also that, as with all Origin titles, we were limited to five device changes per game code per day (24 hours). We've got three codes, so that allowed us up to 15 total device tests within our test period.

Nintendo announced the “Switch” this morning, its next-generation half-portable, half-docked console. To reduce confusion: the Switch was previously referred to as the Nintendo “NX.” It is the same device.

Nintendo's new Switch is built in partnership with nVidia and leverages the Pascal architecture found in current-generation GTX 10-series GPUs. At least, that's what's suggested by this text from nVidia's blog: "[...] NVIDIA GPU based on the same architecture as the world’s top-performing GeForce gaming graphics cards." Tegra SoCs include ARM processors alongside the nVidia graphics solution, and also host all of the I/O lanes and memory interfaces. This is a complete system, as indicated by “system on chip.” We've asked nVidia for details on which ARM devices are used and which memory will be supported, but were told that the company is not revealing further details on Nintendo's product. We are awaiting comment from Nintendo for more information.

We do know that the Tegra SoC accelerates gameplay and provides hardware acceleration for video playback, and that nVidia and Nintendo have deployed “custom software for audio effects and rendering.” We can confidently speculate that the Switch does not function as previous Shield devices have (read: it is not streaming to the handheld from a dock), mostly because the Switch is large enough to contain all necessary render hardware within its handheld state. The Switch is also shown in the advert to be playable on planes, which most certainly do not have fast enough internet to support up/down game streaming. This is processing and rendering locally.
