Steve Burke

Steve started GamersNexus back when it was just a cool name, and now it's grown into an expansive website with an overwhelming number of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar -- sigh. It made sense to just leave it as 'Gamers.'"

First world problems, Steve. First world problems.

The goal for today is to trick an nVidia GPU into drawing more power than its Boost 3.0 power budget would otherwise allow. The theoretical result is that more power will provide greater clock stability; we won't necessarily get better overclocks or bigger offsets, but we should stabilize and flatline the frequency, which improves performance overall. Typically, the Boost clock bounces around based on load and on the power or voltage budget. We have already eliminated the thermal checkpoint with our Hybrid mod, and must now eliminate the power budget checkpoint.

This content piece is relatively agnostic across nVidia devices. Although we are using an nVidia Titan V graphics card, priced at $3000, the same practice of shunt resistor shorting can be applied to a 1080 Ti, 1070, 1070 Ti, or other nVidia GPUs.

“Shunts” are in-line resistors sitting in the power path, which ultimately runs from the PCIe connectors or the PCIe slot. In this case, we care about the in-line shunt resistors for the PCIe cables. The GPU knows the input voltage (12V, as the shunt is in-line with the power connectors), and the GPU also knows the shunt's resistance (5mOhm). By measuring the voltage drop across the shunt, the GPU can figure out how much current is being pulled, and then adjust to match power limitations accordingly. The shunt itself is not a limiter or a “dam,” but a measuring stick used to determine how much current is being used to drive the card.
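To make the arithmetic concrete, here's a minimal sketch of how a card can infer current and power from that shunt, and why shorting (paralleling) the shunt makes the card under-report its draw. The 5mOhm and 12V figures come from above; the voltage drops, currents, and the parallel resistance below are illustrative numbers only, not measurements from our card.

```python
# Ohm's law sketch: the card never measures current directly -- it measures
# the small voltage drop across a known shunt resistance and derives the rest.

SHUNT_OHMS = 0.005   # 5 mOhm in-line shunt (from the article)
RAIL_VOLTS = 12.0    # PCIe power connectors supply 12V

def reported_power(v_drop):
    """Power the card *thinks* it is drawing, given the measured drop (V)."""
    current = v_drop / SHUNT_OHMS    # I = V / R
    return RAIL_VOLTS * current      # P = V * I

# Illustrative numbers: a 0.1V drop across 5 mOhm reads as 20A -> 240W.
print(reported_power(0.1))                  # 240.0

# Shunt "shorting" typically parallels the stock shunt with another low
# resistance, lowering the effective value. Two 5 mOhm paths in parallel:
r_eff = (0.005 * 0.005) / (0.005 + 0.005)   # = 2.5 mOhm effective
real_current = 20.0                         # same 20A actually flowing
v_drop_after = real_current * r_eff         # 0.05V instead of 0.1V
print(reported_power(v_drop_after))         # 120.0 -- half the real 240W
```

Because Boost compares that under-reported figure against the power budget, the card keeps boosting past the point where it would otherwise throttle.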

This content piece is video-centric, but we have a full-length feature article coming tomorrow, focused on shunt shorting -- something we've spent the past few days playing around with. For today, however, we point you toward our render rig's GPU diagnostics, where we pull a Maxwell Titan from the machine, try to determine why it's overheating, and show some CLC / AIO permeation testing in the process. Rather than weigh the loops, which makes no sense (given the different manufacturing tolerances for the radiators and pumps), we emptied two loops -- one new and one old -- to see if the older unit's liquid had permeated the tubes. If it had, we'd measure less liquid in the older loop, showing that a year of heavy wear had caused the permeation. You can find out what happened in the video below.

The short of it is that, between the two loops, we saw no meaningful permeation -- we also noted that the pump impellers were still spinning, and that the thermal paste seemed fine. Our next steps will be to remount the CLC and test again.

Fortunately, this newest GTX 1060 isn't prepped for mass market or DIY consumer adoption -- we've got enough confusing naming as is. The GTX 1060 presently exists in 3GB and 6GB AICs, with the former also containing one fewer SM (a 10% core reduction). There is also the lesser-known 1060 6GB card with boosted 9Gbps memory speeds, part of a refresh effort by nVidia and its partners earlier this year. According to Chinese-language website Expreview, a new GTX 1060 5GB card is allegedly planned for release in Asian markets, primarily targeting internet cafes and PC bangs. We have not independently verified the story at this time.

From what the story indicates, it seems as if this particular GTX 1060 model will carry the original 1280 CUDA cores (as opposed to the 1152 FP32 lanes on the 1060 3GB), with the primary differences being a 1GB reduction in capacity and a narrower 160-bit memory interface.
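For the curious, the 5GB/160-bit pairing is internally consistent. As a hedged sketch -- assuming the usual one 1GB GDDR5 package per 32-bit controller, as on the existing 6GB/192-bit card -- disabling a single controller yields both figures at once:

```python
# Capacity and bus width scale together: each 1GB GDDR5 package sits on its
# own 32-bit controller (as on the 6GB/192-bit card), so dropping one
# controller removes 32 bits of bus and 1GB of capacity at the same time.

BITS_PER_CONTROLLER = 32
GB_PER_PACKAGE = 1

def memory_config(controllers):
    return controllers * BITS_PER_CONTROLLER, controllers * GB_PER_PACKAGE

print(memory_config(6))   # (192, 6) -- standard GTX 1060 6GB
print(memory_config(5))   # (160, 5) -- the rumored cafe-market variant
```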

This episode of Ask GN, shipping on Christmas Day, answers a few pertinent questions from the last few weeks: we talk about whether we made ROI on the Titan V, whether it makes more sense to buy Ryzen now or wait for Ryzen+/Ryzen2, and then dive into the "minor" topics for the segment. Smaller topics include discussion of choosing games for benchmarking -- primarily, why we don't like ROTTR -- and our thoughts on warranty/support reviews, with some reinforced information on vertical GPU mounting. The conclusion focuses on an ancient video card and some GN modmat information.

The embedded video below contains the episode. Timestamps are below that.

The monstrosity shown in our latest GN livestream is what ranked us among the top 10 on several 3DMark world benchmarks. It was a mixture of things, primarily the benefit of having high-end hardware (read: buying your way to the top), but also compensating for limited CPU OC skills with a liquid cooling mod. Our Titan V held high clocks longer than it had any business doing, and that was because of our Titan V Hybrid Mod.

It all comes down to Boost 3.0, as usual, and even AMD’s Vega now behaves similarly. The cards look at their thermal situation, with nVidia targeting 83-84C as a limiter, and adjust clocks according to thermal headroom. This is also why there’s no hard guarantee on clock speed: the card functionally “overclocks” (or “downclocks,” depending on perspective) itself to match its thermal budget. If we haven’t exceeded the thermal budget – achievable primarily with AIB partner coolers or with a liquid mod – then we have new budgets to abide by, primarily power and voltage.
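As a toy illustration only -- nVidia does not publish the Boost 3.0 algorithm, and every threshold here except the ~84C target named above is a placeholder -- the behavior reduces to a loop that steps clocks up until one of the budgets bites:

```python
# Toy model of Boost-style clock management -- NOT nVidia's actual algorithm.
# The card nudges its clock up while every budget has headroom, and backs
# off as soon as any limiter trips.

THERMAL_LIMIT_C = 84    # the ~83-84C target named in the article
POWER_LIMIT_W = 250     # placeholder power budget
CLOCK_STEP_MHZ = 13     # Boost moves in small bins; step size is a placeholder

def next_clock(clock_mhz, temp_c, power_w):
    if temp_c >= THERMAL_LIMIT_C or power_w >= POWER_LIMIT_W:
        return clock_mhz - CLOCK_STEP_MHZ   # a budget tripped: downclock
    return clock_mhz + CLOCK_STEP_MHZ       # headroom remains: boost higher

# A Hybrid mod keeps temp_c low, so the thermal branch never fires;
# a shunt mod makes power_w read low, so the power branch never fires.
print(next_clock(1800, 60, 200))   # 1813 -- still boosting
print(next_clock(1813, 85, 200))   # 1800 -- thermal limiter bites
```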

(Image: Fire Strike leaderboard result, no. 6)

We can begin solving for the former with shunt mods -- something we've already done, and for which we'll soon publish data -- but we can't do much more than that. These cards are fairly locked down, including the BIOS, and we're going to be limited to whatever hard mods we can pull off.

