TechRAID3: Maxwell 750 Ti, 16-Core AMD CPU, G-Sync vs. FreeSync

By Steve Burke, published January 20, 2014 at 12:32 pm

In our third episode of TechRAID -- our video series dedicated to rounding up and explaining the week's news stories -- we turn to coverage of video hardware, power supplies, and a new CPU. This week's topics include 80 Plus Titanium, nVidia's rumored February 18th release date for the Maxwell-based 750 Ti, a new 16-core AMD CPU that could turn into an FX processor, and G-Sync vs. FreeSync technologies in the display market.

We just got back from CES, so the news was overwhelming and more plentiful than we had time to cover in the video; be sure to check out our other CES coverage (and video coverage) for yet more hardware knowledge!

Here are the related links for this episode:

  • Ecova officiates 80 Plus Titanium [Plugloadsystems].
  • New 16-core AMD CPU detailed [GamersNexus].
  • Rumored nVidia Maxwell Launch [sweclockers].
  • NVidia G-Sync Explanation [GamersNexus].

    That concludes this episode. As always, a rough transcript of the video (what I read while producing it) is available below. Please excuse any typographical errors, as it is not proofread prior to rapid-firing the words out of my mouth.

    - Steve "Lelldorianx" Burke.

    TechRAID3 Transcript 

    Hey everyone, this is Steve from GamersNexus.net and we are back with our third episode of TechRAID, our video round-up series of all the latest hardware news. Topics for this episode include NVIDIA's impending Maxwell launch, a new 16-core AMD CPU, AMD's FreeSync and nVidia's G-Sync, 80 Plus Titanium, and a CES wrap-up.

    Let's start simple. This week saw the approval and implementation of a new 80 Plus Certification level by Ecova, the group overseeing the efficiency certification standard. 80 Plus Titanium has been around since about 2012, when Dell worked with Delta Electronics to produce the first Titanium-class server power supply; we saw Titanium-efficiency PSUs for the consumer market being developed in mid-2013, but the standard only officially got approved a few days ago. 80 Plus Titanium requires 90% efficiency at just 10% load, which is pretty damn remarkable for a power supply to achieve, and 94% efficiency at 50% load, versus 80 Plus Platinum's 92% efficiency at 50% load.
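
Those percentages translate directly into wall draw: AC input equals DC output divided by efficiency. Here's a quick sketch of the arithmetic, using the tier figures above (the 500 W load is just an example, not from the spec):

```python
def wall_draw(dc_load_watts, efficiency):
    """AC watts pulled from the outlet to deliver a given DC load."""
    return dc_load_watts / efficiency

# Example: a 500 W DC load, i.e. 50% load on a 1000 W unit.
titanium = wall_draw(500, 0.94)  # Titanium at 50% load
platinum = wall_draw(500, 0.92)  # Platinum at 50% load
print(f"Titanium: {titanium:.1f} W at the wall")   # ~531.9 W
print(f"Platinum: {platinum:.1f} W at the wall")   # ~543.5 W
print(f"Difference (wasted as heat): {platinum - titanium:.1f} W")  # ~11.6 W
```

A two-percent efficiency bump sounds small, but on a box that runs 24/7 those watts add up, which is why Titanium targets servers first.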

    If you're in the market for a system with an extended uptime or a server box, Titanium is worth looking into.

    Moving on to CPUs, AMD's got some news to share regarding a new server processor; the reason I'm mentioning it here is that a lot of server technology finds its way down the supply chain to consumers, so it could be relevant to the future of enthusiast-class CPUs, should AMD continue to make them. The new sixteen-core Steamroller CPU isn't yet branded with marketing labels, so it's operating under the name "Family 15h Model 30h-3Fh," with multiple CPU ranges in the same family -- named 30h-4Fh, 20h, and so on. The 30h-3Fh model is what we're interested in.

    Existing AMD Opteron server CPUs claim to offer sixteen cores, but they do so by using two CPUs on the same substrate. With the new Steamroller chip, we'll see a single silicon die housing a single CPU, which contains eight dual-core modules, for a total of sixteen cores. For those shaking their heads at the core count, keep in mind that server implementations do tend to actually utilize all the cores -- especially when taking virtualization and blades into account. The 30h-3Fh CPU integrates the PCI-e 3.0 controller into the CPU itself, furthering the movement of motherboard components onto the die; this reduces latency between the CPU and the controller by shortening the distance of travel and eliminating intermediate buses.

    Further, the CPU has five I/O links: one for PCI-e 3.0, two for HyperTransport, and two more that can serve as either PCI-e 3.0 or coherent HyperTransport. The "coherent" prefix is indicative of symmetric multiprocessing, which is pretty standard in nearly every modern CPU. The significant implication here is that the GPU component -- if there even is one -- is not really being emphasized. AMD's development materials are targeted toward the CPU component of the new processor, so I could see some of this tech eventually turning into an FX product or something similar. We likely won't see it for at least a year, but probably two. The CPU is not in fab yet, which means it could be fabricated on a 20nm-class process.

    Moving into video hardware, we turn now to news of nVidia's Maxwell hardware, as posted by the SweClockers website. SweClockers claims that a source close to video card manufacturers has informed them of an impending Maxwell-based 750 Ti GPU slated for February 18th. I don't have any way to validate whether this will happen, but let's suppose it will for the sake of argument.

    Maxwell is the next step after Kepler and aims to make great strides in memory efficiency. By building on CUDA's unified memory addressing -- which allows programmers to eliminate explicit copy operations between GPU RAM and system RAM, among other things -- Maxwell is able to further unify memory access. The significance is greater efficiency and reduced overhead, something both AMD and nVidia have been working to improve, meaning the cards will be able to devote more of their power to the functions you care about. Similar strides are being taken with AMD technologies, like the Mantle API. Our hardware greatly outpaces what most software demands, so we're in an optimization stage right now.

    A 750 Ti is an interesting choice, given that most architecture launches begin with a flagship card, but this is still speculation. The 750 Ti would supersede the existing 650 Ti Boost, likely landing it in the $150 to $180 range, depending on memory capacity.

    Finally, we look at the new display synchronization technology. We recently interviewed nVidia at CES -- link below -- to discuss how G-Sync actually works. The very same day, AMD announced its FreeSync technology -- a somewhat humorous slam against nVidia's costly technology. Let's quickly discuss the similarities and differences:

    First of all, both technologies aim to do the same thing: eliminate the tearing and/or stuttering introduced by asynchronous frame delivery between the video card and the display's native refresh rate. With a fixed refresh rate of 60 Hz or 120 Hz, your display expects a new frame roughly every 16.7ms or 8.3ms. The GPU renders the frame, sends it to the display, and the display presents it to the user. The problem is that when the GPU misses its window on a more complex scene, we get either tearing or stuttering, depending on whether V-Sync is enabled. With V-Sync off, the display outputs frames as they arrive, meaning we can end up with tearing -- parts of multiple frames being drawn on-screen simultaneously. This means you'll see, as the name suggests, a tear between your objects. They won't line up. It's not something you think about much until you see the smoothness of AMD and nVidia's new technologies.
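
Those frame windows come straight from the refresh rate: a display refreshing at f Hz expects a new frame every 1000/f milliseconds. A minimal sketch:

```python
def frame_window_ms(refresh_hz):
    """Per-frame time budget for a fixed-refresh display."""
    return 1000.0 / refresh_hz

for hz in (60, 120, 144):
    print(f"{hz} Hz -> {frame_window_ms(hz):.2f} ms per frame")
# 60 Hz -> 16.67 ms; 120 Hz -> 8.33 ms; 144 Hz -> 6.94 ms
```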

    With V-Sync on, we trade tearing for stutter. This is generally inadvisable for most gamers -- especially competitive gamers -- because we can mentally compensate for tearing better than we can for a missed frame, which is missed data. A stutter occurs when the GPU misses its render window, so the monitor displays the previous frame a second time. This is generally far more damaging to competitive gameplay than tearing, for largely obvious reasons.

    G-Sync and FreeSync make the monitor slave to the video card, so the refresh rate becomes variable, updating as the video card dispatches new frames. The result is far smoother gameplay. FreeSync does this entirely in software, using a variable VBLANK interval supported by some monitors, while G-Sync does it with a hardware module that lives in the display.
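
Here's a toy model of the difference (not how either vendor actually implements it, and it ignores the min/max refresh range real panels enforce): with fixed refresh plus V-Sync, a frame that misses its window is held a full extra tick; with variable refresh, the display simply presents each frame when the GPU finishes it.

```python
FIXED_WINDOW_MS = 1000.0 / 60  # one refresh tick on a 60 Hz panel (~16.67 ms)

def vsync_present_intervals(render_times_ms):
    """Fixed 60 Hz + V-Sync: each frame waits for the next refresh tick,
    so a frame that misses its window is shown a full tick late (stutter)."""
    intervals = []
    for t in render_times_ms:
        ticks = -(-t // FIXED_WINDOW_MS)  # ceiling division to the next tick
        intervals.append(ticks * FIXED_WINDOW_MS)
    return intervals

def variable_refresh_intervals(render_times_ms):
    """G-Sync/FreeSync-style: the display follows the GPU's frame delivery."""
    return list(render_times_ms)

renders = [14.0, 18.0, 15.0, 22.0]  # hypothetical per-frame render times (ms)
print(vsync_present_intervals(renders))     # misses land a full tick late (~33.33 ms)
print(variable_refresh_intervals(renders))  # intervals simply track the GPU
```

In the V-Sync case the 18ms and 22ms frames each arrive a whole refresh late, which is exactly the repeated-frame stutter described above; the variable-refresh case never repeats a frame.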

    We haven't yet had an extended hands-on with FreeSync, but we'll update you as soon as possible.

    That's it for this TechRAID episode! Please remember to subscribe so you can keep up-to-date; hit the news article in the link below and check out our Patreon subscriber page if you'd like to support us in our endeavors.

    I'll see you all next time, peace!

    Steve Burke

    Steve started GamersNexus back when it was just a cool name, and now it's grown into an expansive website with an overwhelming amount of features. He recalls his first difficult decision with GN's direction: "I didn't know whether or not I wanted 'Gamers' to have a possessive apostrophe -- I mean, grammatically it should, but I didn't like it in the name. It was ugly. I also had people who were typing apostrophes into the address bar - sigh. It made sense to just leave it as 'Gamers.'"

    First world problems, Steve. First world problems.

