AMD: "FX is Not EOL" & Why What We Need in a CPU is Changing

By Steve Burke. Published December 07, 2013 at 9:46 am

We recently posted about an alleged slide leak from AMD that, if real, seemed to suggest the end of the line for FX-series CPUs and the AM3+ socket. The slide stirred a great deal of concern throughout major social networks and enthusiast websites, and so I attempted to bring things back down to earth in our original analysis. I reached out to AMD for comment prior to publication, but we weren't able to speak with the company until yesterday.

AMD Manager of APU/CPU Product Reviews James Prior was quick to dispute the slide's legitimacy: "I've never seen that slide before, I don't know where that came from," he told me in our call, and quickly followed up by stating that "it's not real. FX is not end-of-life." Prior pointed out that it's rare to ever see more than a year into the future with roadmaps, and that the real AMD roadmap looks like this:

 

[Image: AMD desktop roadmap slide]

For a comparison, here's the faked roadmap that led to the speculation about FX's alleged death:

[Image: alleged leaked AMD FX roadmap slide]

The "leaked" roadmap proved to be fake, but gave us a great discussion opportunity.

With that out of the way, we dug into the underlying concern: AM3+ has been around for a while, and FX has been beaten up a good deal over its lifespan (sometimes fairly, sometimes not so much). With APU / CPU adoption split at roughly 70% / 30% (judging from the Steam hardware survey and other aggregate sources), it's easy to see where enthusiast concern would arise, given the progression of APUs and IGP-enabled CPUs in the market. AMD's official statement on the FX-series as a product line, its current chipsets, and the AM3+ socket is below:

"AMD will continue to supply AM3+ and AMD FX processors for the foreseeable future, as per AMD's official roadmap update at APU'13 [above]. Recently, AMD launched the FX-9000 series, AMD's fastest desktop processor to date. As AMD's business continues to evolve, AMD will focus on the areas of growth including support for the desktop PC enthusiast leveraging AMD's world-class processor design IP, including heterogeneous compute. AMD's FX branded products will continue to evolve and we look forward to sharing those updates in the future."

To AMD's point, the FX-9370 is easily one of the highest-performing solutions on the market right now, especially considering its present $200 price point.

And on the note of AMD's future updates, let's take this opportunity to discuss the future of game-hardware interactions.

Adapting to New Trends: What We Need in a CPU is Changing

The objective of this section of the article is to outline the game industry's trends and the hardware industry's facilitation of these trends; we're not telling you that AMD or Intel "do it better," and frankly, there's room for both philosophies in computing. Due to the recent surge in interest in AMD's approach to CPU and APU technology, the below will also present some of our own analysis of their vision.

It's well-known that AMD doesn't share Intel's vision of the future. Although both companies are pushing for lower TDPs (relative to their earlier offerings) and integrated graphics technology, each has differentiating factors. The two manufacturers and their respective architectures are not always linearly comparable -- assuming they are is an easy trap to fall into, since benchmarks and charts only run so deep. Development philosophy plays just as much of a role in functionality as raw performance.

Since Bulldozer's unveiling a few years ago, AMD has made it clear that it sees future software trending toward heavier use of multithreading, more integer-intensive applications, and greater parallelization of processing. Bulldozer underscored that AMD doesn't see the same future in x86 architecture as Intel does.

These trends have taken shape in the likes of CryEngine 3, which natively supports up to eight simultaneous threads and spawns three threads by default (physics, rendering, game logic); Frostbite 3, which works to more evenly distribute CPU load across the physical cores presented to it (and doesn't seem to discriminate by CPU manufacturer); and Unreal Engine 4, the newest 'AAA' engine, which deploys new system architectures to more efficiently process intense physics and lighting calculations.
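To make that thread-splitting idea concrete, here's a minimal, hypothetical sketch of the pattern (not taken from any engine's actual code): dedicated worker threads for physics, rendering, and game logic, each ticking on its own so the OS can schedule them across separate physical cores.

```cpp
// Hypothetical illustration of a subsystem-per-thread game loop; real engines
// (CryEngine's job system, for instance) are far more sophisticated than this.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<bool> running(true);

// Each subsystem runs on its own thread, so physics, rendering, and game
// logic can be scheduled onto different physical cores by the OS.
void subsystem_loop(const char* name, int tick_ms) {
    while (running.load()) {
        // ... do this subsystem's work for one tick ...
        std::printf("[%s] tick\n", name);
        std::this_thread::sleep_for(std::chrono::milliseconds(tick_ms));
    }
}

int main() {
    std::thread physics(subsystem_loop, "physics", 16);    // roughly 60 Hz
    std::thread render(subsystem_loop, "render", 16);
    std::thread logic(subsystem_loop, "game logic", 33);   // roughly 30 Hz

    std::this_thread::sleep_for(std::chrono::seconds(1));  // run briefly for the demo
    running.store(false);

    physics.join();
    render.join();
    logic.join();
    return 0;
}
```

Real engines go much further than this, but the core idea is the same: work that's split into independent threads is work the scheduler can actually spread across the CPU.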

All three of these engines have put even greater focus on the ability to offload tasks to the GPU. This includes physics calculations, which can be sent to the APU's graphics component while the discrete card handles other tasks (with thanks to Dual Graphics and other AMD tech).
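For a rough sense of what "sending physics to the graphics silicon" looks like at the API level, here's a hedged OpenCL sketch. OpenCL is just one widely supported route to the GPU, and the kernel, buffer sizes, and time step below are invented for illustration rather than pulled from any engine or from AMD's Dual Graphics implementation.

```cpp
// Sketch: offloading a trivial position-update "physics" step to an OpenCL
// GPU device (the APU's graphics component would qualify). Error checking is
// omitted for brevity.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kKernelSrc = R"CLC(
__kernel void integrate(__global float* pos, __global const float* vel,
                        const float dt) {
    size_t i = get_global_id(0);
    pos[i] += vel[i] * dt;   // trivial Euler step as a stand-in for real physics
}
)CLC";

int main() {
    const size_t n = 1 << 20;                      // one million particles (arbitrary)
    std::vector<float> pos(n, 0.0f), vel(n, 1.0f);

    cl_platform_id platform;  cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_int err;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    cl_mem dPos = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 n * sizeof(float), pos.data(), &err);
    cl_mem dVel = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 n * sizeof(float), vel.data(), &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kKernelSrc, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "integrate", &err);

    float dt = 0.016f;                             // one ~60 Hz frame
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &dPos);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &dVel);
    clSetKernelArg(kernel, 2, sizeof(float), &dt);

    // The host describes the work once; the device runs it massively in parallel.
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, dPos, CL_TRUE, 0, n * sizeof(float), pos.data(),
                        0, nullptr, nullptr);

    std::printf("first particle moved to %.3f\n", pos[0]);

    clReleaseKernel(kernel);  clReleaseProgram(prog);
    clReleaseMemObject(dVel); clReleaseMemObject(dPos);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}
```

The point of the exercise is the dispatch model: the host describes the work once, and the device chews through it across thousands of parallel threads -- which is why the APU's GPU component is an attractive target for physics while the discrete card stays busy rendering.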

[Image: CryEngine 3 hardware utilization chart]

CryEngine 3 very intelligently splits the workload between the GPU & CPU, saturating both with high efficiency.

In professional software, we see Adobe and Sony integrating GPU acceleration for encoding, transcoding, and rendering. This support has slowly moved away from strictly professional video hardware (Quadro FX, FirePro cards) and toward accommodating more mainstream discrete processing units. Photoshop is another great example of software trending toward better whole-system utilization; rather than placing most of the load on a single component (like the CPU), Photoshop will -- depending on the task executed -- heavily consume RAM, CPU cycles, and GPU cycles for different functions. SSDs also see favor here, given the storage-intensive nature of Adobe's Scratch Disk usage.

Even encryption and hashing workloads - perhaps the most notorious consumers of CPU cycles in the early days - have been offloaded to GPUs for a while now. There's so much raw throughput in a GPU (owing to the parallel nature of its processing) that it often doesn't make sense to limit execution to the CPU.

Similarly, many of these applications now do integer-intensive crunching, so we see advantages in building additional integer (INT) units into a CPU -- which is exactly what Bulldozer's module design does, pairing two integer cores with a shared floating-point unit.

The point is, what we want in a CPU is changing. Even as consumers, we've left behind the days of simply asking for more cores, higher clocks, and more cache; we now need improved instruction sets, APIs, and innovative means to bypass OS- and application-layer bottlenecks (like DirectX). And, yes, even integrated graphics chips play a role in future architectures. With the modern desirability of GPUs for processing, and with Hybrid Graphics (effectively a pseudo-CrossFire), APUs will finally be put to work even in fairly high-end systems.

As far as needing new APIs and instruction sets, that's also pretty straightforward: The CPU as a component is already incredibly high-powered for any sort of mainstream application, so it's important to ensure that the software is able to fully utilize the potential of modern CPU design. Throwing more cores and higher frequencies at a program won't matter if the program isn't written to use them. Just look at Task Manager while running any game or application; chances are, CPU utilization is (still) fairly skewed toward the first two cores. Better utilization of the hardware by the software will only grow in importance going forward.
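As a simple, hypothetical illustration of how much the software side matters: the same integer-heavy job can pin a single core or scale across every core the CPU exposes, purely depending on how the program divides the work. A minimal sketch (the checksum loop is an arbitrary stand-in for real work):

```cpp
// Sketch: the same integer-heavy job, run single-threaded vs. split across
// every hardware thread the CPU reports. Only the second version will light
// up all the cores in Task Manager.
#include <cstdint>
#include <cstdio>
#include <thread>
#include <vector>

// Arbitrary integer "work": a running checksum over a range of values.
static uint64_t checksum(uint64_t begin, uint64_t end) {
    uint64_t sum = 0;
    for (uint64_t i = begin; i < end; ++i)
        sum += (i * 2654435761ULL) ^ (i >> 7);
    return sum;
}

int main() {
    const uint64_t total = 400000000ULL;

    // Naive version: one thread does everything while the other cores idle.
    uint64_t serial = checksum(0, total);

    // Parallel version: carve the range into one chunk per hardware thread.
    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 2;                        // fallback if unknown
    std::vector<uint64_t> partial(workers, 0);
    std::vector<std::thread> pool;
    const uint64_t chunk = total / workers;

    for (unsigned w = 0; w < workers; ++w) {
        uint64_t begin = w * chunk;
        uint64_t end = (w == workers - 1) ? total : begin + chunk;
        pool.emplace_back([&partial, w, begin, end]() {
            partial[w] = checksum(begin, end);            // each core gets its own slice
        });
    }
    for (auto& t : pool) t.join();

    uint64_t parallel = 0;
    for (uint64_t p : partial) parallel += p;

    std::printf("serial=%llu parallel=%llu (match: %s)\n",
                (unsigned long long)serial, (unsigned long long)parallel,
                serial == parallel ? "yes" : "no");
    return 0;
}
```

The hardware is identical in both cases; only the instructions handed to it change -- which is exactly the moving-crew problem described next.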

Look at it this way: You're moving a lot of furniture. You hire two large fellas to help with the moving process. If the instruction you pass to these movers is that they each carry one item at a time -- no matter how intense the workload is -- and that they're not to work together on larger items, then they're going to be inefficient. That inefficiency is not inherently the fault of the workers, but of the instruction passed to them.

I discussed the importance of software optimization in my PS4 analysis and Star Citizen Technology overview.

These points are reflected in AMD's new technologies: HSA (Heterogeneous System Architecture), hUMA (heterogeneous Uniform Memory Access), Dual Graphics, Mantle, and the manner in which AMD has designed its APUs (allocating nearly 40% of the die to the GPU component). As the tech continues to develop, we can see the vision unfolding and the paths of each advancement merging.

[Image: HSA / Accelerated Processing Unit diagram]

HSA allows the on-die GPU to work in unison with the CPU to process incoming tasks.

There's always going to be a place for CPUs in the truest sense: Server chips, from my viewpoint, have no need to integrate graphics (yet); high-end enthusiast CPUs from Intel (if the line stays alive) are of a similar classification - there's no need for an IGP; and AMD has now officially stated that it intends to continue development of FX for the foreseeable future, so even those might stick around. Traditional CPUs will be here for a while yet, but as the means to simultaneously leverage IGPs and CPUs advance, I think it'll become more desirable to forfeit some fraction of the die space for a GPU component.

AM3+ Stays For Now: The Time Isn't Right for a Refresh

All these discussion topics aside, we know that FX and AM3+ are here to stay for now. Even with criticism that AM3+ doesn't natively support PCI-e Gen3 interfaces (though the Gen2 x16 slots are often not fully saturated), it doesn't make sense to undergo a platform or chipset revamp right now.

DDR4 looms on the horizon. If AMD were to release a new platform implementing modern interfaces (and, with it, a new line of boards), upgrading users would be caught in a "DDR trap": buying into a brand-new DDR3 platform only to see DDR4 arrive shortly after. It makes more sense for AMD -- a company that has built a reputation around being the long-term support solution -- to wait until DDR4 has arrived, rather than do a small step-up before then just for PCI-e Gen3 & USB 3.0.

A Question to the Community: APUs and Their Future

I'd love to hear what the community of hardware enthusiasts thinks on this: Would you be more willing to adopt APUs for your higher-end PC builds if they included more cores? More L3 cache? What if they had less die space allocated to the GPU? Or would you always prefer a CPU-only solution?

- Steve "Lelldorianx" Burke.

