On Monday, AMD announced its upcoming plans for "SkyBridge," due out in 2015, and its custom ARM cores ("K12"), due in 2016. The 2014 roadmap AMD laid out is promising across mobile and x86 applications (more on the latter in a future post). AMD is the first company to cut costs with a single motherboard design that supports both ARM and x86 architectures, extending its potential reach to more consumers and continuing AMD's effort to let buyers keep their boards when upgrading CPUs. AMD has always occupied something of a niche market (except in the days when it dominated with the Athlon 64), but the company is expanding its strategy toward differentiation from semiconductor industry norms.
AMD's Mantle had a rocky unveiling with Battlefield 4 and has faced fierce attempts at invalidation by nVidia, but the company continues to plow through difficulties. In the face of DirectX 12 -- still some 20 months out -- AMD has just announced that "40 unique development studios pre-registered for private beta" access to the Mantle SDK. Developers can now use a new portal to access information pertinent to Mantle.
NVIDIA and AMD constantly go tit-for-tat on GPU + game bundles, each attempting to add that extra bit of value to sway buyers of new cards. How much that affects buying decisions is questionable, but I won't fight free games.
Anyway, we're keeping this one a lot quicker than previous AMD press conference write-ups. Here's the slide you care about:
It’s no secret that AMD has recently been posting losses. In fact, just two years ago, AMD reported a massive loss of about $1 billion in an earnings report. This was the catalyst for layoffs and organizational “restructuring.” AMD’s (NASDAQ: AMD) large losses were not unique, though -- both Intel and AMD saw their stock prices plummet in 4Q12. Rumors of the PC’s death abounded, but the story wasn’t over quite yet.
In 2013, AMD posted much smaller losses of about $83 million. AMD may post an overall loss this quarter (and has already predicted one), but its $1.4B revenue for the quarter is a 28% increase over last year. Losses are down 86% from this time last year, and the company even beat analysts’ predictions; much of this can be attributed to the growth of APUs and console deals, though the largest portion of AMD’s profit comes from its GPU division. Intel (NASDAQ: INTC) also recently reported net income of $1.9B for 1Q14 which, compared to its 4Q13 net income of $2.6B, looks numerically bad, though it still beat analyst predictions and is above par for this part of the year.
We've been following Star Citizen fairly extensively since its 2012 campaign. As journalists, part of the job is "discovering" games before they make it big; I always task writers with dedicating some portion of our time at PAX to discovering indie games, the hope being that one goes mainstream after we've made it in the door early. I vividly remember Star Citizen hitting the $800,000 mark on Kickstarter and feeling like I'd missed the boat for journalistic success -- it was at the height of its campaign, and everyone else had already started talking about it. Still, we linked up with CIG CEO & Chairman Chris Roberts to discuss technology in-depth (lots of hardware conversation in that link), territory that had been entirely unexplored up until that point. It's still one of my favorite articles I've worked on, and much of that content remains relevant today. Funny how much I've learned since then, too.
Months later, we caught up with Roberts at PAX East 2013 shortly before a discussion panel (filmed). Fast forward to July, and we found ourselves at the Cloud Imperium Games office in Santa Monica. At this point, Roberts' next major goal was $21 million; that'd allow him the freedom of ditching private investors in favor of crowd-sourcing the entire game, he told us, and it was no longer a pipe dream to do so. Everyone in the room knew the funding target was on the horizon, it was just a matter of when. I don't think any of us could have told you that Star Citizen would be sitting at $42 million -- more than double our July meeting -- less than a year later.
In late 2013, AMD launched its new GPU series, including the R9 290 and R9 290X, both of which ran quite loudly and at 95C. This led to a plethora of custom coolers for the two cards from third-party vendors, one of the most unexpected being VisionTek’s CryoVenom, a liquid-cooled 290 for $600.
Almost immediately after our press conference with nVidia concluded -- the one directly challenging the validity of Mantle -- AMD contacted us for a discussion of its R9 295X2 video card. The R9 295X2 has been teased for quite a while in traditional AMD marketing fashion, namely by sending extremely flattering photos to some of the major tech outlets. As of last week's call, we were able to get the full Radeon R9 295X2 specs, including TDP, fab process, memory & buses, and details on the difficulties with PSU support.
Let's get right to it with this one.
In my many years working on the journalism side of this industry, I've never seen nVidia put forth such an aggressive stance as exhibited during last week's press conference. We'll start this post with some rapid-fire catching up from the last few months.
The past months have been very AMD-intensive. AMD's Mantle API fronted a momentous marketing outreach, touting a bypass of DirectX's performance overhead, which has historically been a drain on CPU and GPU output. The hardware has been held back by the API, we were (somewhat accurately) told by AMD, and Mantle was the proposed solution to put developers "closer to the metal" -- or closer to the hardware level -- similar to console development. This news arrived during a period of silence for Microsoft's DirectX API, which (at that point) hadn't seen noteworthy development since Dx11.1 in 2012. It was the ideal opportunity for an emergent API to make a big splash without significant, refreshed competition.
After offering reddit's computer hardware & buildapc sub-reddits the opportunity to ask us about our nVidia GTC keynote coverage, an astute reader ("asome132") noticed that the new Pascal roadmap had a key change: Maxwell's "unified virtual memory" line-item had been replaced with a very simple, vague "DirectX 12" item. We investigated the change while at GTC, speaking to a couple of CUDA programmers and Maxwell architecture experts; I sent GN's own CUDA programmer and 30+ year programming veteran, Jim Vincent, to ask nVidia engineers about the change in the slide deck. Below is the official stance, along with our between-the-lines interpretation and analysis.
In this article, we'll look at the disappearance of "Unified Virtual Memory" from nVidia's roadmap, discuss an ARM/nVidia future that challenges existing platforms, and look at NVLink's intentions and compatible platforms.
(This article has significant contributions from GN Staff Writer & CUDA programmer Jim Vincent).
In a somewhat tricksy move today, AMD hosted a press conference a couple of miles down the road from nVidia’s ongoing GTC event. In yesterday’s keynote, nVidia CEO Jen-Hsun Huang introduced the new Titan Z video card, Pascal architecture, machine learning, and other upcoming GPU technologies. Now, less than 24 hours later, AMD has invited us over to look at its new high-end workstation solution – the W9100 FirePro GPU.
The presentation was pretty quick compared to what we got with nVidia, but the primary focus was on computationally-intensive OpenCL tasks, real-time color correction and editing playback in full 4K resolution, and “enabling content creation.”
Let’s start with the obvious.