AMD Earnings: GPU Sales Decrease, CPU Increase
AMD has reported its first-quarter earnings, and it notched another record quarter. AMD’s 1Q20 revenue came in at $1.79B, a 40% increase YoY, but a 16% decrease from last quarter. Notably, AMD expanded its footprint in the mobile segment thanks to Ryzen 4000 APUs.
AMD’s Computing and Graphics segment came in at $1.44B, a 73% increase YoY, but down 13% QoQ. AMD attributes the decrease to lower graphics card sales. Enterprise, Embedded and Semi-Custom segment revenue was $348M, down 21% YoY. This was due to lower semi-custom sales, likely thanks to the winding down of the current generation of consoles. AMD says this was offset by Epyc sales.
For next quarter, AMD expects $1.85B in revenue, a 21% increase YoY, driven by demand for Ryzen and Epyc processors. AMD is also still positioning itself for aggressive double-digit growth in the server market next quarter, and remains on track for Zen 3 and RDNA 2 at some point later this year.
Intel Teases Xe Graphics Inside LGA Package
The Intel Graphics Twitter account tweeted an image of what at first glance appears to be a massive Xeon. However, this is no Xeon. Of course, speculation abounds as to what exactly it is -- and why it's in a socketable LGA package -- but Intel later confirmed (via Raja Koduri) that it's a data center-targeted Xe HP GPU.
Intel’s Xe Graphics architecture is set to encompass three segments: Xe LP (low power), Xe HP (high performance), and Xe HPC (high performance computing). Xe LP we believe will mostly be relegated to iGPU solutions, whereas Xe HP will address enthusiast and professional markets. Finally, Xe HPC will be aimed at data centers and supercomputers.
Raja Koduri made an interesting tweet about the mysterious chip. “The ‘baap of all’ is back, battle-fielding and b-floating.” For those wondering, “baap of all” is a codename of sorts that Intel India has bestowed upon Xe HP, and means “father of all”. We can take battle-fielding to mean a few things, gaming among them. It’s an odd tease, being that this particular GPU isn’t targeted at gaming. As best anyone can currently tell, it appears to be an MCM-based design with 4x chiplets and 4x HBM2e stacks.
However, b-floating only means one thing -- the bfloat16 floating-point format.
Bfloat16 is used for machine learning and AI acceleration, and Intel has been beefing up its support for it. The AVX-512 instruction set has BF16 extensions, and Intel has been slowly adopting them for certain product families, such as Cooper Lake-SP. So, “b-floating” is a clear reference to Intel’s HPC goals with Xe Graphics. That said, there isn’t sufficient evidence to confirm this is Intel’s Ponte Vecchio GPU, either.
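For readers unfamiliar with the format: bfloat16 simply keeps the upper 16 bits of an IEEE 754 float32 -- the sign bit and the full 8-bit exponent, but only 7 mantissa bits. That preserves float32's dynamic range while cutting precision (and memory/bandwidth) in half, which is why it's popular for ML workloads. A minimal Python sketch of the conversion (by truncation; real hardware typically rounds to nearest-even):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Truncate a float32 to bfloat16 by keeping only its upper 16 bits."""
    f32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return f32_bits >> 16

def bfloat16_bits_to_float32(b16: int) -> float:
    """Re-expand bfloat16 bits to a float32 by zero-filling the low 16 bits."""
    return struct.unpack(">f", struct.pack(">I", b16 << 16))[0]

# Same exponent range as float32, but only 7 mantissa bits of precision:
x = 3.14159265
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(x))
print(approx)  # close to pi, but only ~2-3 decimal digits survive
```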
As for the LGA package, it’s most certainly not LGA1200 or LGA115x. It could be Intel’s LGA2066 or LGA3647, but those would just be guesses. It’s also possible that the GPU is an early sample, and the LGA package is just for convenience. We’ve certainly seen this before, where development units are socketable for rapid prototyping. As to whether this thing is Xe HPC or Xe HP, we’ll have to wait and see.
Ampere Inbound: GTC 2020 Keynote Slated For May 14th
It seems GTC 2020 is on again, after multiple attempts at scaling it down and postponing it in the wake of the current pandemic.
As of this writing (assuming things don’t change again), Nvidia will host its GTC keynote on May 14, 2020. The keynote will be delivered virtually via YouTube by Nvidia CEO Jensen Huang, presumably leather jacket and all.
“Get amped for the latest platform breakthroughs in AI, deep learning, autonomous vehicles, robotics and professional graphics,” says Nvidia. Note the particular use of the word “amped.” We can healthily speculate that Nvidia intends to announce its next-generation GPU architecture in the form of Ampere.
Ampere is the apparent successor to Turing, and has been heavily rumored to underpin the RTX 30-series cards.
Lenovo Will Offer Thinkpads With Fedora Workstation
It seems Lenovo is keen to join Dell in offering OEM options for Linux users. Lenovo and Red Hat have been working together to bring Fedora Workstation to select Lenovo Thinkpad models, and possibly more.
Specifically, Lenovo will offer the following models with Fedora Workstation pre-installed: Thinkpad P1 Gen2, Thinkpad P53, and Thinkpad X1 Gen8. As stated above, Lenovo is leaving room to further flesh this lineup out. Lenovo is also shipping the laptops with software from official Fedora repositories.
Lenovo seems to be trying to distinguish its Linux offerings from Dell’s by offering a different distro (Dell currently offers Ubuntu-based machines) and by also offering a larger swath of models supporting Linux. Either way, it’s good news for Linux users.
Toshiba Addresses SMR
Toshiba has finally decided to be more transparent about its use of SMR in its HDDs. This follows both Seagate and Western Digital fessing up to using SMR technology in their respective client HDDs and not being wholly upfront about it.
Of course, Toshiba’s comments are replete with their fair share of marketing fluff and defensive statements. “SMR technology is recognized as having an impact on write-speeds in drives where this technology is used, especially in the case of continuous random writing. For this reason, Toshiba products are carefully tailored to specific workloads and use cases,” says Toshiba.
TL;DR: We’d rather market our drives based on use cases and workloads defined by obscure internal benchmarks than explicitly state that they use SMR and aren’t suitable for write-intensive applications. WD copped a similar tone before deciding to get a bit more humble. WD went on to claim it would update its marketing materials and documentation to properly identify which drives use SMR, as well as include benchmarks for those drives in customer-facing documentation. To be clear, Toshiba has not said that. Neither has Seagate, for that matter.
However, not for nothing, Toshiba has disclosed exactly what drives use SMR -- and it’s a bit more than they let on originally. The following drives from Toshiba use SMR: P300 6TB, P300 4TB, DT02 6TB, DT02 4TB, DT02-V 6TB, DT02-V 4TB, L200 2TB, L200 1TB, MQ04 2TB, MQ04 1TB.
We originally said we weren’t hopeful that the three HDD makers would reverse course here, but it seems we may have spoken too soon. While Toshiba and Seagate could at least match Western Digital’s efforts, it’s good to see them all come clean, to an extent. It would be nicer still to see companies not engage in such nebulous tactics to begin with.
Researchers Suggest Measuring Semiconductors By Density
Semiconductor manufacturers have long used the minimum gate length of a transistor to identify a process node. As transistors have become ever smaller, the nanometer number attached to them has become more confusing, as node names are not directly comparable; i.e., you can’t directly compare Intel’s 10nm to TSMC’s 7nm based on the node number alone. This is something we’ve talked about before, and even have a video about it with David Kanter.
A recent research paper outlines a new metric for measuring semiconductors, one based on density rather than gate length. The paper is penned by nine authors, including researchers at Berkeley, Stanford, MIT, the University of California, and one researcher from TSMC. The paper proposes that by using a standardized density metric to measure the advancement of semiconductors, we can eliminate the confusing node naming, which in recent years has become as much marketing as engineering.
Intel floated a similar idea a few years ago, but there’s always been some debate over how to quantify the density of transistors. The research paper proposes measuring logic, memory, and connectivity.
“Thus, we propose the use of the following three-part number as a metric to gauge advancement of future semiconductor technologies: [DL, DM, DC], where DL is the density of logic transistors (in #/mm2), DM is the bit density of main memory (currently the off-chip DRAM density, in #/mm2), and DC is the density of connections between the main memory and logic (in #/mm2),” reads the paper.
As the researchers frame it, the LMC density metric would be better suited to capture the progress in logic, memory, and packaging/integration technologies in a chip, which researchers claim have become decoupled from a simple node number -- and we’d agree with that sentiment. The LMC metric also doesn’t have to completely replace node numbers, but can supplement them.
“While companies may continue to use their preferred labels to market their technologies, the LMC density metric can serve as a common language to gauge technology advances among semiconductor manufacturers for their customers and other parties to facilitate clear communication. This metric accounts for the benefits that come from the integration of logic, memory, and connectivity into a system. In addition to being consistent with historical trends and our intuition about computing systems, the LMC density metric is applicable and extensible to future logic, memory, and packaging/integration technologies,” the researchers say.
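To make the three-part number concrete, here’s a toy Python sketch of how the [DL, DM, DC] tuple would be computed. Every figure below is hypothetical and purely for illustration -- the paper defines the metric, not these example values:

```python
def lmc_metric(logic_transistors, logic_area_mm2,
               memory_bits, memory_area_mm2,
               connections, interface_area_mm2):
    """Return the three-part LMC density tuple, each term in units per mm^2."""
    dl = logic_transistors / logic_area_mm2   # DL: logic transistor density
    dm = memory_bits / memory_area_mm2        # DM: main-memory bit density
    dc = connections / interface_area_mm2     # DC: logic<->memory connection density
    return (dl, dm, dc)

# Hypothetical chip: 8 billion transistors on an 80 mm^2 logic die,
# 64 Gbit of DRAM on 60 mm^2, and 3,000 logic-memory connections over 10 mm^2.
dl, dm, dc = lmc_metric(8e9, 80, 64e9, 60, 3000, 10)
print(f"[DL, DM, DC] = [{dl:.2e}, {dm:.2e}, {dc:.0f}] per mm^2")
```

The point of the tuple, as the researchers argue, is that each term can advance independently -- a packaging breakthrough moves DC without touching DL -- which a single node number can’t express.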
The paper also points out that we currently have no meaningful way to identify the progress of chips once we sink below 1nm and run out of nanometer numbers -- even more rationale for gauging chips by density rather than by an arbitrary nanometer number.
Recapping Intel’s Comet Lake-S And Z490
We actually have a video on this topic, in addition to an article, both by Steve. We’d point readers to both for a more thorough overview, as well as sifting through some of Intel’s marketing. However, we’ll recap a few bullet point items here for those who regularly tune in to HW News.
- Yes, the names are just as bad as everyone feared. Yes, Intel’s marketing still needs work. Also, don’t confuse the i9-10900K with the i9-10900X. These are the times. Remember your Intel suffixes.
- Intel also effectively confirmed information GN presented weeks ago: Intel is bringing the silicon and substrate closer together by shaving 300 microns off of the silicon, in an effort to facilitate quicker heat transfer to the IHS. The IHS is correspondingly thicker to offset the receding silicon underneath it. This should allow Comet Lake-S parts to retain the LGA115x clamping force for coolers/heatsinks, as LGA1200 is confirmed to support LGA115x coolers. Intel is also using its STIM, although we aren’t presently clear on which SKUs outside of K-SKUs, if any, get a soldered IHS. We’ll see.
- The extra 49 pins on the LGA1200 socket are for “improved power delivery and support for future incremental I/O features,” according to Intel. We take that to mean PCIe Gen4, and that it likely has something to do with the future Rocket Lake-S, which is rumored to have a greatly overhauled I/O.
- Intel is extending the same somewhat confusing Thermal Velocity Boost (TVB) it rolled out with Comet Lake-H to Comet Lake-S. On both Comet Lake-H and -S parts, TVB monitors a thermal threshold -- 65C for mobile parts, 70C for desktop parts. When a chip is below that threshold and operating within its power budget, TVB raises the single-core turbo by two bins over the Turbo Boost 2.0 speed (e.g., 5.1GHz to 5.3GHz). We’ll know more about this when we test the chips.
- The new i3 SKUs are the old i7 -- we told you so. Moreover, Intel has enabled hyper-threading all the way up and down the stack, rectifying the (heavily criticized) move to disable hyper-threading on i7 parts.
- Z490 will come with some interesting overclocking-centric features we’ll be testing specifically, like the voltage-frequency curve and per-core hyper-threading toggles.
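The TVB behavior described in the bullets above boils down to a simple conditional. Here’s a sketch of that decision as we currently understand it -- the thresholds and bin counts mirror Intel’s public description, but the logic itself is our simplification of an undisclosed internal algorithm:

```python
# One turbo "bin" is 100MHz.
BIN_MHZ = 100

def single_core_turbo(base_turbo_mhz: int, temp_c: float,
                      within_power_budget: bool, desktop: bool = True) -> int:
    """Sketch of the TVB single-core turbo decision (our simplification)."""
    threshold_c = 70 if desktop else 65  # Comet Lake-S vs. Comet Lake-H
    if temp_c < threshold_c and within_power_budget:
        return base_turbo_mhz + 2 * BIN_MHZ  # TVB active: +2 bins over TB 2.0
    return base_turbo_mhz                    # TVB inactive: plain TB 2.0 speed

# A desktop part with a 5.1GHz Turbo Boost 2.0 single-core speed:
print(single_core_turbo(5100, temp_c=60, within_power_budget=True))   # 5300
print(single_core_turbo(5100, temp_c=85, within_power_budget=True))   # 5100
```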
Nvidia Builds A Cheap, Open-Source Ventilator
Nvidia’s Chief Scientist Bill Dally has put together a design for a ventilator that is both low in cost and open-source. The design is aimed at being easy to assemble, and can be built for around $400. With open-source, 3D-printed parts, the cost could get closer to $100.
Dally’s design is based around two components: a proportional solenoid valve and a microcontroller. Dally’s early prototype consisted of a solenoid valve, a microcontroller taken from a cheap computer, several common pipe fittings, and “several thousand lines of code.” That early design successfully inflated and deflated a rubber glove.
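For a sense of what a microcontroller driving a proportional solenoid valve actually does, here’s a heavily simplified, purely hypothetical control-loop sketch. None of the names, pressures, or timings below come from Dally’s design (his is “several thousand lines of code”, not a dozen); this just illustrates the valve-plus-controller concept:

```python
import time

class ProportionalValve:
    """Stand-in for a real valve driver (hypothetical interface)."""
    def set_opening(self, fraction: float) -> None:
        # Clamp the commanded opening to the valve's physical 0..1 range.
        self.opening = max(0.0, min(1.0, fraction))

def breath_cycle(valve, read_pressure_cmh2o,
                 inhale_target=20.0, exhale_target=5.0,
                 inhale_s=1.0, exhale_s=2.0, gain=0.05):
    """One inhale/exhale cycle using simple proportional control."""
    for target, duration in ((inhale_target, inhale_s),
                             (exhale_target, exhale_s)):
        end = time.monotonic() + duration
        while time.monotonic() < end:
            error = target - read_pressure_cmh2o()
            valve.set_opening(gain * error)  # open further when below target
            time.sleep(0.01)                 # ~100Hz control rate
```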
Now, Dally’s design has advanced to being successfully tested on a lung simulator, after seeking expert input from medical and engineering professionals. Dally is also seeking to gain emergency use authorization from the FDA, which will allow his design to move into the manufacturing phase.
Dally claims he can bolt the device’s components together in 5 minutes, and that the entire ventilator can be attached to a simple display and slid into a compact Pelican carrying case.
Editorial: Eric Hamilton
Host, Additional Reporting: Steve Burke
Video: Keegan Gallick