AMD Feels The Server Recession, Too, But Growth Is Looming Large

With a server recession underway and its latest Epyc CPUs and Instinct GPU accelerators still ramping, this was a predictably soft quarter for AMD, though not a terrible one in the scheme of things. But the company is projecting that its datacenter business will see somewhere around 50 percent growth in the second half of 2023 compared to the first half, and that will make up for a lot of lost ground.

In the second quarter ended in June, AMD’s overall revenues were down 18.2 percent to $5.36 billion, and thanks to investments in its various chip lines and lower volumes in the PC and gaming lines, the company’s profits collapsed from $447 million in the year-ago period (which was no great shakes in its own right) to a mere $27 million in the current quarter. And like Intel in the same quarter, AMD had to resort to tax benefits to even get there. AMD had a $20 million operating loss, and it was $46 million in other income and a $23 million tax benefit that allowed the company to post that $27 million gain.

AMD ended the quarter with $6.29 billion in cash and short-term investments, so it has plenty of cushion to get through a tough spot in the PC business and a slowdown in the datacenter business.

The Data Center group had sales of $1.32 billion in the quarter, down 11.1 percent, and operating income collapsed by 68.9 percent to $147 million, or 11.1 percent of revenue. (That’s a lot of ones, isn’t it?) The sales and profitability of the Data Center group were pretty much a carbon copy of what happened in the first quarter. In the first half of 2022, AMD’s Data Center group had $2.78 billion in revenues and $899 million in operating profit, which represented 32.3 percent of revenues. In the first half of 2023, when in theory AMD should have been minting coin with its “Genoa” Epyc 9004 processors launched last November, demand has been sluggish thanks to a stall in spending by the hyperscalers and cloud builders and skittishness among enterprises. Data Center group revenues for the half came to only $2.62 billion, down 5.9 percent, and profitability at the operating level fell by 67.2 percent to a mere $295 million.
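For readers who want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python using the rounded first-half figures cited above; any small differences from the percentages we quote come down to rounding in the underlying dollar figures.

```python
# Back-of-the-envelope check of the first-half Data Center comparisons,
# using only the rounded figures cited above (all dollars in millions).
h1_2022_revenue, h1_2022_op_income = 2_780, 899
h1_2023_revenue, h1_2023_op_income = 2_620, 295

revenue_change = (h1_2023_revenue / h1_2022_revenue - 1) * 100
op_income_change = (h1_2023_op_income / h1_2022_op_income - 1) * 100
margin_1h_2022 = h1_2022_op_income / h1_2022_revenue * 100
margin_1h_2023 = h1_2023_op_income / h1_2023_revenue * 100

print(f"Revenue change:           {revenue_change:+.1f} percent")    # about -5.8 percent
print(f"Operating income change:  {op_income_change:+.1f} percent")  # about -67.2 percent
print(f"1H 2022 operating margin: {margin_1h_2022:.1f} percent")     # about 32.3 percent
print(f"1H 2023 operating margin: {margin_1h_2023:.1f} percent")     # about 11.3 percent
```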

AMD and Wall Street keep talking about how the “El Capitan” supercomputer being built by Hewlett Packard Enterprise using AMD CPUs and GPUs will boost revenues by several hundred million dollars. This makes sense, given that the machine costs $500 million and the AMD motors represent the vast majority of that cost. But we don’t necessarily believe that AMD will make much profit off this deal, as has long been the case with capability-class supercomputers sold to the national labs of the world. AMD will get to recognize this El Capitan revenue because HPE gets the parts to put into the machine, but HPE itself will have to go through a long acceptance process with Lawrence Livermore National Laboratory before it can in turn recognize the revenue it gets as the primary contractor for the building of the system. This will happen in 2024 for sure, but when depends on the benchmarks that Lawrence Livermore has set. As soon as HPE takes delivery of the parts for the nodes, AMD can book the revenue for the “Antares” Instinct MI300A CPU-GPU hybrids.

On a call with Wall Street analysts, chief executive officer Lisa Su said that AMD was on track to launch and deliver the MI300A hybrid CPU-GPU engines and the MI300X standalone GPUs in the fourth quarter, and touted the fact that Amazon Web Services, Microsoft Azure, Oracle Cloud, and Alibaba Cloud had all launched instances based on the Genoa CPUs. There are over 30 instances based on Genoa worldwide, and all told, there are 670 instances worldwide powered by AMD CPUs, and by the end of the year the company projects there will be nearly 900 – with the bulk of those new instances being powered by Genoa chips. Revenues from the Genoa CPUs doubled sequentially from Q1 to Q2, and the addition of the “Bergamo” and “Genoa-X” variants will help drive sales further, while the “Siena” Epyc CPU aimed at edge and telco deployments will launch later this quarter and join the mix. Su added that AMD expects sequential growth in the double digits in Q3 – which obviously can be anywhere from 10 percent to 99 percent growth, so that is a pretty wide bracket.

But the Genoa ramp takes time, and revenues were down in the Data Center group mostly because sales of the prior generation “Milan” CPUs were lower, while operating income was hit by higher research and development costs as well as those lower CPU revenues.

Jean Hu, AMD’s chief financial officer, said on the call that in Q3, AMD expected Data Center group sales to be flattish year-on-year and up “double digits” sequentially. Flattish means somewhere around $1.6 billion, perhaps, and given what Su said about the second half being 50 percent larger for the Data Center group compared to the first half, that puts Data Center revenues in Q4 at around $2.32 billion – which would clearly be AMD’s best quarter in the glass house in its history. And that would be over 40 percent growth year-on-year, too, which is impressive considering that Q4 2022 was the best revenue quarter in the datacenter for AMD to date. But as we said, it remains to be seen how profitable all of this Q4 2023 business will be. . . .
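If you want to see how we get to that Q4 figure, here is a rough sketch of the projection using the numbers above. Note that the roughly $1.66 billion Data Center figure for Q4 2022 is our own back-fill from AMD’s earlier reports and is not stated in the text above, so treat the year-on-year comparison as an estimate.

```python
# Rough sketch of the implied Q4 2023 Data Center projection, in billions of dollars.
h1_2023 = 2.62            # first half 2023 Data Center revenue, from above
h2_2023 = h1_2023 * 1.5   # "around 50 percent growth in the second half"
q3_2023 = 1.60            # "flattish" year-on-year guidance for Q3
q4_2023 = h2_2023 - q3_2023

q4_2022 = 1.66            # assumed Q4 2022 figure, back-filled for the comparison
yoy_growth = (q4_2023 / q4_2022 - 1) * 100

print(f"Implied Q4 2023 Data Center revenue: ${q4_2023:.2f} billion")  # about $2.33 billion
print(f"Implied year-on-year growth: {yoy_growth:.0f} percent")        # about 40 percent
```

Whether the answer rounds to $2.32 billion or $2.33 billion depends on how much slack you give that “flattish” Q3 figure.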

“In the datacenter market, we see a mixed environment as AI deployments are expanding,” Su explained on the call with Wall Street. “However, cloud customers continue optimizing their datacenter compute and enterprise customers remain cautious with new deployments. Against this backdrop, we expect strong growth driven by higher fourth gen Epyc and Ryzen 7000 processor sales and initial shipments of our Instinct MI300 accelerators in the fourth quarter. Longer term, while we are still in the very early days of the new era of AI, it is clear that AI represents a multibillion dollar growth opportunity for AMD across cloud, edge, and an increasingly diverse number of intelligent endpoints.”

AMD’s prognostications are for AI accelerators to drive over $150 billion in revenues across the IT industry by 2027, and AMD is increasing its research, development, and go-to-market spending to try to capture a larger piece of this pie.


15 Comments

  1. I see that AMD’s been discounting Genoa some to help with demand issues, so maybe gross margins will have to take a hit over the next few quarters, but maybe gross revenues will be better because of the discounting. MI300A will be nice for large language models with its loads of high-capacity HBM3 stacks. Since it’s an APU, the memory address space is unified, so only pointers need to get passed between the CPU cores and the CDNA GPU compute accelerators, and data movement will be kept to a minimum for greater power savings. That’s a main selling point for MI300A and AI-based systems for LLM usage. And really, AMD should look at that patent they applied for a few years back that puts an FPGA on each of those same HBM stacks for in-memory compute offloading, where the FPGA can be reprogrammed to whatever new AI algorithm best suits the workload. There are rumors of MI400 as well, so AMD needs to double down on AI to get some of that mad investment and share valuation that Nvidia’s been experiencing lately.

  2. I would love to see 1 or 2 socket 19″ rack mount machines packing MI300A chips. I am not running a hyperscaling shop, just a boutique dev shop. Machines like that lashed together via a fast LAN would offer some real horsepower for training medium-size models on data sets that can fit in the HBM.

  3. Timothy –

    “There are over 30 instances based on Genoa worldwide, and all told, there are 670 instances worldwide powered by AMD CPUs and by the end of the year the company projects there will be nearly 900…”

    What does “instance” mean in this context?

    • I believe it means a named instance type, not every variation of memory and vCPUs within the instance type. So P5, not p5.4x-large. At least that is how I think about it.

  4. Given that CPUs/servers are ~a duopoly, the important context is to compare against the other player.

    Don’t quote me, but I think the respective operating incomes were an AMD loss of ~$200m vs $2b.

  5. It’s crazy to think that next week, with NVDA earnings numbers, green team DC revenue will be higher than AMD+INTC combined!
    And now that the new, shinier, and faster 141GB Hopper will only be available on the GH200 module (so exclusively paired with Grace), NVDA’s CPU business will get an immense boost (in other words, using their GPU quasi-monopoly to push CPUs). Nvidia is becoming more and more the IBM of the old days (CPU+GPU+DPU+interconnect). Curious to see how the competition will adjust and counterattack…

  6. There is no server recession. It’s very difficult for me to believe Next Platform is taking part as a ‘mimicking syndicate repeater’ of this disinformation, although I do count Next Platform among those operators, albeit purely as an influential observer.

    The majority of the server market is standardized on Skylake and Cascade Lake, and through 2022 there was a very robust upgrade trend from v3/v4 to first-generation Xeon Scalable as a validated known. Sales are very good. E7 and E5 4-way have also been on life extension as a high-core-count known, including as a price/performance competitor (suspect tied to applications) in relation to high-core-count Epyc, which Skylake and Cascade Lake also compete with in terms of cores per multiprocessing platform. The same is now seen with Sapphire Rapids 4-way versus high-core-count Milan and Genoa.

    The issue is there are few, if any, known stable enterprise commodity offerings available in volume. Intel dumped some Ice Lake into the channel at the Q2 run end; perhaps some OEMs and enterprise customers might bite on this specific generation, relative to Cascade Lake, for a 2P commodity upgrade. Otherwise, the majority of the server market has nothing to do with hyperscale, albeit PaaS (the business of compute) passes as an enterprise-augmenting complement.

    Further, until AMD can supply in commodity mass-market volume, and until the Intel Sapphire Rapids, Emerald Rapids, and Granite Rapids applied computer science experiment of rapidly accelerated node jumps is complete in terms of platform stability and applications validation sustained over every next iteration of obsolescence, the majority of the server market will continue to wait for a known good, leaving AMD, Intel, and others to their hyperscale niche market. This is the issue. AMD, Intel, and the Arm server vendors are not producing commodity components, and subsequently the vast majority of all server installations sit in wait, because what they have works for their use case and its price/performance relative to a new generation’s risk is superior. Known good commodity platforms work; why fix them?

    For a validating proof of the training ‘niche market’ on the Nvidia side, NVDA datacenter accelerator volumes are down 60 percent from Q4, albeit net take is up 3X. AMD and Intel are no different in terms of their own volume limiters.

    Mike Bruzzone, Camp Marketing.

    • It’s simple, Mike. If you back out the AI servers that cost $300,000 to $400,000 each, the market revenues and shipments are way down. So aside from this very expensive niche, it’s down. Period. I’m not talking about CPU sales, I am talking about system shipments and revenues in general.

      • Tim, I get the primacy of the ‘hyperscale’ server market. It is profitable, and if AMD, Intel, and the Arm server vendors don’t address hyperscale needs, hyperscale will address them itself, and any design-build or merchant supplier ignoring this volume niche – in which they have how much component and system platform price-making power, except apparently Nvidia? – will be left out.

        What gets to me, monitoring back 12 years of what’s trading and selling in commercial server and enterprise compute, is an exponential installed base left unaddressed on yesteryear product because primary production is not fit for commodity use.

        Meaning, until first-tier server OEMs are willing to bank some amount of primary production as channel surplus, without fear of that next-platform inventory rotting on buyer beware, the industry remains in business-of-compute servitude. My concern is broad-market industry economic renewal over design enslaved to hyperscale.

        Intel attempts to address the commodity market with Silver and the bottom of the Gold grade, which do well in the channel. But my take, watching what trades, what trades in, what resells, and what Next Platform covers (and maybe this is a CW topic), is that how little primary production moves in relation to secondary suggests there is a large unaddressed product void in commodity data processing and commercial compute that gets down to design for use.

        Appreciate the rally. mb
