Debunking Datacenter Compute Myths, Part Two

Welcome to the second part of our Debunking Datacenter Compute Myths series. In the first part of this series, which you can see here, and again in this second part, The Next Platform sat down with Lynn Comp, vice president in AMD’s server business unit, to talk about some persistent datacenter compute myths that need to be addressed.

Here are the next five that we discussed:

  • Myth 6: You need an entirely new architecture or ISA if you want to really run an efficient cloud native environment at scale.
  • Myth 7: A monolithic die is better than a chiplet architecture.
  • Myth 8: You can’t migrate VMs across CPU vendors within the x86 architecture.
  • Myth 9: Two-socket servers offer better resiliency than single-socket servers.
  • Myth 10: You need to change applications to support the security features in modern processors.
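On Myth 8, the usual mechanism behind cross-vendor live migration is that the hypervisor presents guests with a baseline CPU model containing only the features common to every host in the pool, so a VM never comes to depend on a vendor-specific instruction. The sketch below illustrates that intersection idea in Python; the flag sets are hypothetical examples, not real CPUID dumps, and real hypervisors (e.g. KVM/libvirt) do this via named CPU models rather than raw flag sets.

```python
# Illustrative only: the feature sets below are made-up examples standing in
# for the CPU flags a hypervisor would read from each host (e.g. via CPUID).
amd_host_flags = {"sse4_2", "avx2", "aes", "rdrand", "sha_ni", "sev"}
intel_host_flags = {"sse4_2", "avx2", "aes", "rdrand", "sha_ni", "sgx"}

# A migratable guest CPU model exposes only the intersection of the hosts'
# feature sets, dropping vendor-specific extras (here, "sev" and "sgx").
baseline = amd_host_flags & intel_host_flags
print(sorted(baseline))  # → ['aes', 'avx2', 'rdrand', 'sha_ni', 'sse4_2']
```

In practice the same effect is achieved by configuring a custom guest CPU model that is the lowest common denominator of the migration pool, which is why the myth no longer holds.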

You undoubtedly have your own strong opinions about these, and may have some others to suggest, so feel free to comment on these two stories to expand the conversation and keep it going.

This content was sponsored by AMD.

6 Comments

  1. Quite an animated part 2, even while discussing restful APIs, and all leading to the MythBusters-style Grand Finale, a huge explosive deflagration (or not?)! (eh-eh-eh)

    These Myths (6-10) are interesting because, as noted by both interviewee and interviewer, they may have held true in the past, at earlier time points in tech development. Much innovation was produced over the years to turn them into what are now myths (which is great!).

    One thing about Myth 7 though is that we do, today, have one company that is doing something quite exceptional with an ultra-monolithic die: Cerebras (wafer-scale chippery). One wonders about the potential for this tech to be re-implemented in a Lego-style (to quote TPM) chiplet approach, distributed over several packages, while maintaining the same performance, and possibly moving into FP64 as well!

    • Lego cephalopods with reconfigurable translucent CPO tentacles sounds about right. Then again, it seems that Google’s TPUs are not super for branching linear algebra, sparse memory access, and high-precision math ( https://cloud.google.com/tpu/docs/intro-to-tpu ). Therefore, the Legos will need to be rather beefy, like EPYC Zen Duplos I think! 8^p

  2. Great second part of the myth-busting interview sequence! There’s probably a myth to be debunked about CoWoS as well, but I’m not 100% sure which, maybe something like:

    Myth 11: CoWoS woes will set HPC/AI/ML developments back a whole decade, or more(?).

    (P.S. my understanding, which could be wrong, is that CoWoS is mostly an issue for chips with HBM, yes?)

  3. My question is: Is the industry too feudally obsessed with moats?

    Google admitted to “Missing The Moat With AI” (TNP, May 4, 2023). Nvidia has a big CUDA moat, but TNP cautioned on October 12, 2023: “You can build a moat, but you can’t drink it”! Like AMD’s Dr. Su, I don’t believe in moats and prefer a drawbridge to medieval confinement!

    But maybe a big moat makes a trillion-dollar market cap an easier shot for sAMMANTAs to buy a small country or two?
