
Even AI Can’t Predict How Much Accelerated Iron The World Will Buy
As we have said many times here at The Next Platform, the only way to predict the actual future is to live it. …
As part of the pre-briefings ahead of the Google Cloud Next 2025 conference last week and then during the keynote address, the top brass at Google kept comparing a pod of “Ironwood” TPU v7p systems to the “El Capitan” supercomputer at Lawrence Livermore National Laboratory. …
If you want to be a leader in supplying AI models and AI applications, as well as the AI infrastructure to run them, to the world, it is also helpful to have a business that needs a lot of AI and that can underwrite the development of homegrown infrastructure that can be sold side-by-side with the industry standard. …
People – and when we say “people” we mean “Wall Street” as well as individual investors – sometimes have unreasonable expectations. …
The minute that search engine giant Google wanted to be a cloud – and the several years later when Google realized that companies were not ready to buy full-on platform services that masked the underlying hardware, but instead wanted lower-level infrastructure services that gave them more optionality as well as more responsibility – it was inevitable that Google Cloud would have to buy compute engines from Intel, AMD, and Nvidia for its server fleet. …
It has been more than a decade since Google figured out that it needed to control its own hardware fate when it came to the tensor processing that was going to be required to support machine learning algorithms. …
If Nvidia and AMD are licking their lips thinking about all of the GPUs they can sell to the hyperscalers and cloud builders to support their huge aspirations in generative AI – particularly when it comes to the OpenAI GPT large language model that is the centerpiece of all of the company’s future software and services – they had better think again. …
A year ago, at its Google I/O 2022 event, Google revealed to the world that it had eight pods of TPUv4 accelerators, with a combined 32,768 of its fourth-generation, homegrown matrix math accelerators, running in a machine learning hub located in its Mayes County, Oklahoma datacenter. …
It seems like we have been talking about Google’s TPUv4 machine learning accelerators for a long time, and that is because we have been. …
In an ideal platform cloud, you would not know or care what the underlying hardware was and how it was composed to run your HPC – and now AI – applications. …
All Content Copyright The Next Platform