Everybody But Nvidia And TSMC Has To Make It Up In Volume With AI
We keep seeing the same thing over and over again in the AI racket, and people keep reacting to it like it is a new or surprising idea. …
Here is a funny number to chew on. Sometime in the early part of 2026, if current trends persist, Google will have a spending rate on servers that is in excess of the inflation adjusted spending levels set by the entire world in the wake of the Dot Com bust. …
If the hyperscalers are masters of anything, it is driving scale up and driving costs down so that a new type of information technology can be cheap enough so it can be widely deployed. …
As we have said many times here at The Next Platform, the only way to predict the actual future is to live it. …
As part of the pre-briefings ahead of the Google Cloud Next 2025 conference last week and then during the keynote address, the top brass at Google kept comparing a pod of “Ironwood” TPU v7p systems to the “El Capitan” supercomputer at Lawrence Livermore National Laboratory. …
If you want to be a leader in supplying AI models and AI applications, as well as the AI infrastructure to run them, to the world, it is also helpful to have a business that needs a lot of AI that can underwrite the development of homegrown infrastructure that can be sold side-by-side with the standard in the industry. …
People – and when we say “people” we mean “Wall Street” as well as individual investors – sometimes have unreasonable expectations. …
The minute that search engine giant Google wanted to be a cloud – and the moment, several years later, when Google realized that companies were not ready to buy full-on platform services that masked the underlying hardware but instead wanted lower level infrastructure services that gave them more optionality as well as more responsibility – it was inevitable that Google Cloud would have to buy compute engines from Intel, AMD, and Nvidia for its server fleet. …
It has been more than a decade since Google figured out that it needed to control its own hardware fate when it came to the tensor processing that was going to be required to support machine learning algorithms. …
If Nvidia and AMD are licking their lips thinking about all of the GPUs they can sell to the hyperscalers and cloud builders to support their huge aspirations in generative AI – particularly when it comes to the OpenAI GPT large language model that is the centerpiece of all of the company’s future software and services – they had better think again. …
All Content Copyright The Next Platform