OpenAI Declares Its Hardware Independence (Sort Of) With Stargate Project
The dependency dance between AI pioneer OpenAI and the Microsoft Azure cloud and the application software divisions of its parent company is fascinating to watch. …
If Amazon is going to make you pay for the custom AI advantage that it wants to build over rivals Google and Microsoft, then it needs to have the best models possible running on its homegrown accelerators. …
Think of it as the ultimate offload model.
One of the geniuses of the cloud – perhaps the central genius – is that a big company with a large IT budget, perhaps on the order of hundreds of millions of dollars per year, and a certain amount of expertise builds a much, much larger IT organization with billions of dollars – and with AI now tens of billions of dollars – in investments, then rents out the vast majority of that capacity to third parties. Those third parties essentially allow the original cloud builder to get its own IT operations for close to free. …
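To make that offload arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it – the capital outlay, the 5 percent own-use share, the 30 percent gross margin on rented capacity – is a hypothetical assumption for illustration, not a reported number.

```python
# Hypothetical back-of-the-envelope: how renting out most of a cloud's
# capacity can make the builder's own IT consumption close to free.
# All figures below are illustrative assumptions, not reported numbers.

total_capex = 10_000_000_000   # $10B invested in datacenter capacity (assumed)
own_use_share = 0.05           # builder keeps 5% of capacity for itself (assumed)
gross_margin = 0.30            # 30% gross margin on rented capacity (assumed)

rented_share = 1 - own_use_share
rental_revenue = total_capex * rented_share * (1 + gross_margin)

# Net cost of the builder's own slice after rental income offsets the build-out.
# A negative result means rental income more than covers the entire investment.
net_cost_of_own_it = total_capex - rental_revenue
print(f"Net cost of builder's own IT: ${net_cost_of_own_it / 1e9:.2f}B")
```

Under these assumptions the rental income more than pays for the build-out, which is the "close to free" dynamic the excerpt describes.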
When you need to provide electricity to power and cool 100,000 accelerators, or maybe even 1 million of them in a few years, in a single location to run an AI model, you have to start thinking about the unthinkable if you also want to use carbon-free juice to power your AI ambitions. …
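For a rough sense of the scale that question implies, here is a sketch of the power math, assuming on the order of 1 kW per accelerator and a facility overhead (PUE) of 1.3 – both assumptions for illustration, not vendor specifications.

```python
# Rough power math for an accelerator campus. The per-device draw and
# PUE (power usage effectiveness) defaults are assumptions for illustration.

def campus_power_mw(num_accelerators: int,
                    kw_per_accelerator: float = 1.0,
                    pue: float = 1.3) -> float:
    """Estimate total facility power in megawatts."""
    it_load_kw = num_accelerators * kw_per_accelerator
    return it_load_kw * pue / 1_000  # kW -> MW, scaled by facility overhead

print(campus_power_mw(100_000))    # ~130 MW for 100,000 accelerators
print(campus_power_mw(1_000_000))  # ~1,300 MW (1.3 GW) for 1 million of them
```

At the million-accelerator mark this lands in gigawatt territory, which is why the sourcing of carbon-free power becomes the unthinkable question the excerpt raises.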
Here’s a question for you: How much of the growth in cloud spending at Microsoft Azure, Amazon Web Services, and Google Cloud in the second quarter came from OpenAI and Anthropic spending money they got as investments out of the treasure chests of Microsoft, Amazon, and Google? …
Three years ago, thanks in part to competitive pressures as Microsoft Azure, Google Cloud, and others started giving Amazon Web Services a run for the cloud money, the growth rate in quarterly spending on cloud services was slowing. …
It is not a coincidence that the companies that got the most “Hopper” H100 allocations from Nvidia in 2023 were also the hyperscalers and cloud builders, who in many cases wear both hats and who are as interested in renting out their GPU capacity for others to build AI models as they are in innovating in the development of large language models. …
We have been tracking the financial results of the big players in the datacenter that are public companies for three and a half decades, but last year we started dicing and slicing the numbers for the largest IT suppliers of stuff that goes into datacenters so we can give you a better sense of what is and what is not happening out there. …
The first thing to note about the rumored “Stargate” system that Microsoft is planning to build to support the computational needs of its large language model partner, OpenAI, is that the person doing the talking – reportedly OpenAI chief executive officer Sam Altman – is talking about a datacenter, not a supercomputer. …
At his company’s GTC 2024 conference this week, Nvidia co-founder and chief executive officer Jensen Huang unveiled the chip maker’s massive Blackwell GPUs and accompanying NVLink networking systems, promising a future where hyperscale cloud providers, HPC centers, and other organizations of size and means can meet the rapidly increasing compute demands driven by the emergence of generative AI. …