Tomorrow’s Datacenter Won’t Be Like Yesterday’s – Here’s Why
The way that organizations plan, design and run a datacenter was already under pressure. …
We can all talk until we are blue in the face about how weird it is for so much money to be spent on servers during the GenAI boom, but after reviewing the latest market report from IDC – which is once again, if sporadically, giving out some stats to the public – we thought that to feel the full impact of this change, we should draw you a picture of the past 26 years of server revenues by quarter so you can take it all in. …
While there are a lot of different file system and object storage options available for HPC and AI customers, many AI organizations and a lot of traditional HPC simulation and modeling centers choose either the open source Lustre parallel file system or the modern variants of IBM’s General Parallel File System (GPFS), known previously as Spectrum Scale and now known as IBM Storage Scale, as the storage underpinning of their applications. …
We keep seeing the same thing over and over again in the AI racket, and people keep reacting to it like it is a new or surprising idea. …
Did people complain – and by people, we mean Wall Street – as the world’s largest bookseller invested huge amounts of money to transform itself into an alternative to driving to Wal-Mart? …
We are still mulling over all of the new HPC-AI supercomputer systems that were announced in recent months before and during the SC25 supercomputing conference in St Louis, particularly how the slew of new machines announced by the HPC national labs will be not just advancing the state of the art, but also pushing down the cost of the FP64 floating point operations that still drive a lot of HPC simulation and modeling work. …
This year, about 45 percent of the revenues at Big Blue will come from software. …
Updated: We have obtained new information in the wake of publishing our story. …
It was only a matter of time before Marvell made another silicon photonics acquisition, and the $2.5 billion that the sale of its automotive Ethernet business to Infineon gave the company this past summer netted out to about half of the $3.25 billion that the company is shelling out to get its hands on Celestial AI, one of the several upstarts that hope to hook compute engines, memory, and switches together using on-chip optical engines and light pipes. …
The AI model makers of the world have been waiting for more than a year to get their hands on the Trainium3 XPUs, which have been designed explicitly for both training and inference, and which present a credible alternative to Nvidia’s “Blackwell” B200 and B300 GPUs as well as Google’s “Trillium” TPU v6e and “Ironwood” TPU v7p accelerators. …
All Content Copyright The Next Platform