Nvidia’s Vera-Rubin Platform Obsoletes Current AI Iron Six Months Ahead Of Launch
Having an annual cadence for the improvement of AI systems is a great thing if you happen to be buying the newest iron at exactly the right time. …
Space has always been at a premium in the datacenter, but the heat is on – quite literally – to drive up the density of GPU and XPU compute, not just because real estate is expensive, but because latency is perhaps more expensive. …
Nvidia co-founder and chief executive officer Jensen Huang paced the stage at the company’s GTC Washington DC event dressed in his standard black leather jacket over a black T-shirt, with black pants and black sneakers, but the messages he delivered during his keynote were decidedly red, white, and blue. …
It is beginning to look like the period spanning from the second half of 2026 through the first half of 2027 is going to be a local maximum in spending on XPU-accelerated systems for AI workloads. …
It has been clear for some time that Japan wants to have a certain amount of economic and technical independence when it comes to cloud computing in the Land of the Rising Sun. …
To a certain extent, Nvidia and AMD are not really selling GPU compute capacity as much as they are reselling just enough HBM memory capacity and bandwidth to barely balance out the HBM memory they can get their hands on, thereby justifying the ever-embiggening amount of compute their GPU complexes get overstuffed with. …
Nvidia co-founder and chief executive officer Jensen Huang did not do his OEM and ODM partners, who are the company’s main route to bring the infrastructure underpinning GPU systems to market, any favors when he suggested the company’s “Hopper” GPU platforms would be blown away by their “Blackwell” kickers. …
High tech companies always have roadmaps. Whether or not they show them to the public, they are always showing them to key investors if they are in their early stages, getting ready to sell some shares on Wall Street to make money – literally, going public – or talking to key customers who are interested in buying a platform, not just a point product to solve a problem today. …
The expansion of the computing capacity in Europe for both traditional HPC simulation as well as AI training and modeling continues apace, with the Leibniz-Rechenzentrum lab in Germany announcing late last week (when we took a day of holiday) that it would be shelling out €250 million – about $262.7 million at current exchange rates – to build a hybrid CPU-GPU cluster based on Nvidia compute engines to tackle both kinds of high performance computing. …
It has become a well-known fact these days that the switches used to interconnect distributed systems are not the most expensive part of that network; rather, it is the optical transceivers and fiber optic cables that comprise the bulk of the cost. …
All Content Copyright The Next Platform