
Software And Services For Profits, AI Hardware Is Only Table Stakes
A pattern seems to be emerging across the original equipment manufacturers, or OEMs, who have been largely waiting on the sidelines to get into the generative AI riches story. …
The generative AI revolution is making strange bedfellows, as revolutions and emerging monopolies that capitalize on them often do. …
Aside from all of the buzz that optics get in datacenter networking, copper is still king of the short haul. …
Several years ago, Subaru set a goal of eliminating fatal accidents involving its cars by 2030, and it is leaning heavily on AI to reach that target. …
Generative AI, with its various capacity and latency demands on compute and storage, is muscling out almost every other topic when conversations turn to HPC and the enterprise. …
It is not a coincidence that the companies that got the most “Hopper” H100 allocations from Nvidia in 2023 were also the hyperscalers and cloud builders, who in many cases wear both hats and who are as interested in renting out their GPU capacity for others to build AI models as they are in innovating in the development of large language models. …
How many cores is enough for server CPUs? All that we can get, and then some. …
At his company’s GTC 2024 conference this week, Nvidia co-founder and chief executive officer Jensen Huang unveiled the chip maker’s massive Blackwell GPUs and accompanying NVLink networking systems, promising a future where hyperscale cloud providers, HPC centers, and other organizations of size and means can meet the rapidly increasing compute demands driven by the emergence of generative AI. …
Last November, we got a sneak peek at a supercomputer called “Ceiba” that would be built jointly by Nvidia and Amazon Web Services, based on Nvidia’s then-new GH200 Grace-Hopper CPU-GPU compute complexes, its NVLink Switch 3 memory fabric for GPUs, and the Elastic Fabric Adapter (EFA2) Ethernet interconnect that is custom designed by AWS to link racks of machines to each other. …
We like datacenter compute engines here at The Next Platform, but as the name implies, what we really like are platforms – how compute, storage, networking, and systems software are brought together to create a platform on which to build applications. …
All Content Copyright The Next Platform