Nvidia’s Vera-Rubin Platform Obsoletes Current AI Iron Six Months Ahead Of Launch
Having an annual cadence for the improvement of AI systems is a great thing if you happen to be buying the newest iron at exactly the right time. …
It is beginning to look like the period spanning the second half of 2026 through the first half of 2027 is going to be a local maximum in spending on XPU-accelerated systems for AI workloads. …
To a certain extent, Nvidia and AMD are not really selling GPU compute capacity as much as they are reselling whatever HBM memory capacity and bandwidth they can get their hands on, with just barely enough of it to balance out the ever-embiggening amount of compute their GPU complexes get overstuffed with. …
There are many reasons why Nvidia is the hardware juggernaut of the AI revolution, and one of them, without question, is the NVLink memory sharing port that started out on its “Pascal” P100 GPU accelerators way back in 2016. …
High tech companies always have roadmaps. Whether or not they show them to the public, they always show them to key investors when they are in their early stages or getting ready to sell shares on Wall Street – literally, going public – and to key customers who are interested in buying a platform, not just a point product to solve a problem today. …
There are many things that are unique about Nvidia at this point in the history of computing, networking, and graphics. …
The generative AI revolution is making strange bedfellows, as revolutions and emerging monopolies that capitalize on them often do. …
If you stare at something for a little bit of time and let your mind wander, you can think of a new way to analyze something that you have looked at a bunch of times. …
If high bandwidth memory was widely available and we had cheap and reliable fusion power, there never would have been a move to use GPU and other compute engines as vector and matrix math offload engines. …
We like datacenter compute engines here at The Next Platform, but as the name implies, what we really like are platforms – how compute, storage, networking, and systems software are brought together to create a platform on which to build applications. …
All Content Copyright The Next Platform