With Blackwell GPUs, AI Gets Cheaper And Easier, Competing With Nvidia Gets Harder
If you want to take on Nvidia on its home turf of AI processing, then you had better bring more than your A game. …
It is a strange time in the generative AI revolution, with things changing along so many vectors so quickly that it is hard to figure out what all of this hardware, software, and people-hours cost, and what it might be worth when it comes to transforming, well, just about everything. …
There is more than one way to get to a large language model with over 1 trillion parameters that can do lots of different things, and that enterprises can use to build AI training and inference infrastructure to extend and enrich their thousands of applications. …
The only way to accurately predict the future is to live it, but just the same, prognostication is one of the things that we humans love to do. …
Since the advent of distributed computing, there has been a tension between the tight coherency of memory and compute within a node, the base unit of compute, and the looser coherency over the network that spans those nodes. …
Because they are in the front of the line for acquiring Nvidia datacenter GPUs, the hyperscalers and cloud builders are going to be the ones who benefit mightily from shortages of matrix math engines that can train AI models and run inference against them. …
We said it from the beginning: There is no way that Meta Platforms, the originator of the Open Compute Project, wanted to buy a complete supercomputer system from Nvidia in order to advance its AI research and move newer large language models and recommendation engines into production. …
It takes big money as well as big ideas to compete in the generative AI space. …
UPDATED: It is funny what courses were the most fun and most useful when we look back at college. …
If you want to get the attention of server makers and compute engine providers, especially if you are going to be building GPU-laden clusters with shiny new gear to drive AI training and possibly AI inference for large language models and recommendation engines, the first thing you need is $1 billion. …
All Content Copyright The Next Platform