With Blackwell GPUs, AI Gets Cheaper And Easier, Competing With Nvidia Gets Harder
If you want to take on Nvidia on its home turf of AI processing, then you had better bring more than your A game. …
It is a pity that we can’t make silicon wafers any larger than 300 millimeters in diameter. …
If you handle hundreds of trillions of AI model executions per day, and that figure is going to grow by one or two orders of magnitude as GenAI goes mainstream, you are going to need GPUs. …
Note: This story augments and corrects information that originally appeared in Half Eos’d: Even Nvidia Can’t Get Enough H100s For Its Supercomputers, which was published on February 15. …
It is a strange time in the generative AI revolution, with things changing on so many vectors so quickly it is hard to figure out what all of this hardware and software and people-hours costs and what it might be worth when it comes to transforming, well, just about everything. …
It is beginning to look like Dell Technologies and Hewlett Packard Enterprise, the world’s two biggest original equipment manufacturers, are finally going to start benefiting from the generative AI wave, mainly because they are at last getting enough allocations of GPUs from Nvidia and AMD that they can start addressing the needs of customers who don’t happen to be among the hyperscalers and largest cloud builders. …
There is more than one way to get to a large language model with over 1 trillion parameters that can do lots of different things and that enterprises can use to create AI training and inference infrastructure to extend and enrich their thousands of applications. …
Here is a history question for you: How many IT suppliers who do a reasonable portion of their business in the commercial IT sector – and a lot of that in the datacenter – have ever broken through the $100 billion barrier? …
Riding high on the AI hype cycle, Lambda – formerly known as Lambda Labs and well known to readers of The Next Platform – has received a $320 million cash infusion to expand its GPU cloud to support training clusters spanning thousands of Nvidia’s top-specced accelerators. …
Note: There is a story called A Tale Of Two Nvidia Eos Supercomputers that augments and corrects information that originally appeared in this story as it was published on February 15. …
All Content Copyright The Next Platform