Big Blue Can Still Catch The AI Wave If It Hurries
It has been two and a half decades since we have seen a rapidly expanding universe of a new kind of compute that rivals the current generative AI boom. …
As 2023 has progressed, the revenue growth projections at Arista Networks have inched up as its supply chain issues get resolved. …
Timing is a funny thing. The summer of 2006 when AMD bought GPU maker ATI Technologies for $5.6 billion and took on both Intel in CPUs and Nvidia in GPUs was the same summer when researchers first started figuring out how to offload single-precision floating point math operations from CPUs to Nvidia GPUs to try to accelerate HPC simulation and modeling workloads. …
Because they are in the front of the line for acquiring Nvidia datacenter GPUs, the hyperscalers and cloud builders are going to be the ones who benefit mightily from shortages of matrix math engines that can train AI models and run inference against them. …
If you had to sum up the second half of 2022 and the first half of 2023 from the perspective of the semiconductor industry, it would be that we made too many CPUs for PCs, smartphones, and servers and we didn’t make enough GPUs for the datacenter. …
When hyperscalers and cloud builders think about their infrastructure, they talk about megawatts and they think about the mix of serving and storage and the total capacity that is delivered in a megawatt of power. …
It would be hard to find something that is growing faster than the Nvidia datacenter business, but there is one contender: OpenAI. …
If there is one thing that is absolutely immune from inflationary curbs, and that to a certain degree is also contributing to inflationary pressures in the global economy, it is generative AI. …
We said it from the beginning: There is no way that Meta Platforms, the originator of the Open Compute Project, wanted to buy a complete supercomputer system from Nvidia in order to advance its AI research and move newer large language models and recommendation engines into production. …
If you have the entire corpus of the Internet scrubbed of nonsense plus whatever else you can scrounge up in whatever language all put into the right format so you can chew on that data one token at a time with trillions of parameters of interconnections between those tokens to build a large language model for generative AI applications, you have an enormous problem. …
All Content Copyright The Next Platform