
The Separate But Equal AI Realms Of China And The US
China has lots of coal but it does not have a lot of GPUs or other kinds of tensor and vector math accelerators appropriate for HPC and AI. …
Last year, amid all the talk of the “Blackwell” datacenter GPUs that were launched at last year’s GPU Technical Conference, Nvidia also introduced the idea of Nvidia Inference Microservices, or NIMs, which are prepackaged enterprise-grade generative AI software stacks that companies can use as virtual copilots to add custom AI software to their own applications. …
The semiconductor manufacturing business is absolutely immense. To give the numbers some perspective, in 2024, chip makers generated revenues that were about three quarters of the size of the US defense budget and about two-thirds the size of the social services budget allocated by Congress. …
The AI boom has been very, very good to Taiwan Semiconductor Manufacturing Co, which is positioned to do well if Nvidia continues with its hegemony over AI training and inference or if the rebel alliance forms behind AMD or if the hyperscalers and cloud builders dedicate a substantial portion of their capital budgets to etching and packaging homegrown compute engines. …
As part of the pre-briefings ahead of the Google Cloud Next 2025 conference last week and then during the keynote address, the top brass at Google kept comparing a pod of “Ironwood” TPU v7p systems to the “El Capitan” supercomputer at Lawrence Livermore National Laboratory. …
Making a graphics card for gamers is one thing, but manufacturing a rackscale supercomputer with over 600,000 components that burns 120 kilowatts of power, that has over 5,000 copper cables for an all-to-all interconnect mesh for 72 dual-chip compute engines, and that weighs over 3,000 pounds is another thing entirely. …
If you want to be a leader in supplying AI models and AI applications, as well as the AI infrastructure to run them, to the world, it is also helpful to have a business that needs a lot of AI and that can underwrite the development of homegrown infrastructure that can be sold side-by-side with the industry standard. …
Compute engine makers can do all they want to bring the performance of their devices on par or even reasonably close to that of Nvidia’s various GPU accelerators, but until they have something akin to the NVLink and NVSwitch memory fabric that Nvidia uses to leverage the performance of many GPUs, with bandwidths that dwarf those of PCI-Express switches and latencies far below those of Ethernet interconnects, they can never catch up. …
It is funny how companies can find money – lots of money – when they think IT infrastructure spending can save them money, make them money, or do both at the same time. …
Spending on AI systems in 2024 utterly blew past the expectations of the major market researchers and those who, like us, dabble in metrics. …
All Content Copyright The Next Platform