
Skepticism About AI Use Does Not Yet Negate The Appetite For AI Hardware
People – and when we say “people” we mean “Wall Street” as well as individual investors – sometimes have unreasonable expectations. …
The minute that search engine giant Google decided to become a cloud – and several years later, when Google realized that companies were not ready to buy full-on platform services that masked the underlying hardware, but instead wanted lower-level infrastructure services that gave them more optionality as well as more responsibility – it was inevitable that Google Cloud would have to buy compute engines from Intel, AMD, and Nvidia for its server fleet. …
It has been more than a decade since Google figured out that it needed to control its own hardware fate when it came to the tensor processing that was going to be required to support machine learning algorithms. …
If Nvidia and AMD are licking their lips thinking about all of the GPUs they can sell to the hyperscalers and cloud builders to support their huge aspirations in generative AI – particularly when it comes to the OpenAI GPT large language model that is the centerpiece of all of the company’s future software and services – they had better think again. …
A year ago, at its Google I/O 2022 event, Google revealed to the world that it had eight pods of TPUv4 accelerators, with a combined 32,768 of its fourth generation, homegrown matrix math accelerators, running in a machine learning hub located in its Mayes County, Oklahoma datacenter. …
It seems like we have been talking about Google’s TPUv4 machine learning accelerators for a long time, and that is because we have been. …
In an ideal platform cloud, you would not know or care what the underlying hardware was and how it was composed to run your HPC – and now AI – applications. …
The inception of Google’s effort to build its own AI chips is quite well known by now, but in the interests of review, we’ll note that as early as 2013 the company envisioned that machine learning could consume the majority of its compute time. …
Carey Kloss has been intimately involved with the rise of AI hardware over the last several years, most notably through his work building the first Nervana compute engine, which Intel acquired and is rolling into two separate products: one chip for training, another for inference. …
Training deep neural networks is one of the more computationally intensive applications running in datacenters today. …
All Content Copyright The Next Platform