
Testing Out HPC On Google’s TPU Matrix Engines
In an ideal platform cloud, you would not know or care what the underlying hardware was or how it was composed to run your HPC – and now AI – applications. …
Frustrated by the limitations of Ethernet, Google has taken the best ideas from InfiniBand and Cray’s “Aries” interconnect and created a new distributed switching architecture called Aquila and a new GNet protocol stack that delivers the kind of consistent, low latency that the search engine giant has been seeking for decades. …
Google and VMware have announced a new element to their partnership that the two companies said will simplify cloud migrations, provide more flexibility, and help companies modernize their enterprise applications with a minimum amount of pain. …
When the hyperscalers, the major datacenter compute engine suppliers, and the three remaining foundries with advanced node manufacturing capabilities launch a standard together on Day One, it is an unusual, significant, and pleasant surprise. …
Thomas Kurian’s arrival at Google Cloud in early 2019, after more than 22 years at Oracle, marked a significant shift in Google’s thinking, putting an emphasis on expanding enterprise use of its cloud as the key to making up ground on Amazon Web Services (AWS) and Microsoft Azure in the booming global cloud market. …
Search engine and cloud computing juggernaut Google is hosting its Google Cloud Next ’21 conference this week, and one of the more interesting things that the company unveiled is several layers of software that make its Spanner globally distributed relational database look and feel like the popular open source PostgreSQL relational database. …
As Google’s batch sizes for AI training continue to skyrocket, with some batch sizes ranging from over 100,000 to one million, the company’s research arm is looking at ways to improve everything from efficiency and scalability to privacy for those whose data is used in large-scale training runs. …
Famed computer architect, professor, author, and distinguished engineer at Google, David Patterson, wants to set the record straight on common misconceptions about carbon emissions and datacenter efficiency for large-scale AI training. …
In a world where Moore’s Law is slowing and hardware has to be increasingly co-designed with the system software stack and the applications that run above it, the matrix of possible combinations of hardware is getting wider and deeper. …
The things we like best about watching the high end of the IT sector are seeing new technologies come out that have the potential to change the IT landscape and then seeing some market data that proves a technology either did or did not foment the expected change. …