AI

Nvidia Is The Only AI Model Maker That Can Afford To Give It Away

An alien flying in from space aboard a comet would look down on Earth and see that there is this highly influential and famous software company called Nvidia that just so happens to have a massively complex and ridiculously profitable hardware business running a collection of proprietary and open source software that about three quarters of its approximately 40,000 employees create.

Compute

How Sustainable Is This Crazy Server Spending?

We can all talk until we are blue in the face about how weird it is for so much money to be spent on servers during the GenAI boom, but after reviewing the latest market report from IDC – which is once again, albeit sporadically, giving out some stats to the public – we thought that to feel the full impact of this change, we should draw you a picture of the past 26 years of server revenues by quarter so you can take it all in.

Store

What Do You Do When You Want GPFS On The Cloud?

While there are a lot of different file system and object storage options available for HPC and AI customers, many AI organizations and a lot of traditional HPC simulation and modeling centers choose either the open source Lustre parallel file system or the modern variants of IBM’s General Parallel File System (GPFS), known previously as Spectrum Scale and now known as IBM Storage Scale, as the storage underpinning of their applications.

HPC

Driving HPC Performance Up Is Easier Than Keeping The Spending Constant

We are still mulling over all of the new HPC-AI supercomputer systems that were announced in recent months before and during the SC25 supercomputing conference in St Louis, particularly how the slew of new machines announced by the HPC national labs will be advancing not just the state of the art, but also pushing down the cost of the FP64 floating point operations that still drive a lot of HPC simulation and modeling work.