HPC

To Exascale And (Maybe) Beyond!

The difference between “high performance computing” in the general way that many thousands of organizations run traditional simulation and modeling applications and the kind of exascale computing that is only now becoming a little more commonplace is like the difference between a single, two-door coupe that goes 65 miles

AI

Japan Gets An LLM Compliments Of Fujitsu And RIKEN

Very few organizations have enough iron to train a large language model in a reasonably short amount of time, and that is why most will be grabbing pre-trained models and then retraining the parameters in the models with much smaller datasets that are important to them. After that, they need

Connect

Greasing The Skids To Move AI From InfiniBand To Ethernet

Just about everybody, including Nvidia, thinks that in the long run, most people running most AI training and inference workloads at any appreciable scale – hundreds to millions of datacenter devices – will want a cheaper alternative for networking AI accelerators than InfiniBand. While Nvidia has argued that InfiniBand only

AI

Red Hat Saddles Up For The Wide Open GenAI Horizons

A theme snaking its way through conversations these days about generative AI is the need for open source models, open platforms, and industry standards as ways to make the emerging technology more accessible and widely adopted by enterprises. There is a growing number of open source models – or models

Cloud

AI Accelerates Cloud Revenues As Well As Cloud Investments

Three years ago, thanks in part to competitive pressures as Microsoft Azure, Google Cloud, and others started giving Amazon Web Services a run for the cloud money, the growth rate in quarterly spending on cloud services was slowing. In the last few quarters, the rate of spending has begun accelerating


More Analysis

Edge

AI At The Edge Is Different From AI In The Datacenter

Today’s pace of business requires companies to find faster ways to serve customers, gather actionable insights, increase operational efficiency, and reduce costs. Edge-to-cloud solutions running AI workloads at the edge help address this demand. Placing compute power at the network edge, close to the data creation point, makes a vital

Compute

AMD Firing On All Compute Engine Cylinders

A few years ago, it was hard to imagine how AMD would have survived without re-entering the datacenter with its CPU and GPU compute engines. And now, it is hard to imagine how the chip maker could have possibly thrived without a revitalized GPU compute engine business. Intel knows a

AI

Building The Power Grid Of Tomorrow

The energy sector is undergoing a monumental shift as the power grid struggles to accommodate growing demand and the complexity of modern energy systems. A major driver of this change is the increasing use of distributed renewable energy sources such as wind, solar, and geothermal. These have very different location

Edge

The Cutting Edge In Power Grid Management

Commissioned: Thanks to recent technological advancements, there are many different sources of electricity available that can help organizations address our growing demand for energy from power-hungry devices such as electric vehicles while reducing their reliance on fossil fuels like petroleum, coal, and gas. It’s a diversity that also presents