HPC

Nvidia Makes Arm A Peer To X86 And Power For GPU Acceleration

Creating the Tesla GPU compute platform has taken Nvidia the better part of a decade and a half, and it has culminated in a software stack composed of various HPC and AI frameworks, the CUDA parallel programming environment, compilers from Nvidia’s PGI division with their OpenACC extensions as well as open source GCC compilers, and various other tools that together account for tens of millions of lines of code and tens of thousands of individual APIs.
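To give a sense of what that stack targets, here is a minimal sketch of directive-based GPU offload in the OpenACC style mentioned above: a SAXPY loop annotated with a single pragma that an OpenACC-capable compiler (PGI/NVIDIA or a GCC built with OpenACC support) can map onto a Tesla GPU regardless of whether the host CPU is X86, Power, or Arm. The code is illustrative only, not taken from Nvidia's own stack.

```c
/* Minimal OpenACC sketch: offload a SAXPY loop to an attached GPU.
   The pragma is standard OpenACC; without an OpenACC compiler the
   directive is simply ignored and the loop runs on the CPU. */
#include <stdio.h>
#include <stdlib.h>

void saxpy(int n, float a, const float *x, float *y)
{
    /* Ask the compiler to offload this loop to the accelerator,
       copying x in and y in and out of device memory. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;
    float *x = malloc(n * sizeof(float));
    float *y = malloc(n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(n, 3.0f, x, y);

    printf("y[0] = %f\n", y[0]);  /* expect 5.0 */
    free(x);
    free(y);
    return 0;
}
```

The same source compiles for an X86, Power, or Arm host; only the compiler back end and runtime need to know which CPU sits next to the GPU, which is the point of making Arm a peer in the stack.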

AI

Turning The CPU-GPU Hybrid System On Its Head

Sales of various kinds of high performance computing – not just technical simulation and modeling applications, but also cryptocurrency mining, massively multiplayer gaming, video rendering, visualization, machine learning, and data analytics – run in little boom-bust cycles that make it difficult for the suppliers to this market to project demand very far ahead.