One On One With Jensen Huang: Nvidia, The Platform Company
While a lot of ideas are ancient, some are relatively new and can come from only a modern context. …
AI is too hard for most enterprises to adopt, just like HPC was and continues to be. …
For about a decade, Intel has sold GPUs, in recent years as part of its integrated CPU-GPU devices used in client machines and entry servers. …
Back in April, when we were talking with Nvidia co-founder and chief executive officer Jensen Huang about the datacenter being the new unit of compute, we explained that we had always been disappointed that Nvidia never brought its “Denver” hybrid Arm CPU and Nvidia GPU, previewed way back in January 2011, to market, and that we really wanted Nvidia to redefine what a CPU is by breaking its memory and I/O truly free from its compute. …
The hardest job at any chip designer that doesn’t actually own its own foundry – and maybe even those that do – is figuring out what wafer start commitment level to make for a new compute engine in the datacenter. …
The term “general purpose” with regard to compute is an evolving one. …
Hardware accelerated databases are not new things. More than twenty years ago, Netezza was founded and created a hybrid hardware architecture that ran PostgreSQL on a big, wonking NUMA server running Linux and accelerated certain functions with adjunct hybrid CPU-FPGA server blades that also stored the data. …
When you have 54.2 billion transistors to play with, you can pack a lot of different functionality into a computing device, and this is precisely what Nvidia has done with vigor and enthusiasm with the new “Ampere” GA100 GPU aimed at acceleration in the datacenter. …
As the name of this publication suggests, we are system thinkers and we like to watch the evolution of a collection of tools into a platform. …
It is sometimes hard to remember way back in 2008, when Nvidia first took a stab at GPU compute in the datacenter with the original Tesla GPU accelerators and a very rudimentary CUDA programming environment for offloading parallel algorithms from CPUs to GPUs. …
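For readers who never touched that early toolchain, the offload model CUDA established is simple enough to show in a few lines. The sketch below is illustrative only, not an Nvidia sample: the host (CPU) allocates memory, copies data to the GPU, launches a kernel across many threads, and copies the result back.

```cuda
// Minimal sketch of the CPU-to-GPU offload model CUDA introduced.
// Illustrative example: scales an array on the GPU.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread scales one element of the array.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *host = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; i++) host[i] = 1.0f;

    // Allocate GPU memory and copy the input over the PCI-Express bus.
    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    // Copy the result back to the host and check it.
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("host[0] = %f\n", host[0]);  // expect 2.0

    cudaFree(dev);
    free(host);
    return 0;
}
```

The explicit copy-launch-copy pattern is the point: in that early model, the GPU was purely an adjunct device, and everything Nvidia has layered on since has been about making that boundary less visible to the programmer.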