This story has been temporarily removed. If you want to learn more about Inspur’s machine learning hardware, check it out here.