This story has been temporarily removed. If you want to learn more about Inspur’s machine learning hardware, check it out here.