Ampere Computing Buys An AI Inference Performance Leap

Machine learning inference models have been running on X86 server processors since the very beginning of the latest – and by far the most successful – AI revolution, and for more than a decade the techies at the hyperscalers, cloud builders, and semiconductor manufacturers who know both hardware and software down to the minutest detail have been able to tune the software, jack up the hardware, and retune again.

The hyperscalers and cloud builders have been setting the pace for innovation in the server arena for the past decade or so, particularly and publicly since Facebook set up the Open Compute Project in April 2011. The pace ramped up when Microsoft joined in early 2014 and essentially created a whole new server innovation stream that was distinct from – and largely incompatible with – the designs put out by Facebook.