Machine learning, arguably the most interesting and successful form of artificial intelligence, only worked because of the confluence of enormous amounts of data to train models and tremendous amounts of compute to chew on that data with many-layered statistical algorithms.
These days, having lots of data and plenty of compute is a foregone conclusion. And not just for the hyperscaler and cloud builder innovators that have driven the machine learning revolution and embedded inference up and down their application stacks to great economic reward. Large enterprises and high performance computing centers are also looking at ways of embedding machine learning into their existing applications or creating whole new ones to drive their businesses and research, respectively.
The frameworks and underlying algorithms of machine learning techniques are changing quickly, however, and that makes the planning and budgeting of the hardware and software to support these AI workloads a tricky task. Hitting such a moving target is difficult, and IT organizations are right to be skeptical about the prospects of the widening field of devices meant to boost the performance of machine learning training and inference workloads.
History is littered with computing architectures that were chasing the next new platform and couldn’t quite catch it. Incumbents extend their existing architectures to become the safe bets, while others create specialized ASICs for compute or networking, or new kinds of storage devices, to boost the performance of that next platform. Sometimes the incumbents adapt enough to win, sometimes the upstarts carve out a new market and earn their niches.
It is tough to call the winners, especially with a workload as nascent as machine learning. Absolute compute performance and efficiency may not win out over the ability to deploy machine learning on many different kinds of systems, both inside the datacenter and out on the edge, and alongside other applications residing on the same systems.
We want to ask real questions to get to the heart of these issues on May 9th at The Glasshouse in San Jose, and we want you to be there to hear the answers. No PowerPoints, no vendor material, no lurching between too high-level and too low-level. Just the in-depth conversations you're used to reading, this time delivered live and with ample time for questions and conversation.
What we can be certain of is that there is still much uncertainty about the future of machine learning workloads. What happens when deep neural networks (DNNs) are replaced for certain kinds of machine learning by generative adversarial networks (GANs)?
How will the hardware have to change?
What kind of appetite will IT organizations have for specialized rather than general hardware?
What effect will this have on the systems software stack?
Will companies be willing to pay a premium for exotic AI systems that might have a short lifetime or limited scope?
What is the investment outlook for companies in the nascent AI chip startup space?
With little in the way of roadmaps, how will chip startups survive and thrive, if at all?
What are some of the novel architectures and post-Moore’s approaches that have more promise than others?
On the question of paying a premium, the evidence from the HPC space suggests not, except for the largest enterprises where systems, more than any other factor, determine the success or failure of the business. For that matter, will HPC and AI architectures, which have been converging for the past few years, diverge?
Come to The Next AI Platform event, featuring live one-on-one interviews with the folks leading the charge to new infrastructure, and find out.