Generalizing a Hardware, Software Platform for Industrial AI

Industrial companies have replaced people with machines, systems analysts with simulations, and now the simulations themselves could be outpaced by machine learning—albeit with a human in the loop, at the beginning at least.

The new holy grail of machine learning and deep learning, as with almost any other emerging technology set, is to mask enough of the complexity to make it broadly applicable without losing the performance and other features that can be retained by taking a low-level approach. If this kind of deep generalization can happen, a new mode of thinking about how data is used in research and enterprise can take shape: one based on a far richer, more capable analytical approach than traditional statistical insights provide. The problem, of course, is that hiding the complexity while keeping the functionality is not simple, from both a hardware and an algorithmic perspective.

There are markets with complex problems and deep domain expertise but little experience with deep learning frameworks. Among these are industrial use cases, from optimization and control of physical devices to complex problems in large-scale logistics. These are areas where simulations have traditionally been key to optimizing and controlling machines or logistics systems, but where an intelligent approach, one that learns through experience via data and domain expert input, can add value.

This specific set of use cases is where machine learning startup Bonsai is putting its research into practice, and it seems to be gaining traction, with $13.6 million raised so far and Siemens as both an investor and end user. The company’s co-founder and CEO, Mark Hammond, tells The Next Platform that although early customers currently get quite a bit of one-on-one attention when moving their optimization and control problems onto the company’s platform, the goal is to bring simulation data for physical and logistics problems directly to the platform: a custom-built compiler whittles programs written in the company’s Inkling language down to TensorFlow, enabling true self-service machine learning and deep learning for non-experts.

Hammond, who has a background in computer science and neuroscience that he put into practice at the startup Numenta and at Microsoft, says his company wants to do for machine learning what databases did for data. “In the same way databases gave users a new language like SQL to program the intent for many types of business questions, with the ability to specify structures and leave low-level management to the technology, we want to do the same for AI.”

Bonsai is working with one of its early access customers, Siemens, whose motion control group has many industrial robotics and manufacturing systems in need of intelligent control. These systems are already simulated, which provides the main touch point for machine learning training and inference, along with input from the robotics and systems experts at Siemens who understand the machines. “In situations like this, we can leverage traditional things like computer vision but also use deep reinforcement learning techniques to learn behaviors they care about based on data and domain expertise,” Hammond explains. “When using our system, it’s less about building a giant dataset to learn from and more about how to programmatically specify the key concepts needed to teach the system using a simulated environment, which is then branched out into the physical system.”

Bonsai is highlighting emerging use cases in industrial areas that might not sound like a traditional fit for something based on TensorFlow at its core. For instance, HVAC companies have complex simulations for systems in large buildings, which include CFD elements and other factors, such as the movement of people through the building. The static data from these simulations, coupled with the real-time environment, can lead to HVAC systems optimized by machine learning, Hammond explains. The same holds for transportation and logistics companies, whose simulations can be improved upon over time with a reinforcement learning approach.
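The idea of learning control behaviors from a simulator rather than a giant dataset can be seen in miniature with tabular Q-learning. Everything in this sketch is an illustrative assumption, not Bonsai’s platform or API: a toy “room” whose temperature drifts upward unless the controller cools it, a five-bucket state space, and a reward for holding the target temperature band.

```python
import random

STATES = range(5)   # discretized temperature buckets, 0 = cold, 4 = hot
ACTIONS = (0, 1)    # 0 = idle, 1 = cool
TARGET = 2          # the bucket we want the room to stay in


def step(state, action):
    """One tick of the toy simulator: heat drifts in, cooling pushes back."""
    drift = 1 if random.random() < 0.7 else 0
    state = min(4, max(0, state + drift - action))
    reward = 1.0 if state == TARGET else -abs(state - TARGET)
    return state, reward


def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Learn a control policy purely from interaction with the simulator."""
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = random.choice(list(STATES))
        for _ in range(20):
            # Epsilon-greedy exploration, then the standard Q-learning update.
            a = random.choice(ACTIONS) if random.random() < epsilon \
                else max(ACTIONS, key=lambda a: q[(s, a)])
            s2, r = step(s, a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q
```

After training, the greedy policy cools when the room is hot and idles when it is cold; the “branching out” Hammond describes would then transfer such a simulator-trained policy onto the physical system.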

Even though these use cases make sense, there is an argument to be made that this is too heavy a tool for the job. In short, even if Bonsai wants to abstract away some of the complexity of a learning system, these companies still face significant investments of time and, potentially, hardware. So how does an AI startup go about convincing industrial companies, which already optimize via simulations and in-house domain experts, that further optimization is worth the investment in time and in learning something new?

Hammond says that getting a foot in the door to talk to large industrial companies about implementing machine learning is the easy part; making them see how it fits into their larger workflows is the challenge. The even taller hurdle is connecting the technology to a concrete ROI, which is why Bonsai works with Nvidia and IBM on leasing Minsky boxes for proofs of concept at this early stage. He says training times can vary by customer, depending on the scale of the problems they are trying to solve, and indeed, getting these companies on board with the hardware setups required to do deep learning at scale on-prem, while also fitting a new element into their workflows, can sound daunting.

“The training times, even on good hardware, can be lengthy, and that is an area of concern and focus. We have a cloud-based environment we can leverage for training that can connect with customer simulators, whether local or hosted. Even though we could use cloud, many of these users still see a lot of advantage in doing this on-prem, so we work with Minsky and DGX-1 appliances for those who want to train on powerful machines. Then, when it comes time for inference, we don’t need to worry about having that kind of computational horsepower—our platform can manage that transparently; it becomes a streaming analytics platform.”
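Hammond’s point about inference needing far less horsepower than training can be illustrated with a deliberately tiny example. The model, data, and function names below are hypothetical, not Bonsai’s: training grinds through thousands of simulated samples on the heavy iron, but the artifact it produces is a small set of weights, and scoring one incoming event in a streaming setting is just a few multiply-adds.

```python
import random


def train_weights(samples=10000, lr=0.01):
    """Compute-heavy phase: fit w for y = w0*x0 + w1*x1 by SGD on
    simulated data whose true weights are [2, -3]."""
    w = [0.0, 0.0]
    for _ in range(samples):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        y = 2 * x[0] - 3 * x[1]           # the "simulator" ground truth
        err = w[0] * x[0] + w[1] * x[1] - y
        w[0] -= lr * err * x[0]           # gradient step per sample
        w[1] -= lr * err * x[1]
    return w                              # the entire deployable artifact


def score(w, event):
    """Compute-light phase: streaming inference is one dot product."""
    return w[0] * event[0] + w[1] * event[1]
```

The asymmetry is the point: `train_weights` touches every sample and would scale with model and simulator complexity, while `score` runs per event in constant time, which is why serving can live in a modest streaming analytics layer rather than on a DGX-1.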

As we start to see more startups adding abstraction layers on top of deep learning frameworks, we have to pay close attention to what is lost in terms of results and what is gained in terms of ROI over traditional analytics or simulations. At this stage, it’s still an uphill climb for deep learning startups to get production engagements at big companies inside critical parts of their workflows because the space is still developing.

The bigger outstanding question, as Bonsai and others set about putting deep learning within closer reach of companies that don’t fit the traditional image and video recognition mold of most deep learning at scale, is how to generalize very specific problems without getting into the weeds of low-level coding and training for each one. In short, with deep learning platforms and use cases still largely in the development stage at major companies, how can the ultra-specific be generalized across both hardware and software?

Bonsai is taking aim at this question with a custom-developed compiler and the Inkling language, but what is needed in the meantime are more industrial use cases in optimization and physical system control that benchmark hand-tuned, bare-metal deep learning frameworks against traditional ways of solving the same problems.

In short, AI boils down to ROI—there’s a lot of noise about one, and not nearly enough about the other.
