Changes Go Far Beyond Just AI, Machine Learning

SPONSORED SambaNova Systems is a technology startup founded in 2017 by a group of far-sighted engineers and data scientists. They saw that current approaches to AI and machine learning were beginning to run out of steam, and that an entirely new architecture would be necessary to make AI accessible to everyone and to deliver the scale, performance, accuracy and ease of use needed for future applications.

AI, and machine learning in particular, has grown over the past decade to become a key tool for processing and making sense of large and complex data sets. This trend is set to continue: IDC forecasts that the overall AI software market will approach $240 billion in revenue in 2024, up from $156 billion in 2020.

But AI is no longer just for supercomputing. The data collected by organizations has grown into large and complex data sets, and machine learning is now being incorporated into all manner of applications, from natural language processing (NLP), high-resolution computer vision, recommendation systems and high performance computing (HPC) to everyday business processes.

However, the success of this approach has pushed machine learning models to become ever larger, partly in order to increase accuracy, and operating them demands more and more compute power. Currently, the state of the art is represented by systems that pair hugely powerful multi-core CPUs with GPUs, the latter often running to thousands of cores optimized for maximum floating-point throughput. If this trend were to continue, it could hamper the development of more advanced machine learning models and put advanced AI out of the reach of all but the largest organizations.

The limitations of existing architectures when it comes to handling AI are something that SambaNova’s founders were likely more aware of than most in the IT industry. Kunle Olukotun, the company’s co-founder and Chief Technologist, carried out pioneering work on multi-core processor architectures as a professor at Stanford University, and helped found a company called Afara Websystems to bring the technology to market.

Afara was acquired by Sun Microsystems, where its technology laid the groundwork for Sun’s ‘Niagara’ SPARC T1 family of processors. Olukotun returned to Stanford, where he started looking at how to develop software that could make full use of the capabilities of multi-core processors.

This work led to the concept of domain-specific languages, an idea now widely adopted for machine learning tasks, and it eventually fed into the ideas that SambaNova was formed to commercialize: ways of developing hardware and software that make AI technology much more accessible.

SambaNova’s co-founder and CEO Rodrigo Liang also worked for Afara and stayed on after the Sun acquisition until 2017 to oversee SPARC processor development. His combination of business and technical experience made him the logical choice for CEO when SambaNova was formed to develop a platform designed from the ground up for machine learning and analytics.

The third co-founder Christopher Ré also works at Stanford University, where he is an associate professor in the Department of Computer Science affiliated with the Statistical Machine Learning Group and Stanford AI Lab. Based on his research into machine learning systems, Ré co-founded Lattice.io, a data mining and machine learning startup that was acquired by Apple in 2017. Subsequently, he helped to found SambaNova Systems, based in part on his work on accelerating machine learning.

The founders’ backgrounds and expertise in hardware, software and chip design, scale-out architecture and machine learning enabled them to take a step back from familiar existing compute architectures. Starting from scratch, they have built an integrated system of software and hardware focused on the data processing needs of current and emerging applications.

Their proposition certainly seems to have impressed investors. When SambaNova emerged from stealth mode in 2018, it already had $56 million in Series A funding led by Walden International and Google Ventures, with participation from Redline Capital and Atlantic Bridge Ventures.

This was followed in 2019 by a Series B funding round of $150 million, this time led by Intel Capital, with additional participation from Google Ventures, Walden International, Atlantic Bridge Ventures, and Redline Capital. A Series C round followed in 2020, providing the firm with $250 million, led by funds and accounts managed by BlackRock, with participation from existing investors.

SambaNova’s technology is largely based on research undertaken by Olukotun and Ré, which focused on workflow, and specifically the flow of data, rather than on the iterative instructions seen in traditional processors.

Machine learning applications are largely developed using high-level frameworks like PyTorch or TensorFlow. These frameworks generate a dataflow graph, with the nodes of the graph made up of machine learning operators such as convolution and matrix multiplication.
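To give a rough sense of what such a graph looks like, the sketch below uses PyTorch’s built-in torch.fx tracer on a tiny, made-up model. The model and its layer sizes are purely illustrative and have nothing to do with SambaNova’s own tooling.

```python
import torch
import torch.fx
import torch.nn as nn

# A minimal, hypothetical model of the kind a framework like PyTorch expresses.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)   # convolution operator
        self.fc = nn.Linear(8 * 30 * 30, 10)          # matrix-multiply operator

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = x.flatten(1)
        return self.fc(x)

# torch.fx traces the model into a dataflow graph whose nodes are the
# machine learning operators mentioned above (conv, relu, flatten, linear).
graph_module = torch.fx.symbolic_trace(TinyNet())
print(graph_module.graph)  # prints the operator nodes and their data dependencies
```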

The research found that these operators can be described in terms of parallel patterns that represent parallel computation on dense and sparse data collections, along with corresponding memory access patterns.
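As a rough illustration of what a “parallel pattern” means here, the snippet below expresses a matrix multiply, one of the operators mentioned above, as a map over output positions combined with a reduction over the shared dimension. It is a plain NumPy/Python sketch of the idea, not SambaNova’s actual pattern vocabulary.

```python
import numpy as np

def matmul_as_patterns(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Illustrative only: a matrix multiply written as nested parallel patterns."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    return np.array([
        [
            # reduce (sum) over the shared dimension k ...
            sum(A[i, p] * B[p, j] for p in range(k))
            for j in range(m)   # ... inside a map over output columns ...
        ]
        for i in range(n)       # ... inside a map over output rows
    ])

A = np.random.rand(4, 3)
B = np.random.rand(3, 5)
assert np.allclose(matmul_as_patterns(A, B), A @ B)
```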

To process these parallel patterns efficiently, the SambaNova team developed a chip that it calls the Reconfigurable Dataflow Unit (RDU). This is described as a next-generation processor composed of a tiled array of reconfigurable compute and memory units, linked by a communication fabric that can be programmed to represent the dataflow of the parallel patterns.

This enables high utilization of the underlying hardware while allowing a diverse set of models to be implemented easily using any framework of choice, simply by reconfiguring the fabric and the compute and memory units to match.

But with the complete SambaNova DataScale platform, which is offered both as a service and as an on-premises solution, the software is an equally important piece of the puzzle. SambaFlow is a complete software stack that takes input from standard machine learning frameworks such as PyTorch and TensorFlow, and largely automates the compilation, optimization and execution of models across all the RDUs in the system.

This approach is already showing promise for efficiently processing complex machine learning problems, including 100-billion-parameter models, and it scales easily to handle terabytes of training data or to process multiple models simultaneously, while still using the same programming model as would run on a single RDU.

There is reason to believe that language models are growing by a factor of 10 every year, and SambaNova even claims that its preliminary work and the results achieved so far demonstrate that running a trillion-parameter model is quite conceivable. Such headroom is needed, according to the firm, as trends for richer context and larger embeddings, in natural language processing in particular, are set to push infrastructure requirements beyond current limits.

This dataflow approach to processing workloads also has broader applicability beyond machine learning, according to SambaNova, since parallel patterns can also represent the SQL operators used for tasks such as data preparation and data analytics.
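To illustrate the point, the toy Python snippet below expresses a simple SQL-style query (a WHERE filter, a GROUP BY, and a SUM aggregation) in the same filter/group/reduce pattern vocabulary. The table, column names and query are invented for the example and are not drawn from SambaNova’s software.

```python
from collections import defaultdict
from functools import reduce

# Hypothetical input table.
rows = [
    {"region": "EMEA", "sales": 120},
    {"region": "AMER", "sales": 200},
    {"region": "EMEA", "sales": 80},
]

# SQL: SELECT region, SUM(sales) FROM rows WHERE sales > 90 GROUP BY region
filtered = filter(lambda r: r["sales"] > 90, rows)        # WHERE    -> filter pattern
grouped = defaultdict(list)
for r in filtered:                                        # GROUP BY -> group-by pattern
    grouped[r["region"]].append(r["sales"])
totals = {region: reduce(lambda a, b: a + b, values)      # SUM      -> reduce pattern
          for region, values in grouped.items()}

print(totals)  # {'EMEA': 120, 'AMER': 200}
```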

One early SambaNova customer is Lawrence Livermore National Laboratory (LLNL), which has integrated a DataScale system into its Corona supercomputing cluster, used primarily for simulations of various physics phenomena.

According to Bronis de Supinski, Chief Technology Officer at LLNL, the DataScale platform is being used to explore a technique the scientists call cognitive simulation, whereby machine learning is used to accelerate the processing of portions of the simulations. He claims they are already seeing roughly a 5X improvement over a comparable GPU running the same models.

The work being pioneered at LLNL is likely, in future, to benefit numerous other industries that run physics simulations as part of their operations, such as oil and gas exploration, aircraft manufacturing and engineering.

In fact, machine learning looks set to play a greater part in almost all aspects of the computer industry in the future. Arti Garg, Head of Advanced AI Solutions & Technology at HPE, which counts SambaNova as a strategic partner, says we are on the precipice of much broader adoption. That evolution will mean AI impacts far more people than it currently does, and it will change expectations of what AI technologies are able to do.

As SambaNova’s Liang states, “We are at the cusp of a fairly large shift in the computer industry. It’s been driven by AI, but at a macro level, over the next 20-30 years, the change is going to be bigger than AI and machine learning.”

Sponsored by SambaNova
