Chinese Researchers One Step Closer to Parallel Turing Machine

Parallel computing has become a bedrock of the HPC field, where applications are becoming increasingly complex and compute-intensive technologies such as data analytics, deep learning and artificial intelligence (AI) are rapidly emerging. Nvidia and AMD have driven the adoption of GPU accelerators in supercomputers and other high-end systems, Intel is addressing the space with its many-core Xeon Phi processors and coprocessors, and, as we’ve talked about at The Next Platform, other acceleration technologies such as field-programmable gate arrays (FPGAs) are pushing their way into the picture. Parallel computing is a booming field.

However, the future was not always so assured. The field of parallel computing saw strong growth from the 1960s into the early 1990s, a multi-decade period that coincided with the rise of research into AI technologies. That first era of significant progress in parallel computing waned in the 1990s, enough that Ken Kennedy, a proponent of the technology, gave a speech in 1994 questioning whether parallel computing was coming to an end. At the same time, sequential computing – based on the von Neumann architecture – continued to see tremendous success since the first machine based on the model, the EDVAC, debuted in 1946.

As noted above, since the mid-2000s, development of parallel computing has seen a strong rebirth. That said, a group of researchers from Brown University, the University of Delaware and Tsinghua University in Beijing believe that the challenges that tripped up parallel computing in the mid-1990s shouldn’t be ignored even as the field has rapidly expanded over the past decade. In a recent research paper, the computer engineers argued that the lessons of the past should be studied to ensure that research into parallel computing doesn’t falter again, and looked at what has helped make sequential computing the roaring success that it’s been.

“We should remember the low points of the field more than 20 years ago and review the lesson that has led to the question at that point whether ‘parallel computing will soon be relegated to the trash heap reserved for promising technologies that never quite make it,’” the researchers wrote in their study, noting Kennedy’s concerns. “Facing the new era of parallel computing, we should learn from the robust history of sequential computation in the past 60 years.”

An interesting aside – at the same time that parallel computing was hitting a wall in the 1990s, doubts also were surfacing about the viability of AI. Public funders such as DARPA and the National Science Foundation, as well as private backers, questioned the possibility of creating true AI. “The correlate fate of parallel computing and AI reveals an important fact —the tremendous computation demand of AI has inspired the advance of parallel computer architectures,” the researchers wrote. “A well known example is the impressive parallel machine products pursued by Thinking Machine Corporation in early 90s— its glorious (but short) history and its failure.”

The success of sequential computing was primarily based on the combination of the Turing machine model and the von Neumann architecture model, which, they wrote, “specifies an abstract machine architecture to efficiently support the Turing machine model.” Based on this, the authors outlined a proposed parallel Turing machine model that brings one of the key pillars of sequential computing systems – a simple, solid, easily understandable and broadly accepted programming model – to parallel systems.

“Lacking a solid yet intuitive parallel Turing machine model will continue to be a serious challenge in the future parallel computing,” they wrote in their study, titled “Parallel Turing Machine, a Model.” They noted that “our proposed parallel Turing machine (PTM) model can preserve, whenever possible, those attractive properties of sequential computation — understandability, predictability and determinism — under parallel computation.”

There had been previous attempts to bring a Turing machine model to parallel computing, though the researchers said that, in general, there has been relatively little interest in developing a strong parallel Turing machine. The work that has been done includes the bulk-synchronous parallelism (BSP) model, which aims to bridge hardware and software in parallel computing, and LogP, which focuses on the performance characteristics of parallel algorithms. However, both are built on the concept of threads, which is a key part of sequential computing but runs aground in parallel computing, they said. Other attempts in the 1970s and 1980s to marry parallel computing with the Turing machine model were hurt by a number of weaknesses, including insufficient support for open environments, in which “a computation may constantly interact with the environment external to the computation alongside with the data inside itself,” they wrote.

In their own parallel Turing machine (PTM) model, the researchers hope to preserve the key properties of sequential computing – understandability, predictability and determinism – in a parallel model. Those properties need to be captured in a simple program execution model (PXM) that isn’t based on threads. The PXM needs to serve as an efficient abstract architecture model and should support both synchronous and asynchronous parallel program execution in open environments, handling events generated inside the machine as well as those created outside the system. In addition, it needs a robust interface to support high-level programming models and language processing. To hit all these goals, the authors propose a “program execution model based on an asynchronous execution model — codelet model, which [is] rooted in the dataflow model. We employ the event-driven codelet model and the memory model.”

The authors say the PXM includes a codelet-based parallel state machine that can describe a program in the same manner as the finite state machine in a Turing machine. In addition, it includes shared address space memory and an associated memory consistency model, which is used for synchronization. While there could be any number of implementations of an abstract architecture – the authors propose their own – they all would use the same PXM.
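To make the codelet idea more concrete, here is a minimal sketch, in Python, of how an event-driven, dataflow-style codelet scheduler might look. It is only an illustration of the general technique the paper draws on, not the authors’ actual model or code, and every name in it (Codelet, signal, run and so on) is hypothetical.

from collections import deque

class Codelet:
    """A small, non-preemptive unit of work that fires only after all of
    its input events (dependencies) have arrived."""
    def __init__(self, name, action, deps=0):
        self.name = name
        self.action = action      # callable executed when the codelet fires
        self.remaining = deps     # unsatisfied input events
        self.successors = []      # codelets signaled when this one completes

    def signal(self, ready_queue):
        # Deliver one input event; enqueue the codelet once all have arrived.
        self.remaining -= 1
        if self.remaining == 0:
            ready_queue.append(self)

def run(entry_codelets):
    # Scheduler loop: fire ready codelets and propagate completion events
    # to their successors, rather than scheduling long-lived threads.
    ready = deque(entry_codelets)
    while ready:
        codelet = ready.popleft()
        codelet.action()
        for succ in codelet.successors:
            succ.signal(ready)

# Usage: a tiny dependency graph in which c fires only after a and b.
a = Codelet("a", lambda: print("a fired"))
b = Codelet("b", lambda: print("b fired"))
c = Codelet("c", lambda: print("c fired after a and b"), deps=2)
a.successors.append(c)
b.successors.append(c)
run([a, b])

The point of the sketch is simply that execution is driven by events flowing through a graph of codelets rather than by threads, which is the shift the researchers argue a parallel Turing machine model needs.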

The researchers are hoping that the model they describe will encourage others to develop models that bring the Turing machine to parallel computing, with the idea of meshing the key tenets that have made sequential computation so successful with the benefits that parallel computing brings to modern compute-intensive workloads. They also note work in related areas that could help drive the field of parallel computing, including the codelet execution model, which can be used to describe programs running in massively parallel systems, and the memory consistency model, which could improve its capabilities in parallel environments. In addition, they pointed to efforts to develop hardware and software for brain-like cognitive systems and neuromorphic chips.

For their own research, the authors suggested that the next steps could include building a simulator for the PTM to show how the PXM and abstract architecture work together and to demonstrate attributes of the model such as practicality and generality. The simulator should also enable them to extend and revise the system.
