There are people who build big machines and then there are people who create the algorithms, libraries, and applications that harness them. At oil giant BP, for many years it was Keith Gray who created its monstrous machines, but it was John Etgen who put those machines through their paces running reservoir modeling and other complex, dense workloads.
Etgen gave the keynote address at the recent EnergyHPC conference at Rice University – what many of us knew as the Rice Oil & Gas conference, which focuses on the intersection between high performance computing and oil and gas exploration and drilling. And more than anything else, Etgen expressed his frustration that HPC is not moving fast enough and is not as exciting as it has been in the past – in stark contrast to the explosion in innovation that is happening in the AI market with machine learning techniques.
Etgen calls the explosion in innovation that HPC lacks and that AI enjoys a “singularity” – and yes, we know that HPC already has a Singularity container environment, and that the singularity as defined by Ray Kurzweil and promoted by Masayoshi Son is a different thing entirely. For Etgen, this singularity is the “idea that even after you normalize out any possible exponential, growth is still exponentially fast or better. Things are just improving at such a rapid rate that the future comes so quickly and is so transformative that the past is almost unrecognizable.”
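To make that definition concrete: if capability grows doubly exponentially, say $C(t) = e^{e^{t}}$, then even after “normalizing out” an exponential by taking the logarithm, what remains – $\log C(t) = e^{t}$ – still grows exponentially. (That reading is our gloss on the quote, not Etgen’s own formulation.)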
We know what a singularity of the kind that Etgen is talking about looks like. We had one called the Industrial Revolution, and what happened was:
- We went from working on farms to working in factories and offices
- We went from custom, handmade goods to factory-built goods
- We went from being generalists who could do a lot of things reasonably well – enough to keep ourselves alive on the frontier – to being specialists who worked in niches for money
- We created a system where machines make machines that make machines that make . . .
- We employed “retro tech,” where the end use of our manufactured objects is far lower-tech than all of the sophistication that went into creating them
“Roughly human civilization went from riding horses to putting robots on Mars in 100 years,” he says. “Maybe it’s 150 or 200, because whatever. The human population went from subsistence farming or something like that – not large-scale agriculture, but small-scale agriculture – to basically everybody working in pretty specialized roles right over that course of time. More transition happened in the last hundred years than all of planetary history.”
And this rate of change in innovation and sophistication is simply not happening in HPC, Etgen declares. Despite ever more powerful computers – with supercomputers on the doorstep of delivering exaflops of compute performance – and advances in software and in such technologies as artificial intelligence and machine learning, scientific computing is not in a singularity event.
“It feels a little stale to me,” Etgen concedes, and he is not happy about saying this. “I think the answer is no, like not even just no. I mean, like not even close, like not even the same ballpark, not even the same game. Not the same sport. I mean, just forget it. We are actually crumbling. Scientific computing, numerical physics, things like that, it’s going down the drain and if we don’t change something about what we’re doing, it is going to go down the drain.”
Going down the drain is a kind of singularity, but not the one Etgen is looking for as he surveys the HPC field.
Instead, he says, “I think actually we’re kind of plateauing. It’s making diminishing rates of progress as time goes on, not accelerating rates of progress. There are some positive signs, though, but I’m also kind of pessimistic.”
There are signs that would be evident if the scientific computing space were in a singularity. People in the field would become even more specialized in what they do. Making complex things would become easier, and there would be widespread use of what he calls “retro tech,” which uses the “vast power and intelligence of a civilization to create interchangeable parts that can be used under a wide variety of maybe even unknown or unpredicted applications. It’s often used to make applications that are lower tech than the technology that made the parts.”
As an example, he points to 1/4-20 screws, which are made for almost nothing overseas and can be bought in the United States very cheaply at any hardware store. Those screws were key to enabling Etgen to build a rocket vehicle that he says was not extremely high tech.
“The rocket is using a bit of 18th century chemistry, has a 1930s-level metallurgy, but it has a bunch of those 1/4-20 screws in it,” he says. “Those were made in a very sophisticated factory. The factory is infinitely more technically advanced than that stupid rocket that I built at launch. It’s actually an example of retro tech: I built something simple and primitive out of what civilization produced, which is the ability to produce essentially all the 1/4-20 screws the world will ever need at zero cost. Yeah, when you go buy them at a hardware store, they cost 2 cents or something, but effectively they’re zero cost to manufacture.”
So what are the signs that scientific computing is not where it could be? Etgen walked through the progression of computers he’s used since his early years, including the Cray-2 at his first job out of college, a CM-2 Connection Machine, a Convex C1 and some SGI systems and compute clusters. There were jumps in single-instance speeds over those 37 years that could be viewed as exponential, but not what would be expected in a singularity environment.
“How did that happen? One good thing is, they did figure out how to make it cheaper because a lot of the advantage came because I just bought a lot more instances,” he says. “In some sense, maybe I don’t care, maybe it’s fine if, in fact, there’s a lot more instances, but it actually forces us into a model of thinking about how to do scientific computing that may not have all the properties that we want.”
That said, if the field were in a singularity, “you’d see an exploding diversity of applications. Do you feel like we’re seeing an exploding diversity of applications? In computational fluid dynamics, I think the answer is no. In computational seismology, the answer is definitely no. In reservoir engineering, I think the answer is no. Maybe there’s some other scientific fields that actually have an exploding diversity of applications in scientific computing. Maybe, but it’s kind of iffy. I’m not sure I see that.”
Then there are the jobs, or more to the point, the skills needed to get a job in the field.
“Here’s one that I think we’re definitely not seeing: human beings are not becoming increasingly more specialized,” he says. “The range of skills and tools you need to know to be effective and to get a job or get your degree or whatever it is, that range is remaining just as wide. That fights against an Industrial Revolution-like phenomenon; that fights against the singularity. In my company, if I was standing up here recruiting for my company and following the actual company line, I’d be saying, ‘Oh no, you need to cross-skill. You need to be multi-talented, multi-disciplinary, all that stuff.’ That’s the fastest way to kill an Industrial Revolution-like phenomenon there is. Let people specialize. That’s my personal opinion. You want Industrial Revolution-like, singularity-like behavior? Let people hyper-specialize. What do you hear in society? ‘No, no, no, everybody’s got to be a generalist.’ OK, we’re trying to live back in the 1800s again.”
There is a trend toward complex objects getting easier to manufacture – both on the hardware side and in software – and in some areas retro tech is being put to good use.
“You have people using the immense Industrial Revolution capacity in computing to do really dumb, basic things,” Etgen says. “There are several different semiconductor factories that make op amps [operational amplifiers]. They make little Arm CPUs. They make all sorts of other electronic components that I can buy for a penny each, or pretty close. And I can go on the Arduino website and download from GitHub, get the pieces I need to put the device drivers together, put all that stuff together, go into an IDE and build myself something that very, very skilled scientists were paid a lot of money to build in the 1960s and 1970s. The capability that I build isn’t any better than what was available in the 1960s. I can just build it myself on my kitchen table for a few dollars a part. That’s retro tech at work again. So the semiconductor industry kind of has some of the properties, but are they spilling over into scientific computing? It doesn’t seem like they are as much as I would like.”
There are some reasons to be optimistic, he said. One is the growing use of U-Net convolutional neural networks, something Etgen has witnessed as editor of the Journal of Geophysics, reviewing the papers that are published there.
“You know what I’m seeing? U-Net is taking over the damn world,” he says. “I mean, not every paper, but a large fraction of papers basically are copying this idea and they’re doing things with it that I’m sure the authors that invented that method had no idea that somebody would ever do that thing with their invention. The AI guys kind of get it. If you’re into single, pure AI, they kind of got the message, which is build tools that build tools that build tools that build tools, multiple levels of hierarchy. You can become highly specialized in what you do. You don’t need to know everything else because, actually, all the software is pretty transportable, pretty easy to learn. It’s easy to do your little piece and then live on top of the foundation of whatever everybody else built. The AI guys get it. But if you’re a hardcore computational physicist, you don’t get it. You’re not doing that. I don’t see that behavior, unless you’re using U-Net to replace numerical physics, which I hate, frankly. It’s not clear that I like that, but maybe they’re onto something.”
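For readers who have not run into it, U-Net is an encoder-decoder convolutional network with skip connections between matching resolutions, originally built for biomedical image segmentation. Here is a minimal sketch of the architecture’s shape in PyTorch – the layer sizes and the single-channel image-to-image setup are illustrative assumptions, not drawn from any particular paper Etgen reviewed:

```python
# A minimal U-Net sketch (illustrative, not any specific published variant).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the repeating unit of U-Net.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)          # encoder, full resolution
        self.enc2 = conv_block(base, base * 2)       # encoder, half resolution
        self.pool = nn.MaxPool2d(2)                  # downsample by 2x
        self.mid  = conv_block(base * 2, base * 4)   # bottleneck
        self.up2  = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)   # input: upsampled + skip
        self.up1  = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        m  = self.mid(self.pool(e2))
        # Skip connections: concatenate encoder features with upsampled
        # decoder features at the matching resolution.
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Map a single-channel 64x64 input (say, a small seismic section) to a
# same-sized output, as in image-to-image inversion or denoising tasks.
net = TinyUNet()
out = net(torch.randn(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```

The point Etgen is making does not depend on the details: the block above is an interchangeable part, and most of the papers he describes swap in their own data and loss function on top of exactly this kind of foundation.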
The space also is paralyzed to some degree by the human tendency to root for the human over the machine, and by the belief that there are things that people must do to remain people.
“The problem is, all of you who have that mentality, you’re hurting us, actually, if we’re going to be like the AI folks and start making faster and faster progress at ever-increasing rates,” Etgen says. “There are not some things that men must do in order to remain men. In fact, that’s false. However, popular culture and your artistic talents and your sentiments fight against you. I have the same dilemmas in me. It is hard to let go of that human element of this, but here’s the real point. Go look at what your organization is capable of doing in terms of how much computing it can actually do. Is it seeing an exploding diversity of applications? Is it seeing accelerating value out of your investment in scientific and technical computing? If it’s not, then it’s stagnating. If that’s happening, you’re probably going to get some people to ask some different questions about what to do.”
So what’s needed? The hardware has to become more abstracted – “The more and more that you show me hardware that requires me to think very, very specifically about what the hardware is like, that’s a disaster” – as does the software. That’s something the people in AI have figured out.
In addition, while acting on some crazy ideas might be needed occasionally, “to make it sustainable – and here’s the thing that I personally find difficult to swallow – you’ve probably got to act like software engineers,” Etgen says. “That engineering mindset. We need to move all our mindsets down into that corner, where we’re looking at value, we don’t care about the beauty of the object, the science we’ll just accept as settled and we’re going to try to make something valuable by maximally leveraging other people’s work. You want a singularity in scientific computing? You’ve got to leverage other people’s work.”
The Julia language, with its SciML ecosystem, could be part of this: https://sciml.ai/showcase/
Increasing specialisation equates to more focused expertise, and that leads to greater efficiency in getting specific tasks done. “Specific” here generally means the steps, the means, and the endpoint are well defined from the outset. That’s fine, but only up to a point.
It’s observable that in some fields, e.g. medicine, a high concentration of expertise leads to technicians instead of multi-purpose professionals fully able to comprehend the broader context of their work. More generally, the age of the polymath ended with the nineteenth century. The industrial revolution was not merely the result of effort by artisans with burgeoning new skills. This is illustrated by the development of optical instruments.
Highly skilled ‘opticians’, as they were then called, refined lens making and construction of compound microscopes. Prominent names include manufacturers Powell & Lealand, Tulley, and Baker. They were driven towards ‘perfection’ by people of broader perspective such as Goring in the early part of the century and Abbe, a collaborator with Zeiss, towards its end.
The first notable advance was the construction of multi-lens objectives with basic correction for colour dispersion and amelioration of other optical distortions. Opticians competed in constructing better lenses according to principles enunciated by savants. They engaged in a hopeless quest for ever greater magnification without appreciating how lens aperture determines resolving power, and that aperture itself has a fixed maximum when light passes in air from a specimen to the objective lens. Empirical improvements, especially water immersion, were stumbled upon, but it took Abbe to draw together a coherent explanation of image formation, leading to considerable improvement of lens design and to the benefits of other immersion fluids.
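For reference, the relationship Abbe eventually formalized is the diffraction limit on resolution,

$$ d = \frac{\lambda}{2\,\mathrm{NA}}, \qquad \mathrm{NA} = n \sin \alpha, $$

where $d$ is the smallest resolvable detail, $\lambda$ the wavelength of light, $\alpha$ the half-angle of the cone of light accepted by the objective, and $n$ the refractive index of the medium between specimen and lens. In air $n = 1$, so the numerical aperture can never exceed 1 however well the lens is ground; water ($n \approx 1.33$) and oil immersion raise that ceiling, which is why those empirically discovered improvements worked.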
Returning to the subject of the article, the author’s expectation of specialisation bringing about a “singularity” such as he ascribes to the Industrial Revolution ought to be tempered by respect for the role of persons holding a broad perspective of the field.