
The Crazy Eights Of Large Language Models

We read a fairly large number of technical papers here at The Next Platform, and it is a rare thing indeed when we can recommend that everyone – or damned near everyone – should read a paper. But Sam Bowman – it would be funnier if his name were Dave – has published a paper that anyone in the high performance computing arena in its many guises, and indeed anyone who is a citizen of the modern world, should take a look at as they ponder the effect that large language models are having on Earth.

Bowman is an associate professor of data science, linguistics, and computer science at New York University and is also on sabbatical at AI startup Anthropic. He got his BA in linguistics from the University of Chicago in 2011 and a PhD in linguistics with a focus on natural language processing from Stanford University in 2016, and he has published a fairly large number of papers in a relatively short period of time. And to his credit, Bowman has put himself out there after reviewing a large corpus of papers on LLMs to give us Eight Things To Know About Large Language Models as his latest work, which as far as we can tell was not written by GPT-3 or GPT-4. (We are getting to the point where you might have to ask for certifications to that effect on all documents.)

This paper does not get into the low-level math of any particular AI model, and it doesn’t attempt to make any value judgments about AI, either. It strictly points out the kinds of things we need to know about LLMs before we go shooting our mouths off about what they are and what they are not.

Having said that, Bowman does point to some survey data cautioning that we need to control AI, just to remind us that the stakes for humanity are high, perhaps with the planet’s sixth extinction level event, one of our own making, looming. Bowman doesn’t say this directly, of course, but points to survey data from polls of AI researchers last year suggesting there is a greater than 10 percent chance of “human inability to control future advanced AI systems causing human extinction,” and that 36 percent of researchers admit that “It is plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war.” Without saying how he feels about the six-month moratorium on training ever-larger AI models called for in an open letter signed by Steve Wozniak, Elon Musk, and others (which is rich in irony), Bowman at least mentions that people are talking about it.

The Eight Things will no doubt be updated and expanded, but here is where Bowman wants us to start:
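Paraphrasing the paper, the eight things are these:

1. LLMs predictably get more capable with increasing investment, even without targeted innovation.
2. Many important LLM behaviors emerge unpredictably as a byproduct of increasing investment.
3. LLMs often appear to learn and use representations of the outside world.
4. There are no reliable techniques for steering the behavior of LLMs.
5. Experts are not yet able to interpret the inner workings of LLMs.
6. Human performance on a task is not an upper bound on LLM performance.
7. LLMs need not express the values of their creators nor the values encoded in web text.
8. Brief interactions with LLMs are often misleading.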

OpenAI was supposed to be an organization that would responsibly steer the development of artificial intelligence, but then Microsoft kicked in $1 billion and access to a vast pool of Azure resources, and Thing 1 on the list above played out exactly as advertised. GPT-3 and now GPT-4 get better at this generative AI trick, which seems to mimic human thought to some degree, as you throw more iron and more data at them. Figuring out what the next word in a sequence is – which many of our brains do for fun, and which is annoying to those who do not do this, I can assure you, because my brain loves that game – is what got us here. Twisting this thing to do something that is akin to synthesis and imagination – sometimes hallucination and sometimes outright lying, sandbagging, and sucking up – is where we are at today.
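To make that next-word game concrete, here is a minimal sketch in Python. To be clear, it is our own toy illustration, not anything from Bowman’s paper and nothing like a real transformer: it does pure next-word prediction with nothing fancier than bigram counts over a made-up corpus. GPT-class models play the same game, just with a neural network in place of the lookup table, billions of parameters, and trillions of training tokens.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the trillions of tokens a real LLM trains on.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which one (a bigram table).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the toy corpus."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unk>"

# Greedily generate a short continuation, one predicted word at a time,
# which is the same loop a GPT-class model runs at inference time.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # prints something like "the cat sat on the cat"
```

The difference between this toy and GPT-4 is not the objective, it is the scale, which is exactly the point of Things 1 and 2 above.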

It is crazier than we might be thinking it is, and it might get less crazy or crazier over time as we throw more iron at it to drive LLMs to greater levels of “understanding.”

Our point is not to argue these points here, but rather to simply tell you to read the paper on this day when you probably didn’t want to do any work anyway. It is interesting and important. Here is why, as Bowman put it:

“While LLMs fundamentally learn and interact through language, many of the most pressing questions about their behavior and capabilities are not primarily questions about language use. The interdisciplinary fields studying AI policy and AI ethics have developed conceptual and normative frameworks for thinking about the deployment of many kinds of AI system. However, these frameworks often assume that AI systems are more precisely subject to the intentions of their human owners and developers, or to the statistics of their training data, than has been the case with recent LLMs.”

Comforting, isn’t it?

Special bonus list: The Top Five Extinction Level Events On Earth So Far Plus One Potential Bonus:
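1. The Ordovician-Silurian extinction, roughly 445 million years ago
2. The Late Devonian extinction, roughly 375 million years ago
3. The Permian-Triassic extinction, roughly 252 million years ago, the biggest of them all
4. The Triassic-Jurassic extinction, roughly 201 million years ago
5. The Cretaceous-Paleogene extinction, roughly 66 million years ago, the one that did in the dinosaurs

Potential bonus: a sixth one, of our own making, with or without the help of AI.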

There’s only one thing we know for sure: you can’t put this AI cat back into his hat.
