Contrary to reports, OpenAI may not be building an AI that threatens humanity

Has OpenAI invented an AI technology that has the potential to “threaten humanity”? From some recent headlines, you might think so.

Reuters and The Information first reported last week that several OpenAI staff members, in a letter to the AI startup’s board of directors, flagged the “prowess” and “potential danger” of an internal research project known as “Q*.” This AI project, according to the reporting, could solve certain math problems, albeit only at grade-school level, but in the researchers’ opinion had a chance of building toward an elusive technical breakthrough.

There’s currently some debate over whether OpenAI’s board ever received such a letter; a source quoted by The Verge suggests that it didn’t. But framing aside, Q* may not be as momentous, or threatening, as it sounds. It might not even be new.

AI researchers on X (formerly Twitter), including Meta’s chief AI scientist Yann LeCun, immediately expressed skepticism that Q* was anything more than an extension of existing work at OpenAI and other AI research labs. In a post on X, Rick Lamers, who writes the Substack newsletter Coding with Intelligence, pointed to a guest lecture at MIT that OpenAI co-founder John Schulman gave seven years ago, in which he described a mathematical function called “Q*.”

Many researchers believe the “Q” in the name “Q*” refers to “Q-learning,” an AI technique that helps a model learn and improve at a particular task by taking, and being rewarded for, specific “correct” actions. The asterisk, researchers say, could be a reference to A*, an algorithm for searching the nodes that make up a graph and exploring the routes between those nodes.

Both have been around for a while.
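To make the two ideas concrete, here’s a minimal, illustrative sketch in Python of the tabular Q-learning update rule and a bare-bones A* search. The toy graph, states, and reward values are invented for illustration and have nothing to do with OpenAI’s actual systems:

```python
import heapq
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: nudge Q(s, a) toward reward + discounted best next value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def a_star(graph, start, goal, heuristic):
    """Best-first search over graph = {node: [(neighbor, edge_cost), ...]},
    prioritizing nodes by cost-so-far plus a heuristic estimate of remaining cost."""
    frontier = [(heuristic(start), 0, start, [start])]  # (priority, cost, node, path)
    best_cost = {}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_cost and best_cost[node] <= cost:
            continue  # already reached this node more cheaply
        best_cost[node] = cost
        for neighbor, edge_cost in graph[node]:
            new_cost = cost + edge_cost
            heapq.heappush(frontier, (new_cost + heuristic(neighbor), new_cost,
                                      neighbor, path + [neighbor]))
    return None

if __name__ == "__main__":
    # One Q-learning update on an invented two-state problem.
    Q = defaultdict(float)
    q_update(Q, "s0", "right", reward=1.0, next_state="s1", actions=["left", "right"])
    print(Q[("s0", "right")])  # 0.1 after a single update

    # A* on an invented four-node graph; a zero heuristic reduces it to Dijkstra.
    graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
    print(a_star(graph, "A", "D", heuristic=lambda n: 0))  # ['A', 'B', 'C', 'D']
```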

Google DeepMind applied Q-learning to build an AI algorithm that could play Atari 2600 games at a human level … back in 2014. A* has its origins in an academic paper published in 1968. And researchers at UC Irvine explored improving A* with Q-learning a few years ago, which may be exactly what OpenAI is now pursuing.
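The broad idea behind combining the two, speculating from the general research direction rather than from the UC Irvine paper’s or OpenAI’s exact method, is to replace A*’s hand-written heuristic with a learned estimate of remaining cost. Reusing the a_star sketch above (the cost-to-go numbers are invented stand-ins for a trained model):

```python
# Hypothetical combination: feed A* a heuristic learned via something like
# Q-learning instead of a hand-designed one. Values below are illustrative only.
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
learned_cost_to_go = {"A": 3.0, "B": 2.0, "C": 1.0, "D": 0.0}  # stand-in for a trained model
print(a_star(graph, "A", "D", heuristic=lambda n: learned_cost_to_go[n]))  # ['A', 'B', 'C', 'D']
```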

Nathan Lambert, a research scientist at the Allen Institute for AI, told TechCrunch that he believes Q* is connected to AI approaches “mostly (for) studying high school math problems,” not destroying humanity.

“OpenAI even shared work earlier this year improving the mathematical reasoning of language models with a technique called process reward models,” Lambert said, “but what remains to be seen is how better math abilities do anything other than make (OpenAI’s AI-powered chatbot) ChatGPT a better code assistant.”
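Roughly speaking, a process reward model scores each intermediate step of a model’s reasoning rather than only the final answer, and sampled solutions can then be reranked by those step scores. A minimal illustrative sketch, where the step scorer is a stand-in for a trained model and scoring by the product of step scores is one published variant, not necessarily OpenAI’s exact method:

```python
from math import prod

def score_solution(steps, step_scorer):
    """Aggregate per-step scores into one solution score (product of step scores)."""
    return prod(step_scorer(step) for step in steps)

def rerank(candidates, step_scorer):
    """Pick the candidate whose reasoning steps the reward model rates highest."""
    return max(candidates, key=lambda steps: score_solution(steps, step_scorer))

if __name__ == "__main__":
    # Stand-in scorer: in real work this would be a trained model returning
    # the estimated probability that a given reasoning step is correct.
    fake_scores = {"2+2=4": 0.99, "4*3=12": 0.98, "2+2=5": 0.05, "5*3=15": 0.97}
    candidates = [["2+2=4", "4*3=12"], ["2+2=5", "5*3=15"]]
    print(rerank(candidates, lambda s: fake_scores.get(s, 0.5)))  # ['2+2=4', '4*3=12']
```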

Mark Riedl, a professor of computer science at Georgia Tech, was similarly critical of Reuters’ and The Information’s reporting on Q*, and of the broader media narrative around OpenAI and its pursuit of artificial general intelligence (i.e. AI that can perform any task as well as a human can). Reuters, citing a source, implied that Q* could be a step toward artificial general intelligence (AGI). But researchers, including Riedl, dispute this.

“There is no evidence to suggest that large language models (such as ChatGPT) or any other technology under development at OpenAI are on a path to AGI or any of the doom scenarios,” Riedl told TechCrunch. “OpenAI itself has at best been a ‘fast follower,’ taking existing ideas … and looking for ways to build on them. While OpenAI hires top-rate researchers, much of what they’ve done could be done by researchers at other organizations. It could also be done if OpenAI’s researchers were at a different organization.”

Riedl, like Lambert, wouldn’t speculate about whether Q* might involve Q-learning or A*. But if it involves either, or a combination of the two, it’s consistent with the current trends in AI research, he said.

“These are all ideas being actively pursued by other researchers across academia and industry, with several papers on these topics in the last six months or so,” Riedl added. “It’s unlikely that researchers at OpenAI have had ideas that haven’t also occurred to the many other researchers who are also pursuing advances in AI.”

That’s not to suggest that Q*, which is reportedly associated with Ilya Sutskever, OpenAI’s chief scientist, won’t move the needle.

Lamers asserts that, if Q* uses some of the techniques described in a paper published by OpenAI researchers in May, it could “significantly” boost the capabilities of language models. Based on that paper, OpenAI may have discovered a way to control the “reasoning chains” of language models, Lamers said, enabling them to guide the models to follow more desirable and logically sound “paths” to outcomes.

“This makes it less likely that the models follow thinking that’s ‘foreign’ to humans and spurious patterns toward malicious or wrong conclusions,” Lamers said. “I think this is actually a win for OpenAI in terms of alignment … Most AI researchers agree that we need better ways to train these big models so that they can consume information more efficiently.”

But whatever comes of Q*, neither it nor the relatively simple math problems it solves spells doom for humanity.
