Two recent research papers on the near-future of AI development use 216 pages of often impenetrable blather to tell us something that could be summarized in two words:

We’re screwed.

First, Google’s DeepMind published “An Approach to Technical AGI Safety and Security,” in which a gaggle of researchers muse about the impossibility of predicting and protecting against “harms consequential enough to significantly harm humanity.”

They’re particularly interested in AGI, or Artificial General Intelligence, which doesn’t exist yet but is the goal of DeepMind’s research and that of its competitors. AGI promises a machine that can think, and therefore act, as flexibly and autonomously as a human being.

Their assumption is that there’s “no human ceiling for AI capability,” which means that AGIs will not only get as good as people at any task but then keep improving. They write:

“Supervising a system with capabilities beyond that of the overseer is difficult, with the difficulty increasing as the capability gap widens.”

After filling scores of pages with technogibberish punctuated by frequent hyperlinked references to expert studies, the researchers conclude that something called “AI control” might require using other AIs to manage AIs (the implicit part being that DeepMind will happily build and sell those machines, and then more of them to watch over those, etc.).

Like I said, we’re screwed.

The second paper, “AI 2027,” comes from the AI Futures Project, a research group run by a guy who left OpenAI earlier this decade. The paper predicts AI superintelligence sometime in 2028 and games out the implications so that they read like a running narrative (DeepMind sees AGI arriving before 2030, too).

It reads like the script of “Colossus: The Forbin Project,” or maybe just something written by Stephen King.

Granted, the researchers give us an updated, video game-like choice of possible endings (will it be the “Race Ending,” in which AI kills everyone, or the “Slowdown Ending,” wherein coders figure out some way to overcome the structural impediments to control that DeepMind believes can’t be overcome?), but both eventualities rely on a superintelligent AI called OpenBrain to, well, make up its own mind.

So, either way, it’s game over.
