AI vs. Humanity: Game Over

Two recent research papers on the near-future of AI development use 216 pages of often impenetrable blather to tell us something that could be summarized in two words:

We’re screwed.

First, Google’s DeepMind published “An Approach to Technical AGI Safety and Security,” in which a gaggle of researchers muse about the impossibility of predicting and protecting against “harms consequential enough to significantly harm humanity.”

They’re particularly interested in AGI, or Artificial General Intelligence, which doesn’t exist yet but is the goal of DeepMind’s research, as well as that of its competitors. AGI promises a machine that can think, and therefore act, as flexibly and truly autonomously as a human being.

Their assumption is that there’s “no human ceiling for AI capability,” which means that AGIs will not only get as good as people at doing any task, but then keep improving. They write:

“Supervising a system with capabilities beyond that of the overseer is difficult, with the difficulty increasing as the capability gap widens.”

After filling scores of pages with technogibberish punctuated by frequent hyperlinked references to expert studies, the researchers conclude that something called “AI control” might require using other AIs to manage AIs (the implicit part being that DeepMind will happily build and sell those machines, and then more of them to watch over them, etc.).

Like I said, we’re screwed.

The second paper, “AI 2027,” comes from the AI Futures Project, a research group run by a guy who left OpenAI earlier this decade. The paper predicts AI superintelligence sometime in 2028 and games out the implications so that they read like a running narrative (DeepMind sees AGI arriving before 2030, too).

It reads like the script of “The Forbin Project,” or maybe just something written by Stephen King.

Granted, the researchers give us an updated, video game-like choice of possible endings — will it be the “Race Ending,” in which AI kills everyone, or the “Slowdown Ending,” wherein coders figure out some way to overcome the structural impediments to control that DeepMind believes can’t be overcome? — but both eventualities rely on a superintelligent AI called OpenMind to, well, make up its own mind.

So, either way, it’s game over.

AI In Education: Just Say No

Illinois state legislators are looking to create rules for using AI in education and other public service areas, according to a story in the Chicago Tribune last week.

I can make it easy for them: Just say no.

Of course, it won’t happen. Illinois seems to be as confused about its role in the AI transformation of our lives as every other government, hobbled by the same “we need to use it responsibly” nonsense propagated by tech advocates, one of whom is quoted in the Tribune story.

The state has already passed legislation to ensure that AI isn’t used to break any laws that already exist, which seems kinda redundant. And it’ll be harder to catch AI in the act, because its violations will be far more deft and surreptitious than anything we biobags could muster.

Now, legislators are considering an “instructional technology board,” which would “provide guidance, oversight and evaluation for AI and other tech innovations as they’re integrated into school curricula and other policies.”

But teachers who take the time to learn about AI “shouldn’t be hemmed in by regulation,” cautioned the CEO of a corporation dedicated to speeding the use of AI in classrooms. Expect hearings and more weighty observations made by various vested interests to follow.

What a cluster.

The idea that students or teachers can constructively outsource their study or work responsibilities to a thinking machine should be unthinkable. Just replace the label “AI” with “my really smart friend” and consider its applications: Teachers letting their smart friends write their classroom plans and grade their kids’ work. Students asking their smart friends to do the research and then write their papers.

We’d label those teachers and students as bad employees and cheats.

The thing is, faster and even more accurate or comprehensive output is not the same thing as smarter and more impactful input. The point of education is the process of learning, not just throwing points up on the board. Outsourcing the tasks that constitute learning isn’t an improvement; it’s an abrogation of responsibility by both teachers and students.

The only thing that gets better in that equation is the AI, which learns how to operate more efficiently with every task it takes away from its human subjects.

This truth isn’t clear to some or most legislators and educators, partly because AI is a complicated concept (it’s kinda like your smart friend, only kinda not), and partly because there’s a vocal lobby of academics and salesmen dedicated to telling everyone else that their opinions, whether thoughtful or gut, are not valid.

Outsourcing learning to a machine seems bad? You don’t understand what you’re talking about, since “…innovative educators are circumventing outdated systems (to) utilize AI tools that they know enhance their teaching and their students’ learning,” according to a tech salesman quoted in the Tribune article.

So, just say yes.

Ultimately, the fact that there’s no real debate about what’s going on probably doesn’t matter, since teaching is one of the many jobs on the hit list of AI development.

Give it a decade or less and the debate will be about figuring out the role for human beings in education, if there even is one.