AI Makes Us Peasants

Amidst all the speculation about AI solving problems great and small (while potentially destroying all of humanity in the process), we’ve lost sight of what it’s already doing to our work and lives.

It’s remaking us into peasants.

I’m thinking about peasantry in a broad, one-eye-closed, thematic kinda way: Folks who rented instead of owned and sold their labor for what the markets would pay, whether doing so on rural fields or in urban factories (think gig workers with a less-sexy title).

The central component of their lot was that they had no voice in the decisions that defined their lives; they did what they were told to do and had little to no expectation of changing their circumstances (the occasional revolutions notwithstanding).

Call them peasants, serfs, vassals, workers or the proletariat, and cite any theorist to quibble over who was in what bucket, but the common theme was that they were people who lived within constraints not of their own choice or making. 

The rules of the road were presented to them as inevitable and, oddly, for their own good, as they were set by people of greater intellect, vision, and means.

Isn’t that what AI promises to do to/for us?

I’ve had this conversation with friends recently, and one replied, “But we’re in charge of how we use it, aren’t we?”

Nope, for at least three reasons:

First, when an AI tees up a response to a search query or a draft edit of a document, it’s constraining your choices: you don’t know which restaurants just missed its list, or which adjectives it rejected for your report or term paper. You also don’t know how it made those decisions for you.

There is no such thing as an unbiased AI, as it makes choices for you by definition.

Second, you’re being trained to believe and, in a word, obey its pronouncements because it is smarter than you are, even though its intelligence is limited to the array of data it possesses (just like you). What AIs will get increasingly good at isn’t knowing the “right” answers in any objective sense but rather learning what answers or choices you will follow.

AI isn’t an information machine, it’s a guidance engine.

Third, AI will increasingly make decisions for you that will be invisible or incontestable: What healthcare benefits you receive, whether or not you get or keep a job, even who’ll be in the next work meeting or crowded bar. It will inform decisions made by your government just as it influences what your family and friends think and do.

AI will become the mediator between you and your world, your interactions filtered through and from it.

Why don’t we talk more openly and honestly about this “progress” in our lives?

Primarily because the money is in making it happen, and its use in improving commercial activity — from lowering the cost of production by replacing people with bots to raising the success rate of selling stuff customized to consumers’ tastes, for starters — can be readily valued on balance sheets.

Already, simply selling the stuff, or the components that promise it, has been credited with lifting entire stock markets.

Opposition to the “progress” is disorganized and unfunded, and there’s no easy way to value things like “freedom” or “human dignity.”

So, as we’re presented with every new use of AI and its promoters hammer us with declarations about empowerment and improvements in our lives, we will give something up. We’ll be in charge of one less thing. One less set of facts or opinions. One less decision.

And we will have taken one more step toward becoming peasants. 

AI Testing = “What, Me Worry?”

OpenAI has decided to cut testing of its newest, most powerful AI models from months to only a few days and will start rolling out one of them, called “o3,” sometime this week.

Testing probes LLMs for vulnerabilities that could lead them to do biased or illegal things and trains them to be more resilient to the lures of violence or crime (a process called “fine-tuning”).

GPT-4 was tested for six months before its launch in 2023.

The company is unconcerned, citing the lack of agreed-upon safety standards and noting that components and earlier versions of its supercharged models were tested at certain checkpoints, leaving it “confident that its methods were the best it could do” (according to the Financial Times).

In other words, its new policy is “What, Me Worry?”

Testing was never the right or complete answer to the question of AI risk, since it was always a snapshot of a particular condition and moment. Systems evolve, especially those that are designed to do so, and the predictability of their performance declines as the size and diversity of their actions grow over time.

Promises that an AI wouldn’t do something bad in the future were no more dependable than claims that it wouldn’t rain a year from now, or that an honors high schooler wouldn’t cheat on her taxes someday.

We tolerate these risks because we’ve also been promised that AI will do great things, with all of the conversations packed with technogibberish so we can’t question them.

The risk of an AI doing something awful has always been, well, awful.

Worse, what happens if OpenAI’s o3 and whatever comes next from them and their competitors don’t break any laws or insult anybody? What if they do exactly what they’re supposed to do?

These models have been purposefully designed to do more, and do it faster and more often, and thereby insinuate their decision making — both responding to our queries and, with ever more regularity, anticipating and guiding our questions and subsequent actions — into every aspect of our work and personal lives.

How could anyone test for those consequences? We were already participants in the biggest open-ended experiment in human history, and the boffins conducting it have never offered to take responsibility for its outcomes.

What, me worry?

All the time.

AI vs. Humanity: Game Over

Two recent research papers on the near-future of AI development use 216 pages of often impenetrable blather to tell us something that could be summarized in two words:

We’re screwed.

First, Google’s DeepMind published “An Approach to Technical AGI Safety and Security,” in which a gaggle of researchers muse about the impossibility of predicting and protecting against “harms consequential enough to significantly harm humanity.”

They’re particularly interested in AGI, or Artificial General Intelligence, which doesn’t exist yet but is the goal of DeepMind’s research and that of its competitors. AGI promises a machine that can think, and therefore act, as flexibly and autonomously as a human being.

Their assumption is that there’s “no human ceiling for AI capability,” which means that AGIs will not only get as good as people at any task but then keep improving. They write:

“Supervising a system with capabilities beyond that of the overseer is difficult, with the difficulty increasing as the capability gap widens.”

After filling scores of pages with technogibberish punctuated by frequent hyperlinked references to expert studies, the researchers conclude that something called “AI control” might require using other AIs to manage AIs (the implicit part being that DeepMind will happily build and sell those machines, and then more of them to watch over them, etc.).

Like I said, we’re screwed.

The second paper, “AI 2027,” comes from the AI Futures Project, a research group run by a guy who left OpenAI earlier this decade. The paper predicts AI superintelligence sometime in 2028 and games out the implications so that they read like a running narrative (DeepMind sees AGI arriving before 2030, too).

It reads like the script of “The Forbin Project,” or maybe just something written by Stephen King.

Granted, the researchers give us an updated, video game-like choice of possible endings — will it be the “Race Ending,” in which AI kills everyone, or the “Slowdown Ending,” wherein coders figure out some way to overcome the structural impediments to control that DeepMind believes can’t be overcome? — but both eventualities rely on a superintelligent AI called OpenMind to, well, make up its own mind.

So, either way, it’s game over.