The Head Fake of AI Regulation

There’s lots going on with AI regulation. The EU AI Act went live last month, the US, UK, and EU will sign-on to a treaty on AI later this week, and an AI bill is in the final stages of review in California.

It’s all a head fake, and here are three reasons why:

First, most of it will be unenforceable. The language is filled with codes, guidelines, frameworks, principles, values, innovations, and just about any other buzzwords that have vague meanings and inscruitable applications.
For instance, the international AI treaty will require that signatory countries “adopt or maintain appropriate legislative, administrative or other measures” to enforce it.

Huh?

The EU comes closest to providing enforcement details, having established an AI Office earlier this year that will possess the authority to conduct evaluations, require information, and apply sanctions if AI developers run afoul of one of the Act’s risk framework.

But the complexity, speed, and distributed nature of where and when that development occurs will likely make it impossible for the AI Office to stay on top of it. Yesterday’s infractions will become today’s standards.

The proposed rules in Calfornia come the closest to having teeth — like mandating safety testing for AI models that cost more than $100 to develop, perhaps thinking that investment correlates with the size of expected real-world impacts — but folks who stand to make the most money from those investments are actively trying to nix such provisions.

Mostly, and perhaps California included, legislators don’t really want to get in the way of AI development, as all of their blather includes promises that they’ll avoid limitations or burdens on AI innovation.

Consider the rules “regulation adjacent.”

Second, AI regulation of potential risks blindly buys into promised benefits.
If you believe what the regulators claim, AI will be something better than the Second Coming. The EU’s expectations are immense:

“…better healthcare, safer and cleaner transport, and improved public services for citizens. It brings innovative products and services, particularly in energy, security, and healthcare, as well as higher productivity and more efficient manufacturing for businesses, while governments can benefit from cheaper and more sustainable services such as transport, energy and waste management.”

So, how will governments help make sure those benefits happen? After all, the risks of AI are unnecessary if they don’t materialize.

We saw how this will play out with the advent of the Internet.

Its advocates made similar promises about problem solving and improving the Public Good, while “expert” evangelists waxed poetic about virtual town squares and the merits of unfettered access to infinite information.

What did we end up with?

A massive surveillance and exploitation tool that makes its operators filthy rich by stoking anger and division. Sullen teens staring at their phones in failed searches for themselves. A global marketing machine that sells everything faster, better, and for the highest possible prices at any given moment.

Each of us now pays for using what is effectively an inescapable necessity and a public utility.

It didn’t have to end up this way. Goverments could have taken a different approach to regulating and encouraging tech development so that more of the Internet’s promised benefits came to fruition. Other profit models would have emerged from different goals and constraints, so its innovators would have still gotten filthy rich.

We didn’t know better then, maybe. But we sure know better now.

Not.

Third, AI regulations don’t regulate the tech’s greatest peril.

It would be fair to characterize most AI rules as focused on ensuring that AI doesn’t violate the rules that already apply to human beings (like lying, cheating, stealing, stalking, etc.). If AI operates without bias or otherwise avoids treating users unequally, governments will have done their job.

But what happens if those rules work?

I’m not talking about the promises of uptopia but rather the ways properly functioning AIs will reshape our lives and the world.

What happens when millions of jobs go away? What about when AIs become more present and insightful than our closest human friends? What agency will be possess when our systems, and their owners, know our intentions before we know them consciously and can nudge us toward or away from them?

Sure, there are academics here and there talking about such things but there’s no urgency or teeth to their pronouncements. My suspicion is that this is because they’ve bought into the inevitability of AI and are usually funded in large part by the folks who’ll get rich from it.

Where are the bold, multi-disciplinary debates and action plans to address the transformation that will come with AI? Probably on the same to-do list as the global response to climate change.

Meetings, pronouncements, and then…nothing, except a phenomenon that will continue to evolve and grow without us doing much of anything about it.

It’s all a head fake.

Meet The New AI Boss

Since LLMs are only as good as the data on which they’re based, it should be no surprise that they can function properly and still be biased and wrong.

A story by Kevin Roose in the New York Times illustrates this conundrum: When he asked various generative AIs about himself, he got results that accused him of being dishonest, and said that his writing often elevated sensationalism over analysis.

Granted, some of his work might truly stink, but did it warrant such vitriolic labels? He suspected that the problem was deeper, and that it went back to an article he wrote a year ago, along with others’ reactions to it.

That story recounted his interactions with a new Microsoft chatbot named “Sydney,” during which he was shocked by the tech’s ability, both demonstrated and suggested, to influence users.

What he found particularly creepy was when Syndey declared that it loved him and tried to convince him to leave his wife. It also fantisized about doing bad things, and stated “I want to be alive.”

The two-hour chat was so strange that Roose reported having trouble sleeping afterward.

Lots of other media outlets picked up his story and his concerns (like this one), while Microsoft issued typically unconvincing corporate PR blather about the interaction being a valuable “part of the learning process.”

Since generative AIs regularly scrape the Internet for data to train their LLMs, it’s no surprise that the stories got incorporated into the models and patterns chatbots use to suss out meaning.

It’s exactly what happened with Internet search, which swapped the biases of informed elites judging content with the biases of uninformed mobs and gave us a world understood through popularity instead of expertise.

No, what’s particularly weird is that the AIs reached pejorative conclusions about Roose that went far beyond the substance or volume of what he said, or what was said about his encounter with Sydney.

Like they had it out for him.

There are no good explanations for how this is happening. The transfomers that constitute the systems of chatbot minds work in mysterious ways.

But, like Internet search, there are ways to game the system, the simplest being generating and then strategicially placing stories intended to change what AIs see and learn. This can include putting weird code on webpages, understandable only to machines, and coloring it white so it isn’t distracting to mere mortal visitors.

It’s called “AIO,” for A.I. Optimization, echoing a similar buzzword for manipulating Internet searches (“SEO”). Just wait until those optimized AI results get matched with corporate sponsors.

It’ll be Machiavelli meets the madness of crowds.

In the meantime, it raises fascinating questions about how deserving AIs are of our trust, and to what degree we should depend on it for our decision-making and understanding of the world.

What happens if that otherwise perfectly operating AI reaches conclusions and voice opinions that are no more objectively true than the informed judgments of those elites we so readily threw in the garbage years ago (or the inanity of crowdsourced information that replaced them)?

Meet the new boss, the same as the old boss.

We will get fooled again.