There’s lots going on with AI regulation. The EU AI Act went live last month, the US, UK, and EU will sign on to a treaty on AI later this week, and an AI bill is in the final stages of review in California.

It’s all a head fake, and here are three reasons why:

First, most of it will be unenforceable. The language is filled with codes, guidelines, frameworks, principles, values, innovations, and just about any other buzzwords that have vague meanings and inscrutable applications.
For instance, the international AI treaty will require that signatory countries “adopt or maintain appropriate legislative, administrative or other measures” to enforce it.

Huh?

The EU comes closest to providing enforcement details, having established an AI Office earlier this year that will possess the authority to conduct evaluations, require information, and apply sanctions if AI developers run afoul of the Act’s risk framework.

But the complexity, speed, and distributed nature of where and when that development occurs will likely make it impossible for the AI Office to stay on top of it. Yesterday’s infractions will become today’s standards.

The proposed rules in California come closest to having teeth, like mandating safety testing for AI models that cost more than $100 million to develop (perhaps on the theory that investment correlates with the size of expected real-world impacts), but the folks who stand to make the most money from those investments are actively trying to nix such provisions.

For the most part, and perhaps California included, legislators don’t really want to get in the way of AI development; all of their blather includes promises that they’ll avoid placing limitations or burdens on AI innovation.

Consider the rules “regulation adjacent.”

Second, AI regulation of potential risks blindly buys into promised benefits.
If you believe what the regulators claim, AI will be something better than the Second Coming. The EU’s expectations are immense:

“…better healthcare, safer and cleaner transport, and improved public services for citizens. It brings innovative products and services, particularly in energy, security, and healthcare, as well as higher productivity and more efficient manufacturing for businesses, while governments can benefit from cheaper and more sustainable services such as transport, energy and waste management.”

So, how will governments help make sure those benefits happen? After all, the risks of AI aren’t worth accepting if the promised benefits never materialize.

We saw how this will play out with the advent of the Internet.

Its advocates made similar promises about problem solving and improving the Public Good, while “expert” evangelists waxed poetic about virtual town squares and the merits of unfettered access to infinite information.

What did we end up with?

A massive surveillance and exploitation tool that makes its operators filthy rich by stoking anger and division. Sullen teens staring at their phones in failed searches for themselves. A global marketing machine that sells everything faster, better, and for the highest possible prices at any given moment.

Each of us now pays for using what is effectively an inescapable necessity and a public utility.

It didn’t have to end up this way. Governments could have taken a different approach to regulating and encouraging tech development so that more of the Internet’s promised benefits came to fruition. Other profit models would have emerged from different goals and constraints, so its innovators would still have gotten filthy rich.

We didn’t know better then, maybe. But we sure know better now.

Not.

Third, AI regulations don’t regulate the tech’s greatest peril.

It would be fair to characterize most AI rules as focused on ensuring that AI doesn’t violate the rules that already apply to human beings (like lying, cheating, stealing, stalking, etc.). If AI operates without bias or otherwise avoids treating users unequally, governments will have done their job.

But what happens if those rules work?

I’m not talking about the promises of utopia but rather the ways properly functioning AIs will reshape our lives and the world.

What happens when millions of jobs go away? What about when AIs become more present and insightful than our closest human friends? What agency will we possess when our systems, and their owners, know our intentions before we know them consciously and can nudge us toward or away from them?

Sure, there are academics here and there talking about such things, but there’s no urgency or teeth to their pronouncements. My suspicion is that this is because they’ve bought into the inevitability of AI and are usually funded in large part by the folks who’ll get rich from it.

Where are the bold, multi-disciplinary debates and action plans to address the transformation that will come with AI? Probably on the same to-do list as the global response to climate change.

Meetings, pronouncements, and then…nothing, except a phenomenon that will continue to evolve and grow without us doing much of anything about it.

It’s all a head fake.
