The European Parliament has taken the first step toward adoption of a set of regulations called the “AI Act.”

It classifies risks on a scale from “minimal to none” to “unacceptable,” and proposes varying combinations of review, disclosure requirements, management oversight, and penalties for misuse across four levels of danger.

Unacceptable risks cover things like social scoring by governments and toys that tell kids to murder their parents. High-risk AI systems include pretty much any application in government or business. Limited and minimal/no risk uses are things like spam filters and video games that let players murder their virtual parents.

But none of the risks described in the proposed regulations are “specifically created by AI applications,” as its description claims.

Governments have surveilled their citizens for generations. Public and private administrators of health and education services use checklists and other scores. Companies vet would-be employees and loan applicants. Individuals make drawn-out decisions about their finances and split-second ones when they drive.

Our lives are dictated by the imperfect, biased, and even capricious oversight of biological intelligence. It’s hard to imagine the worry is that AI could do these things less well than people already do them. 

In fact, no human being could pass muster if judged by the rules in the legislative plan (except maybe one coding video games). Trying to hold people to them would prompt rioting in the streets.

So, maybe the EU’s plan is to replace even well-intentioned bureaucrats and corporate employees with AI that can be more closely regulated?

It’s not an inconceivable idea.

Maybe businesses are breathing a collective sigh of relief that the proposed regulations simply affirm their own goals. Replacing tens of millions of employees with robots is key to most forecasts for productivity and profit growth over the next decade.

All the AI Act does is help ensure that those robots work properly.

My prediction is that we’ll hear griping from companies about the burden of compliance, but secretly they’ll be thrilled. Checking the appropriate boxes will free them to do what they want to do with AI, as well as free AI to iterate and evolve to do more for them. And to us.

There’s no way that the EU rules as currently described can keep pace with either activity, and they don’t even acknowledge the real risk of AI transforming how we work and live into a neatly choreographed mechanical ballet.

The real risk of AI is that it functions exactly as its developers and salespeople hope.
