After more than two years of haggling, EU negotiators have reached an agreement on a deal that would establish the guiding principles for a draft bill, which legislators will debate and amend before it eventually passes into law.
In the realm of governmental AI oversight, it’s being heralded as a breakthrough in both speed and certainty.
It’s easy to find fault with it, so I looked for three positives that might give us at least some consolation:
First, it requires disclosing use of chatbots or AI recognition tools.
The agreement proposes “legally binding rules” that will require tech companies to let us know when we’re interacting with robots. Transparency is a huge problem, as it’s likely AI will replace human staffers in more and more work positions (just as it’s already being used in online customer service). Look for AIs to troll dating sites soon, if they’re not already there gathering user data surreptitiously.
AI is a lot better than human beings at understanding our needs and desires, and at crafting ways to manipulate our beliefs and actions. For every AI that does stuff for us, like helping diagnose health problems or facilitating a document search in some musty data warehouse, there’ll be one (or more) chatbots coded to do stuff to us using biometric or other recognition systems.
The chance to know we’re being used is a nice precedent, even if it’ll get increasingly difficult for us to opt out of using services because of it. And “legally binding rules” and the potential for hefty fines for violators still raise endless questions about enforcement.
My fantasy is that the EU agrees to build a super computer police force to fight fire with fire, but we know how that movie ends (not well).
Second, citizens of EU countries will have the right to challenge the conclusions reached by AI and demand explanations.
This is another huge issue, complicated by the fact that experts are already unable to explain how AIs reach certain decisions (like devising novel solutions to puzzles, or even behaving as if they’re aware that they’re functioning, the latter supposedly technically impossible).
More disclosure on how decisions are made should be empowering for us, but will it go beyond overt examples of bias or inaccuracies of judgment? And on what facts will we base our challenges? Unfairness, like beauty, rests in the eye of the sufferer.
Further, what will our right to challenge AIs’ conclusions empower us to do? Again, as more vital services are outsourced to machine intelligence, it will get ever harder to opt out of using them, so what will be our recourse? Fines? Changes to the systems?
Chump change and technical tweaks do not a regulatory regime define.
Also, I’ve got to say that I’m often left dumbfounded by the way other human beings reach decisions. Ditto for existing technology tools, like the automated spit-outs that decline mortgage applications or bury your website ranking on page 100 of search results. The EU hagglers should apply the same rigor they threaten for AI to the rest of the processes that impact our lives.
Third, tech companies have two years to implement the rules if and when they’re official.
Two years or more is an eternity since AI development seems to progress every minute. Just think of all the conversations we were having two years ago about it.
So how is this a positive? Well, and this is a bit of a stretch, it illustrates the frailty of AI oversight as a policing action. Not only does the development of far-reaching policies take too long, but the threat of punishment isn’t enough to force compliance in any industry or activity; just think about how many people drive over the speed limit, or how many companies skimp on paying their taxes.
The worst offenders are often the least concerned with negative consequences of their actions.
The only reason laws work is that the vast majority of us choose to obey them, almost unconsciously. We don’t need to be convinced to abandon our instincts or proclivities. We just know that it’s the right thing to do.
AI developers are not similarly constrained: in fact, they’re either uninterested in any limits on their endeavors, subscribers to some inane philosophy that anoints them god-kings, or blinded by a demonstrably tragic belief that their good intentions provide enough cover for their efforts and souls.
You can’t police people who don’t believe in the police.
You can only chase them. React when they’ve done something problematic. Hope that they’re not chancing on doing something horrible. Keep building higher and more threatening fences in a doomed race to stay ahead of their ability to jump and run away.
The ultimate takeaway from the EU’s proposed deal is that it offers obvious and irrefutable proof that governments can’t save us from AI.
We need to shift the conversation from legislating AI to demanding more from the folks inventing and profiting from it. We need to institute crash courses in moral and ethical education for the next generation of developers (who are now raised in a STEM vacuum detached from such liberal arts distractions) and provide tangible and compelling training for the developers currently at the helm.
We need to scrutinize AI’s impacts more closely and not simply get blinded by its obvious benefits. Our lives as citizens should be far more important to us than our roles as consumers, which means holding AI promoters accountable for the social, economic, political, and even psychological impacts of their innovation.
Better Internet search or quarterly earnings? Tell me how many jobs were eliminated to provide me with that convenience or profit. Faster access through turnstiles or getting my homework graded? Who loses access or is otherwise hurt by giving me those benefits?
Let the folks behind AI get filthy rich, but refuse to let them outsource the consequences and costs of their enrichment to us. The EU is silent on such things, so we need to speak up.
Knowing that truth is a positive takeaway, I guess.