It’s Time To Stop Talking About Ethical AI

I participated in a panel on AI ethics at a conference in Amsterdam today and realized that we need to stop talking about AI ethics.

We need to talk more about ethics in general and business ethics in particular.

My conference experience amounted to an acid flashback to the early days of social media. Conferences were consumed with excitement about the new technology and all of the world’s problems it would solve. Experts appeared with grandiose proclamations about how all of “the old rules” of conduct and consequences no longer applied. Consulting firms pitched expensive gigs to help companies set up new departments and practices for social that followed “the new rules.”

Fast-forward a few years and all of those self-styled experts are gone, the stand-alone corporate groups have been folded into marketing departments, and the use of social media has wreaked unbelievable and likely irreversible harm on our public and private lives.

The promised public square of social media has become a cage match, if not simply a personal hell.

Now, here I am at a conference surrounded by experts who lecture on doing AI right or ethically. Management consultants declare the new this or that and are happily willing to take client money to chat about it. Companies are rushing headlong into implementing AI as if it were a special function or force of nature, populated by people who are about as controllable as wild animals.

Oh, and AI will solve all of the world’s problems.

Why don’t we talk about ethical transportation or toothpaste?

AI isn’t the same thing as a new widget or other technical artifact, for any number of reasons but primarily because it can learn and choose to do new things. It’s as if your light switch could decide it wants to make coffee, or watch sports.

But corporations that use AI are still corporations. Every other thing that a company does, from sourcing and making things to employing people, is bound by laws, regulations, corporate practices and, most of all, the moral code of the folks who take actions on its behalf.

I’m not talking about a high-minded moral code, though I kind of am, but rather the principle that people should be responsible for their actions.

Consequences have consequences.

This basic premise underlies existing corporate operations. There’s no ethical transportation practice separate from the business practices of companies that make cars, parts, or related tech. Toothpaste is ethical insofar as its makers comply with the expectations of their regulators and of their own inner souls.

Companies are responsible for what they do. So are individuals. The idea isn’t separate from the overall business. It’s central to it.

Are the people coding AI no better than wild animals?

AI, like social media before it, is treated differently. Just take the concept of “explainability,” which in the AI world describes the need to identify and share things like data sources and the assumptions and priorities of your transformer. 

Substitute toothpaste for AI and you get the same requirement, don’t you? Companies have to explain where their stuff comes from and what it does. 

What’s different with AI is that we’ve been taught to assume that the people who code AI are no better than wild animals. They have no morals, no personal convictions beyond the single-minded pursuit of tech that does stuff, the bigger the better. There are noxious philosophies that support this assumption, like Effective Altruism, which misleads technologists into believing that they have the authority to solve any problem or right any perceived wrong.

There’s no good way to manage the AI that comes out of this approach, no way to surveil or control every thought and action of every coder.

Is it time to get back to basics?

If we looked at AI as part of business operations and not as a stand-alone pursuit, we might better see its promises and shortcomings and empower organizations to do something about them. If we expected companies to be as responsible for their AI’s impacts as they are for the other outputs of their operations, wouldn’t that improve their decisions involving AI? If we talked to AI developers and deployers as human beings and not techno-zealots, and talked more about the ethics of behavior and consequences instead of some rarefied version catering to the exigencies of AI, wouldn’t we see more ethical actions?

I think it’s time to stop talking about ethical AI and talk more about ethics and business operations.

PS: The sooner companies bring AI back into the bucket of other operations for which they’re responsible, the better prepared they’ll be for what’s to come. Here’s a prognostication worth exactly what you paid for it: In a few years, the greatest modifier of business behavior relating to AI won’t be government regulation but the liability expectations of the public markets. Blather about some special dispensation for AI, responsible in an otherwise utterly irresponsible way, will hold no water in court.

You heard it here first. 
