I can’t seem to avoid seeing stories about ethical AI, all of which argue for the importance of precluding bias in its decision making.

It’s the wrong argument.

First off, unbiased AI is impossible, since the people who code AI don't know the range or depth of their own biases. Few of us can even agree on what a bias is or isn't, or whether some biases are actually good (like biases toward truth or life).

A bias today could be a harmless distraction tomorrow. A current bias could blind us to a future problem.

Second, AI builds upon that biased code with its own experiences and learning. The likelihood that all of those potential inputs and conclusions could be anticipated is low. And the whole point is to empower AI to get better on its own.

The trick is to make sure we don't implement or follow its conclusions blindly.

Worse, biased people can still do biased things with data, irrespective of how perfect the data might be. They might even do it unconsciously.

So, is ethical AI all about avoiding algorithmic bias?

The stories about it are written by reporters who make biased decisions all the time. The AI in question will be used to replace employees who currently make decisions. They're biased, too.

I think that posing the question in those terms is itself revealing of bias. Or maybe something approximating religious belief. 

All hail our machine savior. 

Being ethical is hard. We fail at it most of the time and succeed only now and then, mostly because it involves complex and difficult behaviors that often appear counterintuitive or biased against one's own interests.

Empathy. Restraint. Fairness. Doing what’s right when nobody is looking. Not just following the rules but embracing them. Going beyond what’s required to achieve what’s desired. 

Ethics are a social construct, unlike morality, which is individual, and laws, which are structural. Laws may govern what we do, but ethics explain why, and morality inspires the how and when.

In technology terms, laws are the hardware, ethics the operating system, and morality the software.

In human terms, it’s a giant, complicated mess. Thousands of years of human history are a testament to how difficult it is to define ethics, let alone ensure that they’re applied consistently.

What's the right balance in the interplay of law, morality, and ethics?

There are really important ethical questions we should ask about AI right now, starting with whether or not we want to outsource important decisions to machines. 

Should your car have the capacity to decide whether or not to hit a pedestrian in order to avoid damage to you or your property?

Should your employment status fall under the purview of an AI? How about monitoring your behavior and then determining your insurance premiums or suitability for a mortgage loan?

Should AI be responsible for managing your health and dictating your treatment?

Should we encourage businesses to replace human beings with AI? Is that a good thing for our families, communities, and economy? 

If an AI makes a decision with life-altering implications for health, finance, or opportunity, who or what is responsible if it's wrong?

Conversations about bias assume that an AI can be invented that will answer such questions for us. It's a biased, foregone conclusion. A certainty. The ethical choice.

Yet all of these questions involve our human ethics, not those of machines. We can choose to use AI to make decisions, and we can choose to base our own decisions on its outputs.

Or not.

Wouldn’t it be great if we first worked on becoming more ethical ourselves? 

It might get us closer to building that perfect machine and make our lives more rewarding and our communities more sustainable along the way. 
