Even if AI could do anything we can do as well or better than we can do it, it can’t die.

It’s a morbid thought, but the idea that our thoughts (and subsequent actions) are informed and tempered by our awareness of our mortality might help us to talk more cogently about what “better” means.

All of us live in our own End Times. Our days are numbered whether or not the rest of the world follows suit. Our ability and willingness to recognize this fact is reflected in everything we do.

Those reflections vary widely. Some people pretend to ignore it. Others hold beliefs to help cushion their existential dread. Some folks get inspired by it, like Steve Jobs, who said that he looked in the mirror each day and asked himself, "If today were the last day of my life, would I want to do what I do today?"

But it’s unavoidable. Each day we get closer to dying, tracking our ages almost as if we’re keeping score. We’re physically built not only to fall apart but to create new life.

By dying one day, we are given reason to ponder our purpose for living the rest of them.

AI has no such awareness. It can’t exist in the same way that we do, despite any wizardry of coding that might simulate it. It executes commands. AI can’t fear being turned off because it’s not conscious of having been turned on.

So, if and when AIs can make decisions “as well or better than a human being,” they won’t make them as human beings. The smartest AIs will have no personal beliefs or sense of responsibility, no underlying convictions or visceral awareness of the implications of their actions, no outlier knowledge or experience that might otherwise not be included in the calculations.

They won’t have to live with the consequences of their actions because they’re not alive.

AIs with artificial general intelligence, or “AGI,” may one day fully mimic human capabilities, and the boffins behind the tech are racing to realize it. But even if its developers program it to simulate fear of getting unplugged, that won’t be the same as possessing a personal awareness of death.

Human beings make incredibly stupid and outright cruel decisions for the worst possible reasons. Our institutions are imperfect, and our personal interactions fail as often as they succeed because of it.

But we also do wonderful things for surprising yet “right” reasons. We make decisions that improve our own lives and the lives of those around us even when the personal, economic, or social incentives suggest we should do otherwise.

We add emotion and feeling to our interactions that may or may not need to be included, but which make those moments more real, more meaningful. More alive.

Because it matters to us. How we spend our days matters to us.

If today were the last day of my life, I’m not sure I’d want an AI deciding what I’m going to do with it.
