As AIs become more human-like, won’t that include sharing our imperfections?

There’s lots to talk about when it comes to the mad race to create an AI that’s aware of itself and its surroundings like we are, and thereby possesses the capability to better learn on its own (the goal is to achieve “AGI,” which stands for artificial general intelligence).

One of the first questions is who the hell is asking for it, other than the people who love the academic challenge or potential financial payoffs. Those rewards will go to a very, very select few, yet the risks of their endeavors will be borne by all of us.

But let’s contemplate an AI that is more human-like.

Will we better understand its operations? No, it’ll be harder. There is already confusion over how LLMs reach some of their conclusions. Expand the available data and a machine’s ability to process it, and that murkiness only deepens.

Will we better predict its likely decisions? No, they’ll be more variable. Data science relies on correlation to suggest causality, and probability to indicate certainty. Add a layer of judgement on top of that processing and you get more potential outcomes.

Will we better rely on its counsel? No, we’ll take greater risks. AGI will be sold to us as a constant companion ready to advise on every decision we make. It’ll speak in a comfortingly familiar voice and seem aware of our intentions and not just our stated needs.

Yet we will have no better understanding of how it works or reaches its conclusions, nor what might influence those outcomes or profit from them. Isn’t this the problem we already have trusting or relying on one another?

The whole idea of modeling an AI on our understanding of what “human-like” means is flawed.

We don’t understand how human consciousness arises or where it resides. We can see the biochemical markers of its agency (and use that framework to design computer neural networks), but we don’t even know the sequence of events in its operation.

Do we feel pain or label it? How does the brain store images or music, and how do we replay them? No two people experience the same thing in exactly the same way, or value and apply that knowledge in the same way going forward.

Mind must emerge from brain, right? There’s loads of philosophical blather that states this perspective as if it were a certainty (Daniel Dennett was perhaps the most articulate advocate of this worldview), but it often reads like the convoluted arguments used to support a religious belief, not state a scientific fact.

We are machines because that’s all we have to work with, so the answer isn’t unknowable, it’s just unknown for now, or so the logic goes.

Even if that’s true, and I happen to know (i.e. believe) that it’s not, we human beings are crappy machines.

Situational awareness doesn’t make us better at making decisions, it just makes them more complicated and nuanced. A sense of self adds personal bias, interests, and self-preservation to every decision. Sometimes the best communicators among us are the worst thinkers.

The one thing we’re good at is doing lots of things passably well, and more than a few things terribly badly.

So, instead of building an AI that does more things less well, how about drilling down and making AIs that do fewer things, only better?

That seems to be Apple’s approach, as evidenced by its recent announcements.

We don’t need human-like AI to help us analyze and process data. We need AI that is more computer-like, operating with data, algorithms, and guidelines that are minimized to make them more reliable and knowable. Better machines, not less-bad versions of ourselves.

Go ahead and make it better at receiving imperfectly communicated queries and replying with perfectly human-sounding responses, but otherwise do everything to avoid enabling it to think like we do.

Building AGI is an engineer’s wet dream masquerading as a vision for the rest of us.

Do we really want it?
