Move Over AI, We’re Robots Too?

Earlier this year, brain researchers found that spots in the area of the brain that governs physical movement intersect with the networks used for thinking.

Mind-body duality questions solved for good. The mind is what the brain does. It controls our bodies with commands, the way a CPU controls robot arms and legs.

Not.

It’s a recurring delusion in neuroscience these days, probably because it’s so powerful. We can only study and understand what we can perceive and test. When it comes to human beings and our minds, all we have to look at is what’s physically contained in our skulls and the filaments connecting it to the rest of our bodies.

Mind is therefore a product of brain. It’s a self-evident, a priori truth. Machine references are more than analogies; they’re descriptive. Genetic code is computer code.

We’re all machines. This means that AI minds will one day equal, if not exceed, ours, because those machines can think faster and have bigger memories.

And, if there’s anything we can’t explain about our minds, such as our ability to have a sense of “self” or a presence in space and time that we feel as much as perceive, well, that’s just tech we can’t yet explain.

One of the study’s authors explained that the discovery “provides additional neuroanatomical explanation for why ‘the body’ and ‘the mind’ aren’t separate or separable.”

And there’s the rub. There is no explanation in the study, or in any scientific research. 

More broadly, science provides descriptions. It reports the ways things work. It’s repeatable and reliable, and it has given us cars on our roads, electricity in our homes, and meds in our bloodstreams (among a zillion other aspects of modern life). 

It is undeniably accurate and real. But it doesn’t explain how things work, let alone why.

Just spend a few minutes on the Wikipedia page for electricity and you’ll see what I mean. Electricity has charge, force, and current, but these are all descriptive qualities, not explanations. We don’t know why electrons move from one spot to the next; we just know that they do.

Same goes for all of the building blocks of reality. Atoms are attracted to one another by various forces, but the WHY of that attraction is a mystery. Gravity is mutual attraction between things that possess mass or energy, yet the best description of how it works (the curvature of spacetime) is itself just another description, one that lacks an explanation of its own.

Science can describe life but can’t explain it. It can identify individual components of our genetic code and attach them to propensities for specific outcomes, but has no explanation for how or why. Locations for mental functions have been mapped in our brains, but there’s no such thing as a biological computer program that operates them. 

We don’t have a clue how the squishy blob in our heads records music or can count time in hours and even minutes. We can only report that it does.

The scientific delusion that conflates description with explanation has at least two implications for AI:

First, it overstates the potential for AI consciousness. The reasoning goes: since self-awareness popped out of our biological machine minds, it’s only a matter of time before the same thing happens to AI. We don’t know how, or when, or why, but simply that it will.

When researchers can’t describe how a generative AI made a decision, they claim it’s proof that such evolution is already taking place.

Sounds like wishful thinking to me.

Second, it understates the complexity of human consciousness. What if there are material limits to what brain functions science can describe? Already, we’re told we live in a Universe that is infinite, which defies explanation, and that the matter we’re made of isn’t tangibly real so much as an uncertain possibility of existence, which doesn’t even fully work as a description.

So, while researchers might be able to provide ever-finer pinpoints of the physical whats for every mental quality we associate with being alive, they could still leave us wanting an explanation of how or why.

I think this conundrum reveals the mechanistic model of the human brain/mind as less model than metaphor. I’m not suggesting that the limits of scientific description mean we need to invent magical explanations instead.

Rather, I wonder if some things are simply unknowable, and that a little humility would give us a better shot at coming to terms with how and why things are the way they are. That includes what we expect from AI.

If I’m right, the hope that continued scientific research will prove that every question is answerable is itself probably a delusion.