Is hoping for consciousness to emerge from an ever-more-complicated AI like waiting for a monkey to create a verbatim copy of Hamlet?
I think it might be, though the toffs building and profiting from AI believe otherwise.
Fei-Fei Li, an AI pioneer, recently wrote in The Economist that teaching AIs to recognize things and the contexts in which they’re found – to “see,” quite literally – is more than the next step that will allow machines to reach statistically reliable conclusions; she believes those AIs will
…have the spatial intelligence of humans…be able to model the world, reason about things and places, and interact in both time and 3D space.
Such “large world models” are based on object recognition, which means giving AIs examples of the practically infinite ways a certain chair, say, might appear across different environments, distances, angles, and lighting conditions, and then coding ways for them to see similarities among differently constructed chairs.
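To make that recipe concrete, here’s a minimal sketch in Python of the take-one-object, generate-many-appearances idea (the torchvision transforms and the placeholder chair image are my illustrative stand-ins, not anything from Li’s essay):

```python
# A toy version of the "same chair, infinite appearances" idea:
# take one image and generate many varied views of it for training.
from PIL import Image
import torchvision.transforms as T

# Stand-in for a photograph of a chair (a blank placeholder image).
chair = Image.new("RGB", (256, 256), color=(120, 80, 40))

# Each transform mimics one real-world source of variation.
augment = T.Compose([
    T.RandomResizedCrop(224),                     # distance and framing
    T.RandomRotation(degrees=30),                 # viewing angle
    T.ColorJitter(brightness=0.5, contrast=0.5),  # lighting
    T.RandomHorizontalFlip(),                     # mirrored viewpoints
])

# Eight "different" chairs, all derived from the same one.
views = [augment(chair) for _ in range(8)]
```

Real systems learn from millions of labeled photographs rather than one jittered placeholder, but the principle is the same.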
From all that data, they’ll somehow grasp form, or the Platonic ideal of chair-ness, because they won’t just recognize chairs but understand them…and not rely on word models to pretend that they do. Understanding suggests awareness of presence.
It’s a really cool idea, and there’s scientific evidence outside of the AI cheering section to support it.
A recent story in Scientific American explained that bioscience researchers are all but certain that human beings don’t need language to think. It’s obvious that animals don’t need words to assess situations and accomplish tasks, and experiments have revealed that the human brain regions associated with thought and with language processing are not only distinct but don’t reliably overlap.
So, Descartes and Chomsky got it backwards: “I am, therefore I think” is more like it.
Or maybe not.
In what code is thought executed? For that matter, where does the awareness that is aware of its own thinking reside, and how does it function?
Nobody knows.
I have long been fascinated by the human brain’s capacity to capture, correlate, access, and generate memories and the control functions for our bodies’ non-autonomic actions. How does it store pictures or songs? The fact that I perceive myself somehow as its operator has prompted thousands of years of religion and myth in hopes of explaining it.
Our present-day theologies of brains as computers provide an alternative and comforting way of thinking about the problem but offer little in the way of an explanation.
If human language is like computer code, then what’s the medium for thought in either machine? Is spatial intelligence the same thing as recognition and awareness, or is the former dependent on language as the means of communication (both internally, so it can be used to reach higher-order conclusions, and externally, so that information can be transmitted to others)?
And, if that mysterious intersection of thought and language is the algorithm for intelligence, is it reasonable to expect that it will somehow emerge from processing an unknown critical threshold of images?
Or is it about as likely as a monkey randomly replicating every word of a Shakespearean play?
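The odds, for what it’s worth, are easy to sketch. Assuming a 27-key typewriter (26 letters plus a space), uniformly random keystrokes, and roughly 130,000 characters in Hamlet – my round numbers, not anyone’s careful count – the arithmetic looks like this:

```python
import math

keys = 27          # 26 letters plus a space bar
length = 130_000   # rough character count of Hamlet

# Probability of one flawless attempt: (1/27)^130,000.
# Far too small for a float, so work in base-10 logarithms.
log10_p = -length * math.log10(keys)
print(f"about 1 in 10^{-log10_p:,.0f}")  # about 1 in 10^186,077
```

For comparison, the observable universe is thought to hold around 10^80 atoms; the monkey’s task is worse by some 186,000 orders of magnitude.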
Ms. Li says in her article that work premised on that emergence of intelligence is already yielding results in computer labs, and that we humans will be the beneficiaries of that evolution.
For an essay that appeared in The Economist’s annual “The World Ahead” issue, shouldn’t there have been a companion piece pointing out the possible inanity of waiting for her techno-optimist paean to come true?
More to the point, how about an essay questioning why anybody would think that a monkey replicating Shakespeare was a good idea?