An AI that can do everything a human being can do only better, faster, and without need of rest or healthcare is going to possess “God-like” qualities, according to one AI investor.
Such an AGI, which stands for “artificial general intelligence,” should scare the hell out of us. The investor said:
“God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race.”
I think our problems are far more immediate.
AI already makes loads of decisions for us, many of them simple and pedestrian. Thermostats. Anti-lock brakes. It also does more complicated things, like all but fly airplanes. It matches consumers with stuff they might want to buy, and it entertains us with conversations that mimic those that we might have with one another.
It’s also putting loads of people out of work and changing work for those of us who still have jobs. It affects everyone in ways far beyond making shopping more convenient, raising questions about our agency, our freedom, and even our very sense of purpose and uniqueness.
This reality scares the hell out of me, especially since we don’t ask those questions.
Instead, we ruminate about AI superintelligence and the destruction of the world. These conversations are more like discussing science fiction than political, economic, or social reality.
No, it’s like discussing religion.
Underlying the development of AI is the belief that it can be coded to act more responsibly and reliably than people. No task or situation is too complex to be deconstructed into a series of “if this happens, do that” commands, assuming the AI possesses enough data to reference or extrapolate every possible decision it might make.
In this worldview, there is no chance in the universe. No spontaneity. No circumstance that can’t be expressed as a system, its variables mapped and its outcomes predicted. Surprises are simply the illusion of novelty.
So, taking full driving control of a car or managing a country’s nuclear missile arsenal is no different from adjusting the heat in your home; it just requires more data and processing capacity.
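The “if this happens, do that” framing the technologists rely on can be made concrete with a toy thermostat rule. This is a hypothetical sketch for illustration, not any real product’s logic; the function and threshold names are invented here:

```python
# A toy illustration of the "if this happens, do that" view of AI:
# a thermostat reduced to threshold rules. All names are hypothetical.

def thermostat(current_temp: float, target: float, tolerance: float = 0.5) -> str:
    """Return the action a simple rule-based controller would take."""
    if current_temp < target - tolerance:
        return "heat_on"
    if current_temp > target + tolerance:
        return "cool_on"
    return "idle"

print(thermostat(18.0, 21.0))  # well below target: "heat_on"
```

The claim under examination in this essay is that driving a car differs from this only in the number of rules and the volume of data, not in kind.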
No otherwise “moral” decision is anything more than a balancing of interests within the guidelines of possible outcomes. Skip the imperfections that come with human decision-making and you don’t just get better performance; you get greater morality.
All it takes is an AI that has enough data to think and act like a human only better, faster, and more reliably. It’s just a matter of time before we get AGI.
Technologists rant about it with religious fervor, as if it’s a foregone conclusion.
It isn’t. Expectations for the arrival of a completely aware AI are a product of faith, not fact.
We’ve heard similar stories about miraculous appearances over the centuries.
Today’s AI applications are often shockingly imperfect, mostly because of the imperfections of the humans who code them. This is OK because they do tasks that don’t require perfection or 100% reliability.
And, when they replace human job functions, the AI just has to be measurably less imperfect than people.
This reality suggests that AI performance can be improved, but it does so in a vacuum, removed from any understanding of what human thought or consciousness mean.
We have no idea how our sense of “self” operates. Debates about whether emotions are identified or simply experienced remain unresolved, as does their role in decision making. We don’t know how our brains store the rhythm of songs or how our minds find connections between odors and memories.
There is zero consensus on any of these questions.
Looking further afield, there are emerging debates about intelligence in other animals, and even arguments that purpose is innate in inanimate objects and forces of nature.
We can describe the what of intelligence but often not the why or how.
Technologists have no such problem because they’ve simplified their challenge with the presumption that the human brain and mind are one integrated computer: a machine that can be deciphered, given enough data and time.
DNA? The same as computer code. Intelligence is just a set of “if this happens, do that” commands.
It’s an analogy, not a description. A declaration from the faithful.
So, I’m not scared about the imminent arrival of AGI.
I’m worried that we already give AI God-like credit it doesn’t deserve.
Technology evangelists are already transforming our world with it.