The world’s biggest tech companies are racing to create artificial general intelligence, or “AGI,” even though nobody can agree on exactly what it means.
Well, yes they can. It means huge profits along the way, irrespective of where, when, or how the competition ends.
It would be funny if it weren’t so terribly frightening.
The tech industry has done this to us before, most recently with its propaganda about “the metaverse” and, immediately prior, a far more robust delivery of blather about social media.
In each instance, the ultimate destinations of the development work were vague and ill-defined, described instead in fawning, hopeful language. The metaverse was going to “…radically transform the digital — and global — economy,” according to the sycophants at McKinsey. “It’s about community building, conversations and interactions,” said a tech promoter in an HBR article.
Social media promised much of the same or more, like giving “everyone free access to the information, knowledge and resources they needed to experience intellectual freedom, social and economic opportunity, better health, job mobility, and meaningful social connections,” according to author Sinan Aral.
It would be a “digital town square,” according to the rich guys who planned to own and monetize it.
But what IS the metaverse? What does it look like and how would it operate? Nobody can say. There are lots of bits and pieces to it, but since it doesn’t exist yet, there is no way to describe it.
Demanding such details on the vision for it evidences a lack of, well, vision.
Social media was the same. Was it a list of comments commenting on comments in a long, linear string? A live chat? A screen filled with information quietly curated so that it catered to an individual’s proclivities?
Granted, it let people send stuff to other people they knew, but the far greater use was (and is) to connect people to influencers and other corporate interests intent on selling things to them.
But if you didn’t understand the underlying technology, you couldn’t question it. Your suspicions were the result of your ignorance.
So, what’s the problem? Aren’t innovation and progress all about setting big, vague goals that are beyond our ability to define precisely?
A charitable answer is that a lack of detail in the scope or goals of an innovation project raises the likelihood of unintended negative consequences (developers may not know how to discern promising outcomes from potentially dangerous ones). A vague goal also means that the project will culminate in a deliverable that nobody expected or necessarily asked for.
If we don’t challenge ourselves to be clear on what we’re getting, we may be unpleasantly surprised when we get it.
A less charitable answer is that the above-mentioned risks are purposeful.
Many tech bros subscribe to an inane idea called “Effective Altruism,” which tells them they are best positioned to decide the benefits and acceptable costs of their work, as long as their goals are to improve the world (as they see it). Combine this with a desire to make oodles of money and you get the picture:
The vision they see is of self-aggrandizement and immense wealth. Those outcomes are very, very precise and clear.
It’s exactly what we’re seeing with AGI.
Meta’s Mark Zuckerberg was recently asked to define AGI and said:
[AGI can’t] “…be put in a one-sentence, pithy definition. You can quibble about if general intelligence is akin to human level intelligence, or is it like human-plus, or is it some far-future super intelligence.”
He also said that its pursuit was now his company’s goal. OpenAI, Google, Microsoft, and hundreds of startups are in the same race.
Meta’s head flack Nick Clegg reaffirmed at Davos last week that none of them know what they’re talking about:
“Ask data scientists for a definition for AGI and you get a different definition from each single one. There isn’t even consensus on what AGI precisely means.”
An AI that thinks and makes decisions like a human being is a fascinating proposition that should be extensively explored and debated, its merits and drawbacks defined and then revisited as development warranted. What would be its strengths and limitations? How would economies and societies accommodate it?
Would AGI have legal rights and responsibilities? What (or who) would be liable when it malfunctioned or, even more intriguingly, chose to do something wrong?
Having such conversations would help us answer the ultimate question: Do we want it in our lives?
But we’re getting none of that; rather, the tech industry is doing the same thing it did with the metaverse and social media. Behind its gibberish, it is announcing its intention to make tons of money as it pursues cool projects.
They’ll get what they want. The rest of us will have to learn to live with it.