Elon Musk is reportedly going to develop an advanced AI only a month after calling for a six-month pause in the development of such systems.
The manifesto he signed declared that AI “…pose[s] profound risks to society and humanity…” and that the digital minds are so smart that “…no one, not even their creators, can understand, predict, or reliably control [them].”
He’s not alone. Engineers from Microsoft, Google, Meta, Amazon, and a slew of startups working on novel AI signed the manifesto, too.
Only none of them can stop themselves from working on it.
The developers went back to their workshops. The academicians who signed the letter returned to their classrooms and conferences. The reporters covering the phenomenon got cracking on breathlessly enthusiastic coverage of robot consciousness or some other blather.
Musk has been at it for years, as evidenced by the market-leading, if sometimes imperfect, automated driving controls inside his Teslas. He co-founded OpenAI, the company driving the conversation about generative AI.
There’s a business case for advocating that your competitors stop competing with you.
Patents have the dual effect of protecting intellectual capital while locking out adjacent innovation. First-movers without legal protection often try to gain market advantage by publicly advocating for regulations that limit others from following the same path.
It’s also reasonable that nobody wants to quit work that will continue without them. People need to make a living, AI is tremendously interesting, and perhaps being a participant provides more opportunity to keep the development on the straight and narrow than does griping from the sidelines.
But the punchline remains the same: AI developers are out of control, and the industry is writhing with an urgency that rewards accomplishment over perspective, benefits over side effects, and short-term economic success over longer-term social sustainability.
It’s the Wild West of unfettered capitalism in which nobody is going to give up their guns, certainly not first.
Asking for a pause in the race for AI dominance was never going to work, however sincere its intentions.
And what are we asking them to do anyway?
Stopping AI from destroying the planet is a non-starter, mostly because we human beings have real trouble imagining such immense events. They’re just too far-off, too complicated, and too easily blurred between fact and fiction.
The informed and well-intentioned among us will make mostly symbolic gestures while the cynical are encouraged to confound them. The rest of us will simply be uninterested and paralyzed.
And those AI developers make too much money doing work that’s too cool for them to stop themselves.
More importantly, the existential threat of AI is right here, right now.
AI is already changing how and why we work and live. Right now, new functions are being introduced. More data is getting collected. Systems are iterated and revised.
Every minute of every hour of every day, AI keeps getting smarter, more conversational, and put to work observing, assessing, and perhaps managing our existence.
It’s a feedback loop. A well-oiled machine that’s running full-speed ahead.
Benevolent AI will destroy the world long before an evil AI gets around to it.
So, instead of trying to forestall some threat tomorrow, we should explore the implications of its uses today. Spend less time trying to decipher the complexities of its development and shine more light on its economic, political, social, and psychological effects.
I think that would start by taking the conversation out of academia and corporate boardrooms and developing the research and public spaces where our opinions can get informed and expressed. Create ways to include voices in the conversation that aren’t sponsored or otherwise biased toward AI acquiescence.
Unions. Religious communities. Independent thinkers.
Instead of relying on regulations that nip at the edges of AI development or risk becoming outdated the moment they’re passed, we could innovate ways to moderate its rollout by our choices as consumers, investors, and even entire communities.
Could we demand that products or services run by machines instead of people be overtly identified, so we can decide whether or not we want to buy them? Could companies be required to disclose their plans for automation and their effects on people (think Big Oil disclosing the impact of carbon on the planet)?
Could cities put corporate plans to automate cars or deliveries to a vote? Force public debate on initiatives to use AI to make our lives “easier”?
We don’t need to wait for AI developers to learn how to control themselves. It won’t happen, no matter how many letters we write to them.
We can instead start talking about what’s happening and whether or not we like it, and take action.
We can and should control ourselves.