OpenAI’s co-founder and CEO Sam Altman was fired last week for not being “consistently candid in his communications” with the company’s board. He joined Microsoft about a nanosecond later.
Candid about what? One theory is that Altman wasn’t doing enough to mitigate the threats to humanity posed by AI development. OpenAI is the company that gave the world ChatGPT.
But Altman has been outspoken on that very topic, regularly telling reporters and legislators that AI poses existential risks as great as its promised benefits. His punchline has always been that governments need to do more to restrain companies from causing the former while encouraging them to pursue the latter.
In all candor, this AI pushmi-pullyu all but ensures both outcomes.
Having it both ways.
The pushmi-pullyu is an imaginary llama-like beast with two heads, one pointing forward and the other back. It can present a “face” in either direction, so it’s never clear whether it’s coming or going. It debuted in the first book about Dr. Dolittle, a man who can talk to animals.
Hmm…a creature whose direction you can’t determine, from novels about a doctor whose name is “do little”?
This is a perfect metaphor for the contradiction at the heart of AI development in general and OpenAI’s business model in particular.
AI has never been a risk of annihilation OR an enjoyment of benefits. It’s BOTH. Pushmi-pullyu.
No government could hope to manage the risks of AI, let alone discover them in the first place. Outsourcing that responsibility to bureaucrats is a thinly-veiled attempt to avoid oversight, not embrace it.
Instead, we’re asked to bow down to a theology of progress that preaches that technology is agnostic and that innovation should be unfettered by law or regulation.
Trust entrepreneurs and businesses to develop AI, the thinking goes. Their intentions are good, even sincere. They’re just not responsible for the outcomes of their actions, which guarantee massive upheaval to our economy, politics, and social fabric long before an errant robot chooses to launch nuclear weapons.
Another OpenAI exec echoed this very sentiment when he resigned in sympathy with Altman (and then followed him to Microsoft). It’s essentially the slogan on the company’s website:
“I continue to believe in the mission of creating safe AGI that benefits all of humanity.”
The road to hell.
There’s a lot of money to be made as AI changes everything but before it destroys us outright.
OpenAI could be worth $80 billion. Microsoft sees huge opportunities to monetize AI in its Office tools. Companies large and small have already cut staff and see even bigger benefits in replacing people with algorithms.
AI is a risk but it’s great. AI is great but it’s a risk. Pushmi-pullyu in action.
But what could Sam Altman possibly have neglected to tell his board about this status quo?
The potential negatives of AI use are probably greater, more numerous, and likely to be felt sooner than Altman or any other tech evangelist is willing to disclose. But the board must already have known that, mostly because it’s a given for everyone else in the know.
Maybe they discovered some secret take-over-the-world project that put their financial enrichment at risk?
The more things change…
It would be great if OpenAI’s board had a genuine awakening to its responsibility for the world it’s creating. The company is in a strong position not just to spout off about how uncontrollably risky its work might be, but to assert and promote the ways it will mitigate those risks (beyond insisting that its intentions are good).
It would also be great if Microsoft, as a public company, established new, transparent ways to assess and value the impacts its AI development will have on the world, both positive and negative. I could imagine an app that lets users calculate the impacts of using its AI, much the same way people can estimate the carbon footprints of their actions.
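To make that idea concrete, here is a minimal, purely illustrative sketch of what such a calculator might look like. Every figure in it (energy per query, grid carbon intensity, hours of work displaced) is a hypothetical placeholder for the sake of example, not a measured value from OpenAI, Microsoft, or anyone else.

```python
# A toy "AI impact calculator" in the spirit of a personal carbon-footprint
# calculator. All constants below are assumed, illustrative numbers only.

ENERGY_PER_QUERY_KWH = 0.003   # assumed energy cost of one AI query, in kWh
GRID_CO2_KG_PER_KWH = 0.4      # assumed grid carbon intensity, kg CO2 per kWh
WORK_HOURS_DISPLACED_PER_1000_QUERIES = 1.5  # assumed, for illustration only


def estimate_ai_footprint(queries_per_day: int, days: int = 365) -> dict:
    """Roughly estimate the energy, carbon, and labor 'footprint' of AI use."""
    total_queries = queries_per_day * days
    energy_kwh = total_queries * ENERGY_PER_QUERY_KWH
    co2_kg = energy_kwh * GRID_CO2_KG_PER_KWH
    labor_hours = (total_queries / 1000) * WORK_HOURS_DISPLACED_PER_1000_QUERIES
    return {
        "queries": total_queries,
        "energy_kwh": round(energy_kwh, 1),
        "co2_kg": round(co2_kg, 1),
        "human_work_hours_displaced": round(labor_hours, 1),
    }


if __name__ == "__main__":
    # Example: someone who runs 50 AI queries a day for a year.
    print(estimate_ai_footprint(queries_per_day=50))
```

The point isn’t the arithmetic; it’s that the inputs would have to come from the companies themselves, which is exactly the kind of transparency they haven’t offered.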
We can only hope such things will happen, because the likelihood that anything will change is vanishingly low. It’s far more likely that we’ll continue to hear more about the risks and benefits of AI as companies pursue both with unrepentant abandon.
The AI pushmi-pullyu is going nowhere.