I was briefly encouraged last week by news that the guy who’d quit OpenAI because of its lapses in ethics had raised a billion dollars for his new company named Safe Superintelligence (“SSI”).
In late 2023, news broke that Ilya Sutskever, one of OpenAI’s co-founders and a board member, had led the ouster of CEO Sam Altman because the guy was moving too fast in AI development and too slowly in communicating its risks.
The company’s investors, partners, and fan base were shocked…after all, running along a cliff edge with your eyes closed was central to AI innovation…and within weeks, Sutskever and his fellow mutineers were out at OpenAI and the company had doubled down on its pursuit of secretive breakthroughs in AI that may or may not annihilate humanity.
So, I was encouraged when Sutskever’s SSI announced that it had raised another $1 billion and was valued at much more than it had been when I’d last checked.
I immediately concocted an elaborate fantasy for what was happening…
…a principled AI innovator wants to build AI that has morality and a respect for human ethics built into (and inseparable from) its design and function. Not only will this AI be “safe” because it will be incapable of doing harm, but it will also operate as an advocate and protector for doing good.
SSI is building an AI that will stand with humanity, like a real-life Optimus Prime, policing the AI Decepticons intent on stealing our privacy so they can control our thoughts and behaviors, and otherwise exploit humanity as nothing more than fodder for the machinations of businesses and governments.
There was no reason to fret over OpenAI or the other contenders for name sponsorship of the Apocalypse, as SSI would protect us.
Then I blinked and the fantasy was gone.
SSI’s single-page website explains, well, nothing, though it repeatedly references its monomaniacal focus on developing “safe superintelligence” that seems dependent on not worrying about “short-term commercial pressures.”
In other words, it’s not going to unleash its AI products on the world until it’s reasonably sure that they’ll do exactly what their owners want them to do. “Safe” has nothing to do with what its AI might do to change every conceivable aspect of our public lives or private selves, its effects no more moral or ethical than its competitors’ offerings.
It’s about selling a better product.
In this sense, safe superintelligence is an oxymoron like loyal opposition, jumbo shrimp or, my favorite, disposable income.
SSI isn’t promising that it won’t destroy our world, just that it’ll do so more responsibly; my brief fantasy was a nice pause from the relentless march toward that end.