The folks responsible for inventing AI that could destroy the world have once again called for other folks to stop them from doing it. 

The statement, signed by many of the big names in AI research, development, and deployment, was issued by the Center for AI Safety and reads:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” 

It’s just the latest in a series of manifestos and media interviews in which many of the same people have riffed on the same theme.

The declarations are useless because they’re hypocritical, delusional, or disingenuous. Or all three.

They’re hypocritical because nowhere does a first person pronoun appear in any of their warnings. There’s no “I” or “we” behind the creation of a potential AI monster. It just “is,” as if AI emerged spontaneously from some digital goo somewhere. 

How can their declarations have any legitimacy if they don’t admit their guilt for creating and perpetuating the threat? Even if some of them are sincere in a hope to rehabilitate their souls, doesn’t the absolution of sin first require that the sinner admit to it? 

Acknowledging the existence of risk in the world is kind of like noting that water is wet.

And who’s supposed to take responsibility for mitigating that risk of extinction? Oh yeah, somebody else. 

It’s like the hoods in West Side Story relying on Officer Krupke to keep them from holding their next dance competition.

And they’re delusional if they think government, markets, or any other parties not directly responsible for AI development or deployment are up to the task.

Government can never stay current on what’s happening in AI innovation, nor can it hope to understand its future implications before they’ve become apparent. Being non-experts in a field that is flush with technobabble doesn’t help. Jeez, those folks can’t even agree on the time of day.

Markets are skewed toward valuing benefits, not risks, and VCs are biased toward promoting glittery stories of promise instead of gnarly warnings of doom. The incentive for businesses right now is to blame AI when they fire people and to build it to increase their profits.

There’s no number attached to the cost of AI risk, so who cares? We all live with the certainty that each of us will become extinct one day. That risk doesn’t get factored into quarterly earnings reports, either.

Governments have no visibility into what’s going on in the heads of AI innovators or what’s already lurking in the wild, so markets have no reason to value those unseen risks.

Officer Krupke isn’t in the playground or parking lot listening to their plans.

Experts are disingenuous because they’re the ones best positioned to mitigate the risk they’ve created, and they know it.

They could start by getting far more specific about what it is, exactly. I get the Terminator or War Games AI pushing the nuclear button. But what else? Are they worried about the AI in automated cars deciding to crash instead of drive, or an AI developing a new drug slipping a secret poison into the mix?

Does the AI have to kill all of us or only a portion of the population to qualify as an “extinction” event? What about killing house cats or every tree in the Amazon?

Does blowing up the employment opportunities for millions of people qualify as an extinction of our ability to earn a living? What about making every future generation so dependent on AI-assisted living that they are incapable of living without it (and thereby have to pay for the privilege)?

The thing is, the AI they’re creating right now could do all of the above, just as their employers and funders stand to make oodles of money on the technology before it does a bad thing, either on purpose or by mistake.

Only they don’t talk about the specifics; instead, they issue their scary statements and everybody keeps building scary tech and making (or hoping to make) scary money.

They could tell us that they take responsibility for their actions and are voluntarily doing so-and-so to mitigate them, but instead they have only one thing to say to us:

“Krupke you!”
