Christopher Nolan, director of the new movie Oppenheimer, gave a lengthy interview to Variety in which he likened today’s AI developers to the scientists who created the atomic bomb:

“When I talk to the leading researchers in the field of AI right now, for example, they literally refer to this — right now — as their Oppenheimer moment. They’re looking to history to say, ‘What are the responsibilities for scientists developing new technologies that may have unintended consequences?’”

If they’re looking to history, we’re truly doomed, because most of the bad things that have happened in the world were intentional.

From government policies to individual behavior, people set out to do things that they think are reasonable if not necessary. The same goes for scientists; they identify a desired outcome and then pursue it.

Lots can go wrong.

They may fail to see the full implications of their efforts. The scope of a project, whether set by superior authorities or their own convictions, might be flawed (or an individual can simply be wrong), so impacts fall outside what they've set out to do. They can underestimate the effects of their work, and the connected, recursive reality of experience can mean they contribute to outcomes vastly distant from them in time, space, and imagination.

But labeling the moral question as one of “unintended consequences” makes the issue too vague, too amorphous to grasp and dissect. By being unanswerable, it absolves actors of culpability.

How about taking responsibility for intended consequences?

It’s a tough one because it’s so easy (and therefore common) for people to claim that they were “just following orders” or simply ignorant. AI pioneer Geoffrey Hinton has been blathering a version of such excuses, claiming that AI will likely blow up the world but that if he hadn’t helped make it, someone else would have invented the same thing.

There’s no evidence that Oppenheimer was overly troubled by his culpability in building the most destructive weapon in human history. In fact, he argued that science needed to be less encumbered by political influences, which suggests to me that he wanted everyone else’s voices left outside the scope of his technical work.

A deadly weapon that could be used by any actor for any reason. What could possibly go wrong?

Maybe Oppenheimer saw very clearly the consequences of his actions and fully intended to create a weapon that would end WWII as well as prevent any future global conflicts (the logic of mutual assured destruction). Maybe he judged the deaths of hundreds of thousands of Japanese civilians and the specter of a non-state actor detonating a weapon as “acceptable costs” for attaining some greater good.

God help us if AI developers think this way.

In fact, some of them probably already do. The philosophy supporting such thinking is called Effective Altruism. It presumes that innovators, usually relying on some technology and having some connection or affinity with Silicon Valley, can assess the trade-offs between costs and benefits, and that maximizing the latter excuses causing the former.

A ride-sharing app puts people out of work, increases pollution and traffic congestion, and increases the likelihood that riders will be abused, but the benefits of easier transportation merit the costs.

An AI makes better medical diagnoses but is used to limit treatment and insurance reimbursement to patients who don’t meet some threshold of survivability. The trade-off is considered acceptable.

You get the drift. It’s a God-complex sort of thing. And it makes intended consequences far scarier than unintended ones, however profitable they might be for developers and implementers (which is why the philosophy is so fashionable).

So, what does “responsibility” ultimately mean?

My bet is that the definition falls somewhere between being ignorant of or indifferent to the consequences of intended, purposeful actions, and deciding that the universe has nominated you to make decisions on behalf of all mankind.

Oppenheimer failed to find that middle ground. Revisiting his “moment” will do today’s AI developers no good.
