If you want to read a summary of what’s wrong with the conversation about AI risk, read Google’s Policy Agenda For Responsible Progress In Artificial Intelligence.

You’d think they weren’t responsible for it. And that’s the point.

The white paper, which came out last month and is currently being hyped in ads in major media like The Economist, distills the conversation it wants to have into three topics: Opportunity, Responsibility, and Security.

The spin starts with an introduction that talks about AI as if it had a will of its own, promising to “significantly improve the lives of billions of people” and being so important that any pause in its development risks “falling behind those who embrace its potential.”

It uses a generic, muddled “we” to reference who’ll enjoy those benefits and who needs to address the commensurate risks. So, right up front, Google takes ownership only of writing the white paper.

Yet the company is one of the primary drivers of generative AI development and use. It’s putting AI into just about everything it does. Google sits on an incomprehensible amount of data that it can use to train its AI, and collects more of it every time someone uses one of its products.

We aren’t driving AI. They are.

The tone of the Introduction flows neatly into the first point of its manifesto, which is all about “unlocking opportunity” by maximizing AI’s economic promise.

Of course. That means Google’s opportunity to make money.

What does that opportunity look like for the rest of us? Sure, there are the obligatory references to solving “big problems” and doing sustainable things, but the overwhelming benefit is business productivity. AI will increase business productivity “despite growing demographic challenges” (i.e. replace human workers with robots), thereby allowing the remaining people to “focus on non-routine and more rewarding elements of their jobs.”

Those jobs will be monitoring the AI before it takes over fully. Otherwise, it’ll free millions of people to seek work in areas where super-smart machines can’t function, or where the work is too risky for robot owners to chance their investments (think low-skill, high-risk environments). Or maybe it’ll unleash a huge mob of newly-minted human poets.

Until AI gets expert at that, too.

Very quickly, the section switches to tagging “policymakers” with the responsibility to “minimize workforce disruptions” caused by the inevitable transition to a robot-run future. Governments should prepare to manage all of the economic and social unrest that Google’s AI products and services will help create.

We need to be prepared for what Google plans to do.

This thinking carries through to the second section: not only will there need to be new laws, but also more basic research and innovation into how to apply them. The language steers clear of naming what the issues actually are, other than that it’s important that everyone can trust AI to do what Google and other proponents of the technology promise.

It makes a quick reference to “watermarking” content that’s created by AI, which is great obfuscation since almost all digital content already comes courtesy of AI (pop singers’ voices are corrected, movie monsters are created, and sports camera angles are controlled, just to name a few examples).

Then, the section goes beyond outsourcing responsibility to government and suggests that “new organizations and institutions” would let “leading companies” come together.

In other words, we need to do a lot to respond to the problems and risks that Google will help create. 

The last section is about security, which extends the prior section’s themes to the risks of cyberattacks and other nefarious applications of AI.

The ugly fact is that AI, perhaps combined with nascent quantum computing tech, will be able to guess every password imaginable. There are recommended standards for protecting personal apps, websites, and corporate systems from such attacks, but the iterative nature of AI upskilling means we are more at risk of hacks than we mere mortals can imagine.
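To put my own rough numbers on it (the white paper doesn’t): an eight-character, all-lowercase password has 26^8, or about 209 billion, possible combinations. Off-the-shelf GPU rigs running cracking tools already test billions of guesses per second against fast, unsalted hashes, so that entire space falls in under a minute, no quantum computer required. Longer passwords and bigger character sets push the math back out, which is exactly why those standards keep telling us to use them.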

Add to that the risks of AI gaming systems that we might never see at all, like simulating employees in company records and collecting their pay and benefits, or siphoning small amounts of money from stock markets by outguessing the next nanosecond of trading.

And then there’s the Big Kahuna of existential AI risk.

An AI could decide to kill all of us, either as the answer to some question or a byproduct of the answer to another one. Given the right amount of autonomy and authority, it wouldn’t need a bad actor human to trigger the event.

What are we supposed to do about all of those risks? Google says “security is a team sport” and that all of us have to cooperate and work together to help stave off the likelihood of our impoverishment or destruction.

I encourage you to read the white paper. It’s shocking in its bold assertion of what’s going on. Google sees AI as inevitable and transformational, and it intends to pursue it with all due speed. There’s tons of money to be made.

The problem is that those profits will come with massive economic and social disruption, and that’s just the stuff we can clearly predict. Nobody knows how the robot takeover will impact us as individuals, changing how we think about ourselves and one another. 

And then there’s the problem of a mass extinction event. 

Thanks to Google, we’ve got a lot to worry about.
