There Aren’t Two Sides To The AI Debate

We have been led into a dead-end debate between two opinions of AI.

On one side are the techno-optimists who believe that advancements in AI will usher in a new era of convenience, safety, and prosperity. On the opposing side are the Neo-Luddites who think that machines will destroy our livelihoods and then our very existence.

AI will either create a New Eden or usher in the Apocalypse. Magical promise or existential threat.

Only there is no debate between these two extremes. In fact, they don’t exist much beyond the machinations of people who have something to gain from casting the debate in such terms.

AI will create loads of benefits and huge risks, and it will do so by drastically remaking our world.

There is no debate about this fact. But there should be a debate about what we’re going to do about it.

AI is already changing how decisions get made and work gets done. Intelligent machines have been replacing human workers at desks and assembly lines for years. The capacity of LLMs to mimic human awareness and reasoning already challenges our conceptions of our own uniqueness.

And yet our public debate rarely gets past declarations about why these changes are good or bad.

Our public policies focus on doomed attempts to ensure that AIs operate “fairly” in specific instances while the unfairness of their widespread use goes untouched.

Management consultants issue reports on the massive economic potential for using AI that send the stock market into rapturous swoons.

Governments establish committees and task bureaucrats with applying a light touch to AI development so as to not stifle innovation.

Academics blather mostly nonsense about AI, their jobs dependent on the very companies that stand to profit from its use.

AI experts pop up periodically to warn of impending doom as they shrug away their responsibility for it (and raise more cash to ensure it).

Where’s the international debate about where we’re going to get all of that electricity that AIs will consume? Where are the national Manhattan Projects devising ways to preclude or ameliorate the massive job displacements that are central to AI’s success? Why isn’t every organized religion encouraging their leaders and flocks to actively reexamine their spirituality?

The debate about AI should include all of these voices and focus on what we will get, and what we will have to pay, as use of the technology gets more common and consistent.

Those impacts are certain. So, there’s no need to love AI or hate it because there’s no debate.

Every day that passes in which we don’t acknowledge this truth surrenders the conversation to parties who see personal gain in casting it as a debate between two irreconcilable extremes.

And that’s the true existential threat.

AI Growing Pains, Or The Shape Of Things To Come?

Two recent events involving AI hallucinations lead to a terrifying conclusion.

In the first case, a chatbot used by Air Canada promised a refund to a customer in violation of the company’s refund policy. The customer sued, and the company claimed that the AI was “a separate legal entity that is responsible for its own actions.” The court sided with the customer (and the chatbot was promptly fired).

In the second instance, Google’s Gemini AI invented racially diverse images of Nazi soldiers and America’s Founding Fathers. Political warriors claimed “wokeness” and Google said it was a programming error.

Now to the terrifying parts.

One of the main reasons Nvidia and the leading AI developers are raising (and making) so much money is because of the promise of replacing human beings in the workplace. AI will do the work of tens of millions of people, though without the management headaches or expectations for healthcare.

Companies are spending billions in order to reap billions more in productivity and profitability benefits. Holding them responsible for the actions of their bots will significantly slow that transformation while opening up executives and board members to potential liability exposure.

Some wags suggest that excitement about AI adoption is responsible for keeping the stock market healthy. Slowed adoption could hurt that overall performance, not to mention add liability-protection costs to the fevered dreams of individual businesses planning to install bots where people sit.

What’s even scarier is the possibility that AIs might never become more reliable or fair than people. This will render moot many of the promised social benefits of AI and put at risk the scientific ones, too.

Do you want to use a new drug that was “tested” by an AI that might have fudged the development process due to some “error” in its code? How about trusting your autonomous car to make the safest decisions at every instant?

Just think of all the school kids who will write papers quoting historical figures who never said (or looked like) what AI claims. Oh, wait, AIs will write those papers, too. Never mind.

Still, the worry isn’t so much AI put to nefarious uses as its inability to perform truthfully and reliably in otherwise innocent, everyday uses.

You can just imagine AI evangelists claiming that there are no problems here, just hiccups. Every observed imperfection yields a fix, along with safeguards against dozens of failures that haven’t happened yet. AI will always get better, so there’s nothing to worry about.

What I find most terrifying is that all of us have been enlisted as subjects in this grand experiment. We aren’t informed about what’s happening, don’t possess the knowledge to understand or assess it, and haven’t an ounce of agency to do something about it.

The development process will never end. The experiment will always continue.

AI growing pains are the shape of things to come.