Not so hidden in news about Elon Musk’s DOGE romp through US government offices is his intention to build an AI chatbot that will replace human bureaucrats.
Well, it is kinda hidden, since most of the news stories are guarded by online paywalls, but from what I can gather from above-the-fold snippets, one goal is to use the chatbot, called “GSAi,” to analyze spending at the General Services Administration.
The premise is that much of what the government does isn’t just corrupt but inept.
Subsequent iterations will let the bot analyze and suggest updates to its own code, thereby always keeping it one step ahead of circumstantial (or regulatory?) demands.
Another version will replace human employees, since human work is another presumptive inefficiency of any institution’s operation.
It’s a horribly simplistic solution applied to a terribly complex challenge.
At its core, it assumes that people are the problem. People have bad intentions. They make bad decisions. Their actions yield bad outcomes. Their badness resists examination and change.
People are just bad, Bad, BAD!
An AI will bring needed objectivity, efficiency, and reliability to study any problem or effect any solution. A bot will owe no allegiance to any interest group, voter constituency, or outright under-the-table incentive.
The result will be good.
Such thinking ignores, either out of ignorance or, more likely, purposeful disregard, the reality that there’s no such thing as a wholly objective AI, and that government’s operational complexities — the product of agreements, compromises, and concessions among legitimate and often competing interests — neither can nor should be replaced by an AI even if it were impartial.
GSAi will be coded with specific intentionality, and the scope of its perception of anything brought before it will be guided (and limited) by the specifics of its training data.
As AIs get more capable and even flexible, they tend to behave more like human beings.
And, as for “inefficiency,” one person’s pork barrel is another person’s weekly paycheck. An earmark might seem unnecessary or silly to someone but vitally important to someone else.
That’s not to say that there aren’t woeful amounts of outright inefficiency in government operations, just like there are in any business or community group. But the challenge of separating them from reasonable compromises — dare I say “good inefficiencies” — isn’t the result of people being stupid or evil.
It’s because it’s a complex challenge, which leads me to think that it can’t be “solved” with a chatbot saying “yes” or “no.”
And pushing people out of their jobs entirely won’t necessarily improve anything. Just imagine interacting with a robot bureaucrat on the phone or via email, only it has been coded to better know your weaknesses and thereby drive you even crazier than any human staffer could hope to do. Or bots deciding services and budgets based on, well, whatever their coders thought should be considered.
Again, we know why governments make bad decisions, and we have the capacity to change them if we choose (we, as human beings, created them). We just haven’t done it, and electing an AI to do it for us seems doomed and probably cruel.
Oh, wait a minute. We didn’t vote for AI to run things.
Now it’s too late to say no.