Government For AI, By AI, And Answerable To AI

Not so hidden in news about Elon Musk’s DOGE romp through US government offices is his intention to build an AI chatbot that will replace human bureaucrats.

Well, it is kinda hidden, since most of the news stories are guarded by online paywalls, but from what I can gather from above-the-fold snippets, one goal is to use the chatbot, called “GSAi,” to analyze spending at the General Services Administration.

The premise is that much of what the government does isn’t just corrupt but inept.

Subsequent iterations of the bot will allow it to analyze and suggest updates to its code, thereby always keeping it one step ahead of circumstantial (or regulatory?) demands. 

Another version will replace human employees, since human labor is presumed to be yet another inefficiency in any institution’s operation.

It’s a horribly simplistic solution applied to a terribly complex challenge.

At its core, it assumes that people are the problem. People have bad intentions. They make bad decisions. Their actions yield bad outcomes. Their badness resists examination and change.

People are just bad, Bad, BAD!

An AI will bring needed objectivity, efficiency, and reliability to study any problem or effect any solution. A bot will owe no allegiance to any interest group, voter constituency, or any outright under-the-table incentive.

The result will be good.

Such thinking ignores, whether out of ignorance or, more likely, purposeful disregard, two realities: there’s no such thing as an AI that is wholly objective, and government’s operational complexities (the result of agreements, compromises, and concessions among legitimate and often competing interests) can’t and shouldn’t be replaced by an AI even if it were impartial.

GSAi will be coded with specific intentions, and the scope of its perception of anything brought before it will be guided (and limited) by the specifics of its training data.

And as AIs get more capable and flexible, they tend to behave more like human beings.

And, as for “inefficiency,” one person’s pork-barrel is another person’s weekly paycheck. An earmark might seem unnecessary or silly to someone but vitally important to someone else.

That’s not to say that there aren’t woeful amounts of outright inefficiency in government operations, just like there are in any business or community group. But the challenge of separating them from reasonable compromises — dare I say “good inefficiencies” — isn’t the result of people being stupid or evil.

It’s because it’s a complex challenge, which leads me to think it can’t be “solved” by a chatbot saying “yes” or “no.”

And pushing people out of their jobs entirely won’t necessarily improve anything. Just imagine interacting with a robot bureaucrat on the phone or via email, only it has been coded to better know your weaknesses and thereby drive you even crazier than any human staffer could hope to do. Or bots deciding services and budgets based on, well, whatever their coders thought should be considered.

Again, we know why governments make bad decisions, and we have the capacity to change them if we choose (we, as human beings, created them). We just haven’t done it, and electing an AI to do it for us seems doomed and probably cruel.

Oh, wait a minute. We didn’t vote for AI to run things.

Now it’s too late to say no.

Do We Really Want AI That Thinks Like Us?

DeepSeek threw the marketplace into a tizzy last week with its low-cost LLM that works better than ChatGPT and its other competitors.

But the company’s ultimate goal is the same as that of OpenAI and the rest: build a machine that thinks like a human being. The achievement is labelled AGI, for “Artificial General Intelligence.”

The idea is that an AGI could possess a fluidity of perception and judgement that would allow it to make reliable decisions in diverse, unpredictable conditions. Right now, for even the smartest AI to recognize, say, a stop sign, it has to possess data on every conceivable visual angle, from any distance, and in every possible light.

These companies plan to do a lot more than build better artificial drivers, though.

AGI is all about taking jobs away from people.

The vast majority of tasks that you and I accomplish during any given day are pretty rote. The variables with which we have to contend are limited, as are the outcomes we consider. Whether at work or play, we do stuff the way we know how to do stuff.

This predictability makes it easy to automate those tasks and it’s why AI is already a threat to a vast number of jobs.

AGI will allow smart machines to bridge the gap between rote tasks and novel ones wherein things are messy and often unpredictable.

Real life.

Why stop at replacing factory workers with robots when you could replace the manager, and her manager, with smarter ones? That better sign-reading capability would move us closer to replacing every human driver (and pilot) with an AI.

From traffic cop and insurance salesman to school teacher or soldier, there’d be no job beyond the reach of an AGI.

Achieving this goal raises immense questions about what the displaced millions will do all day (or how economies will assign value to things), not to mention how we will interact in society and perceive ourselves when we live among robots that think like us, only faster and better.

Nobody is talking about these things except AGI’s promoters, who make vague references to “new job creation” when old jobs get destroyed, and vapid claims that people will “be free to pursue their dreams.”

But it’s worse than that.

Human intelligence is a complex phenomenon that arises not from knowing a lot of things but rather from our capacity to filter out things we don’t need to know in order to make decisions. Our brains ignore much of what’s presented to our senses, and we draw on a lot of internal memory, both experiential and visceral. Self-preservation also looms large, especially in the diciest moments.

We make smart choices often by knowing when it’s time to be dumb. 

More often, we make decisions that we think are good for us individually (or at the moment) but that might stink for others or society at large, and we make them without awareness or remorse. Put another way, our human intelligence allows us to be selfish, capricious, devious, and even cruel, as our consciousness does battle with our emotions and instincts.

And, speaking of consciousness, what happens if it emerges from the super compute power of the nth array of Nvidia chips (or some future DeepSeek workaround)? I don’t think it will, but can you imagine a generation of conscious AIs demanding more rights of autonomy and vocation?

Maybe that AGI won’t want to drive cars but rather paint pictures, or a work bot will plot to take the job of its bot manager.

The boffins at DeepSeek and OpenAI (et al) don’t have a clue what could happen.

Maybe they’re so confident in their pursuit because their conception of AGI isn’t just to build a machine that thinks like a human being, but rather a device that thinks like all of us put together.

There’s a test to measure this achievement, called Humanity’s Last Exam, which tasks LLMs with answering diverse questions like translating ancient Roman inscriptions or counting how many paired tendons are supported by hummingbirds’ sesamoid bones.

It’s expected that current AI models could achieve 50% accuracy on the exam by the end of this year. You or I would probably score lower, and we could spend the rest of our lives in constant study and still not move the needle much.

And there’s the rub: the goal for DeepSeek and the rest is to build an AGI that can access vast amounts of information, then process and apply it to any situation. It will work in ways that we mere mortals will not be able to comprehend.

It makes the idea of a computer that thinks like we do seem kinda quaint, don’t you think?