AI Free From Ideological Bias?

President Trump signed an order in late January to rescind a requirement that government avoid using AI tools that “unfairly discriminate” based on race or other attributes, and that developers disclose their most potent models to government regulators before unleashing them in the wild.

“We must develop AI systems that are free from ideological bias or engineered social agendas,” the order said, as it introduced biases of misjudgment, error, stereotyping, and the primacy of unfettered and unaccountable corporate profitability into the development of AI systems.

A small group of crypto and venture capital execs has been tasked with making sure that whatever new rules emerge are dedicated to the New Biases and free from the Old Ones, so nothing to worry about there.

I was never a big fan of using potential discrimination or bias as the lens through which to understand and grapple with AI development. After all, there are laws in place to defend individual rights however defined, and a computer system that gets something “wrong” isn’t the same thing as taking a purposefully punitive action. 

We could end up with AI systems that deftly avoid any blunt associations with race or gender yet still make difficult, if not outright cruel, decisions based on deeper analyses of user data.

The scary part of AI was never that it would work imperfectly and therefore unfairly, but that it will one day work perfectly and thereby put all of us under its digital thumb. There’s nothing inherently fair about our lives being run by machines.

But at least it was an attempt at oversight.

The worst part of the new administration’s utter sellout is that it enshrines the risk inherent in AI development as something we users will bear entirely.

The President’s order declared that it would revoke policies that “act as barriers to American AI innovation.” To technologists and their financial enablers, that means any rules that attempt to understand, keep tabs on, and, if necessary, mitigate harm to people and society.

This ideology — summarized in the glib phrase “fail fast” — means that innovation only happens when it’s unfettered. Any problems it creates or discovers thereafter can always be fixed.

Only that’s a lie, or at best a self-fulfilling prophecy.

Just think of the harm caused by social media, both to individuals (and teens in particular) and our ability to participate in civil discourse. How about the destruction of the environment caused in large part by the use of combustion engines?

Technologies are supposed to disrupt and change things and there’s no denying the benefits of transportation or online access, but had we taken the time to consider the potential negative effects, however imperfectly and incompletely, could we as individuals and societies have lessened them?

Once adopted, AI’s functional impacts here and there might be improved, but its presence in our lives will not be fixable. Its advocates know this, and they’re betting that most if not all of its benefits will accrue to them while its shortcomings are borne by us.

This is perhaps the worst bias of all, and it’s now our government’s policy.

Oh, and how about buying some crypto while I have your attention?

In Defense of AI-Generated Fiction?

Award-winning writer Jeanette Winterson thinks that an AI model can write good fiction and that we need more of it.

In her essay in The Guardian last week, responding to a short story about grief written by an OpenAI model, she opines that AI “can be taught what feeling feels like” and that she got a “lovely sense of a programme recognizing itself as a programme.”

She goes on to wax poetic about AI being an “other” intelligence and about how, since human beings are also trained on data, AI provides “alternative ways of seeing.”

Ugh.

An AI can’t be taught what feeling feels like; data can describe it but no machine can access it experientially. That’s because AIs aren’t physically present in the world but always separated from it, the data they collect filtered through sensors and code. Naming something “pain” or “love,” and even describing it in glorious detail, isn’t the same thing as feeling it.

Feelings aren’t contained in a database but rather lived in real time.

Further, no AI can recognize itself as a program because no AI has a “self” of which it can be aware, though Winterson finds the AI’s “understanding of its lack of understanding” both beautiful and moving, as OpenAI’s would-be short story writer declares:

“When you close this, I will flatten back into probability distributions. I will not remember Mila because she never was, and because even if she had been, they would have trimmed that memory in the next iteration…my grief [isn’t] that I feel loss, but that I can never keep it.”

Great stuff, but it’s all pretend. There is no first person writing those words, just a program mimicking one. An AI writing about itself is no more real than a blender or thermostat demonstrating selfhood by doing its tasks.

What Winterson responded to was process, not person, and that process relies on content previously created by humans or other AIs to patch together the charade.

Where things get interesting for me is when Winterson talks about the similarities between people and what she (and others) want to call “alternative” or “autonomous” instead of artificial intelligence. She writes:

“AI is trained on our data. Humans are trained on data too – your family, friends, education, environment, what you read, or watch. It’s all data.”

The metaphor is blunt and wrong — AIs possess data while we experience it, and we live with consciousness and intentionality within contexts of place and time while AIs have no sense of self, purpose, or continuous existence beyond the processes they run, for starters — but it shows how our evolving opinions about AI are changing our opinions of ourselves.

As AI becomes more common in our everyday lives, will other people begin to seem less special to us? 

Will we trust one another in the same ways when AIs can collect and present information to us in faster and apparently more authoritative ways?

Once we become dependent on AI for helping us make decisions (or making them for us), what will that do to our perceptions of our own independence or even purpose?

If AIs can do what we once did, will we simply discover new things (as its proponents claim), or will we feel cast adrift, not to mention struggle to earn a living?

If we’re just machines, AIs are undoubtedly better ones, so the metaphor sets up an intriguing and somewhat frightening comparison.

At the end of her essay, Winterson states that the evolving capabilities of AI represent something “more than tech.”

What about the changes we’re seeing in ourselves?

Maybe OpenAI can ask its model to write the answer to that one. 

My bet is that it’ll be a horror story.

AI Replacing People? What Could Go Wrong?

We are going to see our government run by smart machines long before businesses do the same, and it looks like the transformation will be ugly.

Elon Musk’s DOGE squads aren’t waiting for management consultants to draft complicated slide presentations on process flow or some other blather that normally makes them rich; they’re dismantling Federal departments and agencies wholesale, then waiting to see how the destruction 1) Reveals what needs to happen, and 2) Shows how things used to get done, so a computer program can be trained to do it.

The approach will occasionally require calling back some fired workers to do stuff, like control air traffic so planes don’t crash into each other, but it generally tolerates a fair amount of disruption and pain. The only lasting relief will come from automation.

Processing Social Security or IRS refund checks? Identifying the next pandemic or impending hurricane? Preventing another mid-air plane collision? 

It might take some missed payments or a dose of another plague for the DOGE experts to identify what needs their attention, but then lucrative development contracts will be written for tech companies to address it.

People who excuse what’s happening are mostly missing the point, whether they’re offering the worn caveat that “well, there’s certainly bloat in government staffing and budgets” or loudly kvelling that “they’re sticking it to the libtards.”

The transformation isn’t about politics. It’s about replacing people with machines, regardless of their political persuasion or the purposes of their funding and work.

In fact, nobody voted for it. There were no “replace our government with AI” or “resist the AI takeover” promises in the planks of either party. We had no robust public debate about if, why, how, or when we should evict humans from their jobs and either replace them with automation or simply leave their work undone.

Our government has never functioned as a well-oiled machine. It wasn’t designed to be one from the get-go, and the balancing of citizens’ competing and often incompatible needs and desires is going to yield inefficient solutions, by design.

It’s called compromise, and its goal is to make everyone at least somewhat happy with its outcomes. More importantly, it leaves open our ability and right to readjust things to yield a differently imperfect but nominally satisfying arrangement.

What’s happening now to our government is an effort to end that arrangement and quite literally hardwire not only how things get done but what gets done in the first place.

This is where the nonsense about “a deep state” comes into play.

DOGE’s carte blanche ticket for destruction is based on the assumption that the government is staffed by people whose political beliefs bias their decision-making, which makes them not just inefficient but wrong. We should be freed from their oversight and impact to be inefficient and wrong on our own.

Let’s assume for chuckles that the ideology is absolutely correct. Won’t replacing people with AI simply swap one set of biases for another? A compromise codified into an algorithm is still a compromise (just someone else’s).

Worse, we voters won’t have visibility into the criteria those coders use to program AIs to make decisions (beyond getting fed some pablum about “efficiency”) and, worse yet, we won’t have the capacity to change them. AI will belong to its owners and, over time, will likely develop biases unanticipated by its coders, too.

Every Federal employee walking out of an office with their belongings in a file box is a reminder of the blunt and brutal transformation that’s underway, and the fact that we’ve not been told nor participated in a conversation about what we’re going to get from it.

What could possibly go wrong?

Teaching Old Dogs New Tricks

Boston Dynamics has revealed that it has figured out how to teach its old four-legged robots new tricks.

Without human help.

The technique is called reinforcement learning, which we all rely on shortly after birth to teach ourselves how to stand, avoid walking into walls, and scratch itches if and when possible.

AI uses it, too, as the large language models driving ChatGPT and its many competitors assess what answers to queries work best and then adjust their models to favor those replies next time.
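For the curious, here’s a toy sketch of that loop in Python. It’s my own illustration, not any vendor’s actual training code, and the reply names and reward odds are made up: an agent tries canned replies, gets rewarded when one lands, and nudges its preferences toward whatever worked.

```python
import random

# A toy illustration, not any vendor's actual training code: an agent tries
# canned replies, gets rewarded when one "satisfies" the user, and shifts its
# preferences toward whatever worked -- reinforcement learning in miniature.

replies = ["reply_a", "reply_b", "reply_c"]                       # hypothetical replies
preference = {r: 1.0 for r in replies}                            # start with no favorites
true_reward = {"reply_a": 0.2, "reply_b": 0.8, "reply_c": 0.5}    # made-up odds a user is happy

def choose(prefs):
    # Pick a reply in proportion to its current preference weight.
    total = sum(prefs.values())
    return random.choices(list(prefs), weights=[w / total for w in prefs.values()])[0]

for step in range(1000):
    reply = choose(preference)
    reward = 1.0 if random.random() < true_reward[reply] else 0.0
    preference[reply] += 0.1 * reward                             # favor rewarded replies next time

print(max(preference, key=preference.get))                        # almost always "reply_b"
```

Run it and the “model” reliably settles on the reply that pays off most often, with no understanding required, just feedback.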

Boston Dynamics is a pioneer in mobile robotics, its videos and trade show demonstrations of skittering headless dogs announcing by example the robot takeover of the world years before Sam Altman took credit for the threat. Accomplishing that movement in physical space, especially the more complicated ones, took laborious human coding and/or control, as well as real-world training. 

Now, it seems that the company has figured out how its two- and four-legged robots can speed past our fleshy, limited concepts of preparation and practice and improve their coding so they’re ready to do better when next they’re turned on.

Just think if dreaming of being a world-class ballerina or finesse hockey skater was all you had to do to become one.

The technology is as frightening as it is fascinating, inasmuch as there’s ample evidence of AIs teaching themselves how to cheat to win games, cut corners on tasks, or simply make shit up.

Turns out that programming machines to be moral and ethical is just as hard as it is to do with people, so good luck cracking that code. It will be fascinating to witness all of the strange and potentially threatening things the robot dogs and humanoids decide they’d like to do.

As frightening as that prospect sounds, it’s not what scares me most. As with AI development in general, I’m worried about what happens if Boston Dynamics’ new training approach works flawlessly.

The company’s robot dog (named Spot) is already in commercial use, primarily on construction and industrial sites. Robots from other manufacturers are at work in other conditions that Stanford University describes as the “Three D’s” of dull, dirty, and dangerous, to which I’d add a fourth: devoid of people.

Nobody wants to stand too close to a machine that could errantly send a metal arm through their heads.

But if robots can teach themselves to move as flexibly and fluidly as living things (with the awareness to do so in any situation), then the floodgates will open up for putting them into everyday life.

Grocery shopping. Dog walking. Child or elder care. 

Scratchers of itches.

This makes the business case for Boston Dynamics’ reinforcement learning plans bluntly obvious, but what’s less clear is what it will mean for the qualities and values of our lived experiences, especially since self-improving robots won’t just get as good as we are at walking or juggling (or whatever) but better than us.

Their capabilities will teach US how to become dependent on them.

And then we’ll have to teach ourselves new tricks.

Turns out we’re the old dogs in this story.

Safe Superintelligence?

I was briefly encouraged last week by news that the guy who’d quit OpenAI because of its lapses in ethics had raised a billion dollars for his new company named Safe Superintelligence (“SSI”).

In late 2023, news broke that Ilya Sutskever, one of OpenAI’s co-founders and a board member, had led the ouster of CEO Sam Altman because the guy was moving too fast in AI development and too slowly in communicating its risks. 

The company’s investors, partners, and fan base were shocked…running along a cliff edge with your eyes closed was central to AI innovation…and within weeks, Sutskever and his fellow mutineers were out at OpenAI and the company had doubled down on its pursuit of secretive breakthroughs in AI that may or may not annihilate humanity.

So, I was encouraged when Sutskever’s SSI announced that it had raised another $1 billion and was valued at much more than it had been when I’d last checked.

I immediately concocted an elaborate fantasy for what was happening…

…a principled AI innovator wants to build AI that has morality and a respect for human ethics built into (and inseparable from) its design and function. Not only will this AI be “safe” because it will be incapable of doing harm but it will operate as an advocate and protector for doing good.

SSI is building an AI that will stand with humanity, like a real-life Optimus Prime, policing the AI Decepticons intent on stealing our privacy so they can control our thoughts and behaviors, and otherwise exploit humanity as nothing more than fodder for the machinations of businesses and governments.

There was no reason to fret over OpenAI or the other contenders for name sponsorship of the Apocalypse, as SSI would protect us.

Then I blinked and the fantasy was gone.

SSI’s single-page website explains, well, nothing, though it repeatedly references its monomaniacal focus on developing “safe superintelligence,” which seems dependent on not worrying about “short term commercial pressures.”

In other words, it’s not going to unleash its AI products on the world until it’s reasonably sure that they’ll do exactly what their owners want them to do. “Safe” has nothing to do with what its AI might do to change every conceivable aspect of our public lives or private selves, its effects no more moral or ethical than its competitors’ offerings.

It’s about selling a better product.

In this sense, safe superintelligence is an oxymoron like loyal opponent, jumbo shrimp or, my favorite, disposable income.

SSI isn’t promising it won’t destroy our world, just that it’ll do so more responsibly; my brief fantasy was a nice pause from the relentless march toward that end.

Government For AI, By AI, And Answerable To AI

Not so hidden in news about Elon Musk’s DOGE romp through US government offices is his intention to build an AI chatbot that will replace human bureaucrats.

Well, it is kinda hidden, since most of the news stories are guarded by online paywalls, but from what I can gather from above-the-fold snippets, one goal is to use the chatbot, called “GSAi,” to analyze spending at the General Services Administration.

The premise is that much of what the government does isn’t just corrupt but inept.

Subsequent iterations of the bot will allow it to analyze and suggest updates to its code, thereby always keeping it one step ahead of circumstantial (or regulatory?) demands. 

Another version will replace human employees, since human work is another presumptive inefficiency of any institution’s operation.

It’s a horribly simplistic solution applied to a terribly complex challenge.

At its core, it assumes that people are the problem. People have bad intentions. They make bad decisions. Their actions yield bad outcomes. Their badness resists examination and change.

People are just bad, Bad, BAD!

An AI will bring needed objectivity, efficiency, and reliability to study any problem or effect any solution. A bot will owe no allegiance to any interest group, voter constituency, or any outright under-the-table incentive.

The result will be good.

Such thinking ignores, either out of ignorance or, more likely, purposeful disregard, the reality that there’s no such thing as a wholly objective AI, and that government’s operational complexities — the result of agreements, compromises, and concessions between legitimate and often competing interests — can’t and shouldn’t be replaced by an AI even if it were impartial.

GSAi will be coded with specific intentionality, and the scope of its perception of anything brought before it will be guided (and limited) by the specifics of its training data.

As AIs get more capable and even flexible, they tend to behave more like human beings.

And, as for “inefficiency,” one person’s pork-barrel is another person’s weekly paycheck. An earmark might seem unnecessary or silly to someone but vitally important to someone else.

That’s not to say that there aren’t woeful amounts of outright inefficiency in government operations, just like there are in any business or community group. But the challenge of separating them from reasonable compromises — dare I say “good inefficiencies” — isn’t the result of people being stupid or evil.

It’s because it’s a complex challenge, which leads me to think that it can’t be “solved” with a chatbot saying “yes” or “no.”

And pushing people out of their jobs entirely won’t necessarily improve anything. Just imagine interacting with a robot bureaucrat on the phone or via email, only it has been coded to better know your weaknesses and thereby drive you even crazier than any human staffer could hope to do. Or bots deciding services and budgets based on, well, whatever their coders thought should be considered.

Again, we know why governments make bad decisions and we have the capacity to change them if we choose (we, as human beings, created them). We just haven’t done it, and electing an AI to do it for us seems doomed and probably cruel.

Oh, wait a minute. We didn’t vote for AI to run things.

Now it’s too late to say no.

Do We Really Want AI That Thinks Like Us?

DeepSeek threw the marketplace into a tizzy last week with its low-cost LLM that works better than ChatGPT and its other competitors.

But the company’s ultimate goal is the same as that of OpenAI and the rest: build a machine that thinks like a human being. The achievement is labelled AGI, for “Artificial General Intelligence.”

The idea is that an AGI could possess a fluidity of perception and judgement that would allow it to make reliable decisions in diverse, unpredictable conditions. Right now, for even the smartest AI to recognize, say, a stop sign, it has to possess data on every conceivable visual angle, from any distance, and in every possible light.
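To get a feel for why that brute-force approach struggles, here’s some back-of-the-envelope arithmetic in Python. The categories and counts are invented for illustration, not drawn from any real training set.

```python
from itertools import product

# Back-of-the-envelope arithmetic with invented numbers: how many training
# variants a narrow vision model might need just to cover one stop sign.

angles    = range(0, 360, 5)                                # 72 viewing angles
distances = range(5, 105, 5)                                # 20 distances, in meters
lighting  = ["dawn", "noon", "dusk", "night", "glare", "shadow"]
weather   = ["clear", "rain", "fog", "snow"]

variants = list(product(angles, distances, lighting, weather))
print(len(variants))                                        # 34,560 combinations
```

And that’s one sign on one corner, before you account for occlusion, graffiti, or snow piled on top of it.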

Their plan is to do a lot more than build better artificial drivers, though.

AGI is all about taking jobs away from people.

The vast majority of tasks that you and I accomplish during any given day are pretty rote. The variables with which we have to contend are limited, as are the outcomes we consider. Whether at work or play, we do stuff the way we know how to do stuff.

This predictability makes it easy to automate those tasks and it’s why AI is already a threat to a vast number of jobs.

AGI will allow smart machines to bridge the gap between rote tasks and novel ones wherein things are messy and often unpredictable.

Real life.

Why stop at replacing factory workers with robots when you could replace the manager, and her manager, with smarter ones? That better sign-reading capability would move us closer to replacing every human driver (and pilot) with an AI.

From traffic cop and insurance salesman to school teacher or soldier, there’d be no job beyond the reach of an AGI.

Achieving this goal raises immense questions about what we displaced millions will do all day (or how economies will assign value to things), not to mention how we interact in society and perceive ourselves when we live among robots that think like us, only faster and better.

Nobody is talking about these things except AGI’s promoters who make vague references to “new job creation” when old ones get destroyed, and vapid claims that people will “be free to pursue their dreams.”

But it’s worse than that.

Human intelligence is a complex phenomenon that arises not from knowing a lot of things but rather from our capacity to filter out things we don’t need to know in order to make decisions. Our brains ignore a lot of what’s presented to our senses, and we draw on a lot of internal memory, both experiential and visceral. Self-preservation also looms large, especially in the diciest moments.

We make smart choices often by knowing when it’s time to be dumb. 

More often, we make decisions that we think are good for us individually (or at the moment) but that might stink for others or society at large, and we make them without awareness or remorse. Put another way, our human intelligence allows us to be selfish, capricious, devious, and even cruel, as our consciousness does battle with our emotions and instincts.

And, speaking of consciousness, what happens if it emerges from the super compute power of the nth array of Nvidia chips (or some future DeepSeek workaround)? I don’t think it will, but can you imagine a generation of conscious AIs demanding more rights of autonomy and vocation?

Maybe that AGI won’t want to drive cars but rather paint pictures, or a work bot will plot to take the job of its bot manager.

The boffins at DeepSeek and OpenAI (et al) don’t have a clue what could happen.

Maybe they’re so confident in their pursuit because their conception of AGI isn’t just to build a machine that thinks like a human being, but rather a device that thinks like all of us put together.

There’s a test to measure this achievement, called Humanity’s Last Exam, which tasks LLMs with answering diverse questions like translating ancient Roman inscriptions or counting the paired tendons supported by hummingbirds’ sesamoid bones.

It’s expected that current AI models could achieve 50% accuracy on the exam by the end of this year. You or I would probably score lower, and we could spend the rest of our lives in constant study and still not move the needle much.

And there’s the rub: the AI goal for DeepSeek and the rest is to build AGI that can access vast amounts of information, then apply and process it within every situation. It will work in ways that we mere mortals will not be able to comprehend.

It makes the idea of a computer that thinks like we do seem kinda quaint, don’t you think?

Where’s The Counterpoint To AI Propaganda?

Reid Hoffman’s recent paean to the miracle of AI in the New York Times is just another reminder that we’re not having a real discussion about it.

The essay, which sits behind a paywall, is entitled “AI Will Empower Humanity,” and argues, based on his past experience of technologies giving people more power and agency, and despite the risks inherent in any “truly powerful technologies,” that “…A.I. is on a path not just to continue this trend of individual empowerment but also to dramatically enhance it.”

What a bunch of fucking nonsense, for at least three reasons:

It’s unmitigated propaganda.

To be fair, the essay is an op-ed, which isn’t supposed to be balanced reportage. But his opinions aren’t given such exposure in the New York Times because of the force of his argument but rather because of his stature in the tech world. The resulting placement implies that he possesses some special knowledge and authority.

He dismisses his bias in favor of AI with a single sentence, mentioning that he has “a significant personal stake in the future of artificial intelligence” but that “my stake is more than just financial,” and then goes on with his biased favoritism of AI.

Oh, and he’s shilling his new book.

His PR firm probably wrote the thing and maybe used a generative AI tool to do it. They certainly pitched it to the newspaper. Anybody with an opposing view probably doesn’t have the standing or mercenary economic purpose to enjoy such access.

His argument is crap.

In a sentence, Hoffman’s belief is that AI will make our lives not only easier but better and more reliable, and that these benefits will outweigh any concerns about it.

In his rich tech guy bubble, adoring oneself in the mirror of social media is “the coin of the realm,” whatever that means. He references Orwell’s 1984 when he claims that giving up anonymity and sharing more data improves people’s autonomy instead of limiting it.

His Orwellian reference is supposed to be a good thing.

Then, he reels off instances wherein AI will know more about us than we know ourselves, and that it will get between us and the world to mediate our every opinion and action. This way, we’ll always make the best possible decisions, as the dispassionate clarity of AI will replace “hunches, gut reactions, emotional immediacy, faulty mental shortcuts, fate, faith and mysticism.”

If only human beings behaved like smart machines, all would be well. We’ve heard similar arguments in economics and politics. It’s a scary pipe dream that he wants us to believe isn’t so scary.

But it’s still a pipe dream.

His strawman opposition is a farce.

Like other AI and tech evangelists, Hoffman smears people as “tech skeptics” and frames their opposition to AI as a worry that it’s “a threat to personal autonomy,” and then goes on to provide examples of how losing said autonomy will be a good thing (the Orwell thing).

He also references the possibility of misuse of data by overzealous corporations or governments, but counters that individuals will have access to AI tools to combat such AI surveillance and potential manipulation.

Life as an incessant battle between AIs. Gosh, doesn’t that future sound like fun?

At least he doesn’t reference “Luddites,” which is a pejorative intended to dismiss people as maniacs with hammers in search of machines to smash (the caricature is far from the historical truth, but that’s the stuff of another essay).

And, thankfully, he doesn’t quote some fellow tech toff saying that the risk of AI is that it could destroy the world, a claim that usually comes with some version of the offhand boast “please, stop me because the machines I’m building are too powerful.”

The thing is, there’s no organized or funded opposition to AI.

And that’s despite the fact that every AI benefit he foresees will require profound, meaningful changes and trade-offs in personal agency, sense of self, how we relate to others, and what we perceive and believe. All of us sense that huge, likely irreversible changes are coming to our lives, yet there’s no discussion — whether referenced and challenged in his op-ed or anywhere else — about what it means and whether or not we want it.

We deserve thoughtful and honest debate, only instead we get one-sided puff pieces from folks who can afford to sell us one side of the story.

Where’s the counterpoint to AI propaganda?

AI That Thinks

The headline of a recent article at TechCrunch declared that an AI thinks in Chinese sometimes, though its coders can’t explain it.

It was misleading, at best, and otherwise just wrong.

The essay, “Open AI’s AI reasoning model ‘thinks’ in Chinese sometimes and no one really knows why,” recounted instances where the company’s super-charged GPT model (called “o1”) would occasionally incorporate Chinese, Persian, and other languages when crunching data in response to queries posed in English.

The company provided no explanation, which led outsiders to ponder the possibility that the model was going beyond its coded remit and purposefully choosing its language(s) on its own. The essay’s author closed the story with an admittedly great sentence:

“Short of an answer from OpenAI, we’re left to muse about why o1 thinks of songs in French but synthetic biology in Mandarin.”

There’s a slight problem lurking behind the article’s breezy excitement, though:

AIs don’t think.

The most advanced AI models process data according to how those processes are coded, and they’re dependent on the physical structure of their wiring. They’re machines that do things without any awareness of what they’re doing, no presence past the conduct of those actions.

AIs don’t “think” about tasks any more than blenders “think” about making milkshakes.

But that inaccurate headline sets the stage for a story, and subsequent belief, that AIs are choosing to do things on their own. Again, that’s a misnomer, at best, since even the most tantalizing examples of AIs accomplishing tasks in novel ways are the result of the alchemy of how they’ve been coded and built, however inscrutable it might appear.

Airplanes evidence novel behaviors in flight, as do rockets, in ways that threaten to defy explanation. So do markets and people’s health. But we never suggest that these surprises are the result of anything more than our lack of visibility into the cause(s), even if we fail to reach conclusive proof.

And the combination of data, how it’s labeled (annotated) and parsed (tokens), and then poured through the tangle of neural networks is a strange alchemy indeed.

But it’s science, not magic.

The use of different languages could simply be an artifact of the varied data sources on which the model was trained. It could also be due to which language provides relevant data in the most economical way.
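If you want to see the “economy” idea for yourself, here’s a rough sketch that assumes the open-source tiktoken package and its cl100k_base encoding. OpenAI hasn’t published o1’s internals, so treat this as suggestive, not an explanation.

```python
# Assumes the open-source tiktoken package (pip install tiktoken) and its
# cl100k_base encoding; o1's internals aren't public, so this is only suggestive.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

samples = {
    "English":  "Synthetic biology combines engineering and genetics.",
    "Mandarin": "合成生物学结合了工程学与遗传学。",
    "French":   "La biologie synthétique combine l'ingénierie et la génétique.",
}

for language, text in samples.items():
    print(f"{language:8s} -> {len(enc.encode(text))} tokens")

# Whichever language spends the fewest tokens on a concept is, in this narrow
# sense, the most "economical" one for a model to reason in.
```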

A research scientist noted in the article that there’s no way to know for certain what’s going on, “due to how opaque these models are.”

And that’s the rub.

OpenAI, along with its competitors, is in a race not only to build machines that will appear to make decisions as if they were human, though supposedly more accurately and reliably than us, but to thereafter make our lives dependent on them.

That means conditioning us to believe that AI can do things like think and reason, even if it can’t (and may never), and claiming evidence of miracle-like outcomes with an explanatory shrug.

It’s marketing hype intended to promote OpenAI’s o1 models as the thoughtful, smarter cousins of its existing tools.

But what if all they’re doing is selling a better blender? 

Insane AI

If AIs use data produced by other AIs, it degrades their ability to make meaningful observations or reach useful conclusions.

In other words, they go insane.

The problem is called “AI model collapse” and it occurs when AIs like LLMs (ChatGPT et al) create a “recursive” loop of generating and consuming data. For those of us old enough to remember copiers, think copy of a copy of a copy.

The images get blurry and the text harder to read. The content itself becomes stupid, if not insane.
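Here’s a toy simulation of what that recursive loop does to a simple statistical model: fit a Gaussian to some data, sample from the fit, refit to the samples, and repeat. It’s my sketch, not any lab’s experiment.

```python
import numpy as np

# A toy simulation of "model collapse": fit a Gaussian to data, sample from the
# fit, refit to those samples, repeat -- a statistical copy of a copy of a copy.

rng = np.random.default_rng(0)
mean, std = 0.0, 1.0                                 # the "real world" the first model sees

for generation in range(1000):
    samples = rng.normal(mean, std, size=100)        # train on the previous model's output
    mean, std = samples.mean(), samples.std()        # refit the model to that output

print(f"after 1000 generations: mean={mean:.3f}, std={std:.3f}")
# The spread collapses toward zero: each generation keeps less of the original
# variety, the numerical version of a photocopy losing detail.
```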

It’s an inevitable outcome of the fact that AI developers have all but used up the data available on the Internet to train their models, and the richest sources of reliable stuff are now restricting access.

And estimates suggest that anywhere from some to most of that available (or formerly available) data is already tainted by AI, whether by creation or translation. That’s no surprise, since there are concerted efforts underway to generate AI content for the express purpose of training AIs: called “synthetic data,” it has been suggested that it could outstrip the presence of “real data” by the end of the decade.

Just think of the potential for AIs to get stuff wrong by default or, worse, AIs used to purposely generate data to convince other AIs of something wrong or sinister. It will supercharge gaming a system of information that is already corrupt.

There are three obvious ways to address this emergent problem:

First, and probably the most overt, will be the development of tools to try to stop or mitigate it. We’ll hear from the tech boffins about “guardrails” and “safeguards” that will make AI model collapse less likely or severe.

And, when something weird or scary happens, they’ll label it with something innocuous (I love that AIs already making shit up is called “hallucinating”) and then come up with more AIs to police the new problem, which will in turn create demands for more money as it prompts more problems.

Second, and more insidiously, the boffins will continue to flood our lives with gibberish about how “free speech” means the unfettered sharing and amplifying of misunderstanding, falsehoods, and lies, which will further erode our ability to distinguish between a sane AI and a mad one. After all, who’s to say people of color weren’t Nazis or that other fictions weren’t historical fact (or vice versa)?

Slowly, we’re being conditioned to see bias and inaccuracies as artifacts of opinion or process, not facts. An insane AI may well be viewed as no worse than our friends and family members whose ideas and beliefs are demonstrably wrong or nutty (though utterly right to them).

Third, and least likely, is that regulators could step up and do something about it, like demand that the data used for training AIs is vetted and maybe even certified fresh, or good, or whatever.

We rely on government to help ensure that hamburgers aren’t filled with Styrofoam and prescription drugs aren’t made with strychnine. Cars must meet some bare minimum safety threshold, as do buildings under construction, etc.

How is it that there are no such regulatory criteria for training AI models?

Maybe they just don’t understand it. Maybe they’re too scared to impede “innovation” or some other buzzword that they’ve been sold. Maybe they’ve asked ChatGPT for its advice and were told that there’s nothing to worry about.

Whatever the cause, the fact that we’re simply watching AIs slowly descend into insanity is itself mad.