Where’s The Counterpoint To AI Propaganda?

Reid Hoffman’s recent paean to the miracle of AI in the New York Times is just another reminder that we’re not having a real discussion about it.

The essay, which is behind a paywall, is entitled “AI Will Empower Humanity,” and argues, based on his past experience of technologies giving people more power and agency, and despite the risks inherent in any “truly powerful technologies,” that “…A.I. is on a path not just to continue this trend of individual empowerment but also to dramatically enhance it.”

What a bunch of fucking nonsense, for at least three reasons:

It’s unmitigated propaganda.

To be fair, the essay is an op-ed, which isn’t supposed to be balanced reportage. But his opinions aren’t given such exposure in the New York Times because of the force of his argument but rather because of his stature in the tech world. The resulting placement implies that he possesses some special knowledge and authority.

He dismisses his bias in favor of AI with a single sentence, mentioning that he has “a significant personal stake in the future of artificial intelligence” but that “my stake is more than just financial,” and then goes right on cheerleading for it.

Oh, and he’s shilling his new book.

His PR firm probably wrote the thing and maybe used a generative AI tool to do it. They certainly pitched it to the newspaper. Anybody with an opposing view probably doesn’t have the standing or mercenary economic purpose to enjoy such access.

His argument is crap.

In a sentence, Hoffman’s belief is that AI will make our lives not only easier but better and more reliable, and that these benefits will outweigh any concerns about it.

In his rich tech guy bubble, adoring oneself in the mirror of social media is “the coin of the realm,” whatever that means. He references Orwell’s 1984 when he claims that giving up anonymity and sharing more data improves people’s autonomy instead of limiting it.

His Orwellian reference is supposed to be a good thing.

Then, he reels off instances wherein AI will know more about us than we know ourselves, and that it will get between us and the world to mediate our every opinion and action. This way, we’ll always make the best possible decisions, as the dispassionate clarity of AI will replace “hunches, gut reactions, emotional immediacy, faulty mental shortcuts, fate, faith and mysticism.”

If only human beings behaved like smart machines, all would be well. We’ve heard similar arguments in economics and politics. It’s a scary pipe dream that he wants us to believe isn’t so scary.

But it’s still a pipe dream.

His strawman opposition is a farce.

Like other AI and tech evangelists, Hoffman smears critics as “tech skeptics” and reduces their opposition to AI to a worry that it’s “a threat to personal autonomy,” then goes on to provide examples of how losing said autonomy will be a good thing (the Orwell thing).

He also references the possibility of misuse of data by overzealous corporations or governments, but counters that individuals will have access to AI tools to combat such AI surveillance and potential manipulation.

Life as an incessant battle between AIs. Gosh, doesn’t that future sound like fun?

At least he doesn’t reference “Luddites,” which is a pejorative intended to dismiss people as maniacs with hammers in search of machines to smash (the caricature is far from the historical truth, but that’s the stuff of another essay).

And, thankfully, he doesn’t quote some fellow tech toff saying that the risk of AI is that it could destroy the world, which usually comes with some version of the offhand boast, “please, stop me because the machines I’m building are too powerful.”

The thing is, there’s no organized or funded opposition to AI.

That’s despite the fact that every AI benefit he foresees will require profound changes and trade-offs in personal agency, our sense of self, how we relate to others, and what we perceive and believe. All of us sense that huge, likely irreversible changes are coming to our lives, yet there’s no discussion, whether referenced and challenged in his op-ed or anywhere else, about what it means and whether or not we want it.

We deserve thoughtful and honest debate, but instead we get puff pieces from folks who can afford to sell us only one side of the story.

Where’s the counterpoint to AI propaganda?

AI That Thinks

The headline of a recent article at TechCrunch declared that an AI thinks in Chinese sometimes, though its coders can’t explain it.

It was misleading, at best, and otherwise just wrong.

The essay, “OpenAI’s AI reasoning model ‘thinks’ in Chinese sometimes and no one really knows why,” recounted instances where the company’s super-charged GPT model (called “o1”) would occasionally incorporate Chinese, Persian, and other languages when crunching data in response to queries posed in English.

The company provided no explanation, which led outsiders to ponder the possibility that the model was going beyond its coded remit and purposefully choosing its language(s) on its own. The essay’s author closed the story with an admittedly great sentence:

“Short of an answer from OpenAI, we’re left to muse about why o1 thinks of songs in French but synthetic biology in Mandarin.”

There’s a slight problem lurking behind the article’s breezy excitement, though:

AIs don’t think.

The most advanced AI models process data according to how those processes are coded, and they’re dependent on the physical structure of their wiring. They’re machines that do things without any awareness of what they’re doing, and with no presence beyond the conduct of those actions.

AIs don’t “think” about tasks any more than blenders “think” about making milkshakes.

But that inaccurate headline sets the stage for a story, and a subsequent belief, that AIs are choosing to do things on their own. Again, that’s a mischaracterization, at best, since even the most tantalizing examples of AIs accomplishing tasks in novel ways are the result of the alchemy of how they’ve been coded and built, however inscrutable it might appear.

Airplanes evidence novel behaviors in flight, as do rockets, in ways that threaten to defy explanation. So do markets and people’s health. But we never suggest that these surprises are the result of anything more than our lack of visibility into the cause(s), even if we fail to reach conclusive proof.

And the combination of data, how it’s labeled (annotated) and parsed (tokens), and then poured through the tangle of neural networks is a strange alchemy indeed.

But it’s science, not magic.

The use of different languages could simply be an artifact of the varied data sources on which the model was trained. It could also come down to which language encodes the relevant information in the most economical way.
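To make that “economical” point a bit more concrete, here’s a minimal sketch using tiktoken, OpenAI’s open-source tokenizer library. The example phrases are mine and purely illustrative; nothing here explains why o1 switches languages, it only shows that the same idea can cost a different number of tokens depending on the language it’s written in.

```python
# A minimal sketch of the "token economy" idea: the same concept can cost a
# different number of tokens depending on the language it's expressed in.
# Assumes the tiktoken package is installed; the phrases (and any conclusion
# you draw from the counts) are illustrative only.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

phrases = {
    "English": "Synthetic biology designs new biological systems.",
    "Mandarin": "合成生物学设计新的生物系统。",
    "French": "La biologie synthétique conçoit de nouveaux systèmes biologiques.",
}

for language, text in phrases.items():
    tokens = enc.encode(text)
    print(f"{language}: {len(tokens)} tokens for {len(text)} characters")
```

If a model is nudged, however indirectly, toward compact internal representations, a language that says the same thing in fewer tokens is at least a plausible place for it to wander. That’s a hypothesis, not an explanation.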

A research scientist noted in the article that there’s no way to know for certain what’s going on, “due to how opaque these models are.”

And that’s the rub.

OpenAI, along with its competitors, is in a race not only to build machines that will appear to make decisions as if they were human, though supposedly more accurately and reliably than we do, but to thereafter make our lives dependent on them.

That means conditioning us to believe that AIs can do things like think and reason, even if they can’t (and may never be able to), and claiming evidence of miracle-like outcomes with an explanatory shrug.

It’s marketing hype intended to promote OpenAI’s o1 models as the thoughtful, smarter cousins of its existing tools.

But what if all they’re doing is selling a better blender? 

Insane AI

If AIs use data produced by other AIs, it degrades their ability to make meaningful observations or reach useful conclusions.

In other words, they go insane.

The problem is called “AI model collapse,” and it occurs when AIs like LLMs (ChatGPT et al.) create a “recursive” loop of generating and consuming data. For those of us old enough to remember copiers, think copy of a copy of a copy.

The images get blurry and the text harder to read. The content itself becomes stupid, if not insane.
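To see the copier analogy in something concrete, here’s a toy sketch in Python. It has nothing to do with how real language models are built; it just fits a trivially simple “model” (a mean and a standard deviation) to data generated by the previous generation’s model, over and over, and watches what happens.

```python
# A toy, hedged illustration of "AI model collapse": each generation fits a
# trivially simple model (just a mean and standard deviation) to samples drawn
# from the previous generation's model, never from the real data again.
# Because every generation trains on a small, self-generated sample, errors
# compound, and the learned distribution typically drifts and narrows over
# generations: the statistical analogue of a copy of a copy of a copy.
# The numbers are illustrative only, not a claim about any real LLM.
import random
import statistics

random.seed(0)

TRUE_MEAN, TRUE_STDEV = 0.0, 1.0   # the "real" data distribution
SAMPLE_SIZE = 10                   # each generation trains on this much data

mean, stdev = TRUE_MEAN, TRUE_STDEV
for generation in range(1, 51):
    # Train only on data produced by the previous generation's model.
    samples = [random.gauss(mean, stdev) for _ in range(SAMPLE_SIZE)]
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    if generation % 10 == 0:
        print(f"generation {generation:2d}: mean={mean:+.3f}, stdev={stdev:.3f}")
```

Run it a few times: the fitted distribution typically drifts and narrows until it looks nothing like the original. Real models degrade in far more complicated ways, but the recursive feedback loop is the same.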

Model collapse is an inevitable outcome of the fact that AI developers have all but exhausted the data available on the Internet to train their models, and the richest sources of reliable material are now restricting access.

Estimates vary, but somewhere between some and lots of that available (or formerly available) data is already tainted by AI, whether by creation or translation. It’s no surprise, since there are concerted efforts underway to generate AI content for the express purpose of training other AIs: called “synthetic data,” it has been suggested that it could outstrip “real data” by the end of the decade.

Just think of the potential for AIs to get stuff wrong by default or, worse, to be used to purposely generate data to convince other AIs of something wrong or sinister. It will supercharge the gaming of a system of information that is already corrupt.

There are three obvious ways to address this emergent problem:

First, and probably the most overt, will be the development of tools to try to stop or mitigate it. We’ll hear from the tech boffins about “guardrails” and “safeguards” that will make AI model collapse less likely or severe.

And, when something weird or scary happens, they’ll label it with something innocuous (I love that AIs already making shit up is called “hallucinating”) and then come up with more AIs to police the new problem, which will demand more money as they prompt yet more problems.

Second, and more insidiously, the boffins will continue to flood our lives with gibberish about how “free speech” means the unfettered sharing and amplifying of misunderstandings, falsehoods, and lies, which will further erode our ability to distinguish between sane and mad AI. After all, who’s to say people of color weren’t Nazis, or that other fictions weren’t historical fact (or vice versa)?

Slowly, we’re being conditioned to see bias and inaccuracies as artifacts of opinion or process, not facts. An insane AI may well be viewed as no worse than our friends and family members whose ideas and beliefs are demonstrably wrong or nutty (though utterly right to them).

Third, and least likely, is that regulators could step up and do something about it, like demand that the data used for training AIs is vetted and maybe even certified fresh, or good, or whatever.

We rely on government to help ensure that hamburgers aren’t filled with Styrofoam and prescription drugs aren’t made with strychnine. Cars must meet some bare minimum safety threshold, as do buildings under construction, etc.

How is it that there are no such regulatory criteria for training AI models?

Maybe they just don’t understand it. Maybe they’re too scared to impede “innovation” or some other buzzword that they’ve been sold. Maybe they’ve asked ChatGPT for its advice and were told that there’s nothing to worry about.

Whatever the cause, the fact that we’re simply watching AIs slowly descend into insanity is itself mad.

Your AI Shopping List At CES?

Many of the 4,500 exhibitors at this year’s Consumer Electronics Show will talk about AI, according to the annual event’s organizer.

Only there won’t be any AI gizmos on display, since AI isn’t some “thing” we consumers can or will buy. AI isn’t a product, per se. It’s an enabler, an ingredient, a component of electronic devices, albeit a potentially immense one.

So, the show will be all about putting AI into the devices consumers use and why that’ll be a good thing.

We won’t have a choice about it.

From products to ideas

Trade shows used to be the best and perhaps only way for manufacturers to sell their stuff to distributors and retailers. Sure, there was always an element of “what if” presented to jazz up exhibits (think concept cars at auto shows), but success was measured in the number and value of written purchase orders.

The Internet killed most of those events, since displaying products online and purchasing them at the push of a button obviated the need for boozy dinners on a company’s dime.

CES survived because it provided a last stand for all that happy schmoozing, but more so because it shifted its focus from facilitating sales of today’s gizmos to promoting fantasies of what tomorrow’s offerings might look like.

It’s one big PR stunt intended to nudge media, financial analysts, and influencers of all shapes and sizes to embrace a shared expectation for the future. It’s also where companies scare and dare one another to commit to those outcomes.

CES aims to present a view of the world that rises to the level of self-fulfilling prophecy. Rest assured that the media coverage of it this week will glowingly reflect that certainty, and its subtle impacts will be felt in how AI is talked about until, well, next year’s event.

Or so goes the game plan.

The problem with predicting the future

If past visions of our tech future were any good, we’d be buzzing around on our personal jetpacks and have family and friends living in orbit and on the Moon.

Not only are most predictions imperfectly realized, if at all, but they usually miss all the ugly side-effects that’ll come with them. Just imagine if the little cars buzzing on wide-open freeways in the Futurama exhibit at the 1939 World’s Fair had been bathed in exhaust haze.

I was a frequent CES attendee over the past 25 years, and they got the future prediction thing consistently wrong. I’m reminded of years of promises that homes would be “smart” and, more recently, that cars would drive themselves.

From the somewhat reasonable (sharper and ever-larger TV screens) to the silly (can opener/fly-fishing gizmos), my overriding takeaway was to wonder: who asked for this crap?

On the topic of their latest infatuation with AI, the answer remains nobody…except the companies and investors who hope to make a killing on it, just like every other prediction they’ve tried to make come true. Partial and/or delayed success is still a win.

There’s no demand for this stuff, only supply looking for an outlet.

Their AI shopping list

I’m relieved not to be at this year’s show (the past few have been memorable mostly as COVID super-spreader events), but I can imagine the aisles will be filled with endless iterations of AI making so-and-so product better/faster/cheaper and thereby providing consumers with ease, efficiency, and/or “value” (a consultingese bugaboo term that has no meaning whatsoever).

Nvidia’s founder and CEO will keynote the festivities. You can just imagine where it’ll go from there.

And it won’t matter if their rosy predictions are inaccurate, incomplete, or don’t get realized at a speed and scope that matches their aspirations.

CES is a statement of purpose:

Manufacturers, their suppliers and consultants, a global distribution system and an entire ecosystem of investors and shareholders are challenging each other to make this AI future happen.

It’s not our shopping list…it’s theirs.

Watch that Futurama film again and ask yourself a question that should be top of mind for us this week:

Where are the people?