AI is a Tulip Crossed With An Edsel?

Assume for a moment that every naysayer is exactly right, and AI is the biggest, dumbest economic and social bubble in the history of big, dumb bubbles. Its promises are part tulip and part Edsel. A fad mated with a clunker.

It’s still going to erase our existing way of life.

The change is certain, as it’s built into the purpose and applications of the AI we already use as well as imagine. The only uncertainty is when we – the Great Unwashed whom tech titan and AI profiteer Eric Schmidt recently called “normal people” – will realize that our world is fundamentally and irrevocably different.

The change is not going to become apparent in specific corporate revenue or earnings reports.

Oddly, academics and consultants are still struggling to find proof in dollar signs for swapping out paper ledgers for digital tools, even though such digitalization has been underway for almost a quarter century.

What evades their view is that digitalization has already fundamentally changed businesses and the markets and services that support them (suppliers of capital, materiel, and workforces, for starters). Decisions are better and made faster. Systems run more efficiently and reliably, and problems are identified sooner and often before they even happen.

This transformation touches everyone so profoundly that few people even recognize how different things are today from a generation ago.

The change from AI will also not become apparent from some “aha” announcement that a robot has become conscious (the toffs call the achievement “AGI,” for Artificial General Intelligence). We can’t even explain how consciousness functions or where it resides in humans.

We’re “self-aware,” but does that require a self to observe ourselves? After a few thousand years of research and debate, the answer only becomes clear after a night of drinking or smoking weed (and promptly disappears the next morning).

AI researchers operate under the false assumption that their machines are silicon versions of brains and that something magical will happen once they make said machines complicated enough.

But AI doesn’t need consciousness or AGI to transform the world, any more than we humans need it to function within it. Most decisions require relevant information, criteria for assessing it, and context in which to place it, full stop.

Deep pondering about the meaning of work or parenting (or just getting through the day)? It happens, but often after-hours and with the assistance of the aforementioned booze or weed.

Ditto for the merits of waiting for the next publicity stunt at which a robot walks on two legs, waves two arms, and conjures up memories of a Terminator. AI doesn’t need to wait for a body – or ever possess one – to get things done.

Just ask HAL 9000.

No, the AI erasure of the old world and its replacement with a new one is happening, and will continue to happen, gradually, with evidence of its progress hiding in the oddest ways and places.

For instance, here’s a recent story reporting that a factory robot was coded to try to convince a dozen other robots to go on strike.

It succeeded.

Here’s a story about researchers who believe we should prepare to give AIs rights, including that of survival and well-being. Their research paper is here.

And then there’s Eric Schmidt, who appeared at one of many learned conferences at which learned people harrumph their way through convoluted and stillborn narratives about AI, to say that we “normal people” aren’t prepared for what AI is going to do to…I mean for…us.

Maybe AI, seen as akin to the tulip or South Sea crazes, or as an imperfect technical tool like the Edsel or the laser disc, is indeed our era’s latest bubble, or bubbles.

I think the difference is that AIs aren’t going away; rather, they’re going to keep popping up in the strangest and often most interesting places, many of which will evade our attention or understanding.

So, the naysayers are right. They, and we, can’t see how AI is erasing our world and replacing it with something new.

AI Moves In On Poetry

Not only can AI write in the style of famous poets, but it can also improve on their work, according to this story.

Researchers at the University of Pittsburgh asked ChatGPT-3.5 to create poems that would appear to have come from writers like Chaucer, Shakespeare, Dickinson, and Plath. Then, they shuffled the results with actual poems from those artists and asked non-expert readers – i.e. people who didn’t necessarily read much poetry – which ones were written by humans and which ones they liked.

The results weren’t dramatic, but a trend did emerge: Readers thought the AI poems had been written by people, and they preferred them to the real thing. Here is the research abstract.

The deck was (and will always be) stacked against humans when it comes to competing with AI that can train on what artists say and do.

For starters, AI can understand a methodology, style, and tone better than any person, because it can access more direct and related data and compile processes that make explicit the guidelines artists only know implicitly and incompletely.

Shakespeare had no “Writing Shakespeare For Dummies” to follow. He couldn’t mimic himself as well as an AI could even if he tried.

This also means that the AI poems in the study didn’t cover new ground in terms of ideas, per se, but rather presented variations on existing themes. It’s as if Walt Whitman’s poetry were the draft for the AI to refine and present. Doing something better isn’t the same thing as doing something different.

Intentionality also played a part in the research: we can debate what Allen Ginsberg or Dorothea Lasky wanted to communicate in their poems because, well, metaphors, analogies, nuances, and sometimes outright cognitive dissonance are legit tools of the trade.

Poems can “say” many things and/or say them in ways that aren’t easily grasped, which is probably why many of the test participants weren’t poetry readers.

The AI used in the research had no such complex relationship with its art or audience; in fact, it possessed a bias toward synthesizing and delivering ideas in ways that made them more understandable.

Think translator more than creator.

AI-generated content that appears to be “in the style of” other content is already a fact of life in news and the arts, copyright lawsuits notwithstanding. But this research shows that there’s truly no way to differentiate it from what’s real or, more importantly, that the line between real and unreal is either blurred or no longer exists.

Already, this content is appearing in Internet search and informing not only what people think but also the models on which future LLMs train. The job market for poets has gone from utterly horrible to something worse.

The market for those historic poets is also going to change.

I’m just waiting for someone to find a previously unknown Shakespeare play that passes every conceivable litmus test and is accepted as real. New readers will find it easier to read than his other works, and it will change how experts understand his evolution as a playwright. While we have always reinterpreted history, and should, when the basic facts of past events become fluid, we have little to, well, stand on.

While Shakespeare will still “say” things to us from across the ages, we’ll be listening to an AI.

Once AI Controls the Past…

We talk about what AI might do to the future without noting that it’s already taking control of our past.

New Scientist reports that about 1 in 20 Wikipedia pages contain AI-written content. The number is probably much higher, considering how long LLMs have been scraping and preserving online content without distinguishing its human or machine origins.

Those LLMs then use Wikipedia to further train their models.

There’s no need to worry, according to an expert on Wikipedia quoted in the New Scientist article, as the site’s tradition of human-centric editing and monitoring will devise new ways to detect AI-generated content and at least ensure its accuracy.

They will fail or, more likely, the battle has already been lost.

We have been conditioned to prefer information that’s vetted anonymously and/or broadly, especially when it comes to history. Groups of people who once controlled analyses and presentation of the past were inherently biased, or so the so-called logic goes, and their expertise blinded them to details and conclusions that we expected to see in their work.

History wasn’t customer friendly.

So, the Internet gave us access to an amorphous crowd that would somehow aggregate and organize stuff and, if we didn’t like what it presented, the capacity to add our two cents to the conversation.

Wikipedia formalized this theology into a service that draws on a million users as editors, though little more than a tenth of them make regular contributions (according to Wikipedia). Most studies suggest that their work is pretty accurate, at least no less so than the output from the experts they displaced.

But isn’t that because they rely on that output in the first place?

Wiki’s innovation, like file sharing in the ’90s, is less about substance than presentation and availability.

Now, consider that AI is being used to generate the content that Wiki’s lay editors use to form their opinions, even providing them with attributions to sources that appear reputable. Imagine those AI insights aren’t always right, or completely so, but since machines can generate and share them much more quickly than humans could, the content seems broadly consistent.

Imagine that this propagation of content is purposefully false, so that a preponderance of sites emerge that claim the Holocaust didn’t happen, or that sugar is good for you. It won’t matter whether humans or machines are behind such campaigns, because there’s no way we’ll ever know.

And then meet the new Wikibot that passes the Turing Test with ease and makes changes to the relevant Wiki pages (ChatGPT probably passed the test last year). Other Wikibots concur, while the remaining human editors see no reason to disagree, either with the secretly artificial editors or their cited sources.

None of this requires a big invention or leap of faith.

AI already controls our access to information online, as everything we’re shown on our phones and computer screens is curated by algorithms intended to nudge us toward a particular opinion, purchase decision and, most of all, a need to return to said screens for subsequent direction.

I wonder how much of our history, especially the most recent years in which we’ve begun to transition to a society in which our intelligence and agency are outsourced to AI, will survive over time.

Will AI-written history mention that we human beings weren’t necessarily happy losing our jobs to machines? Will it describe how AI destroyed entire industries while enriching a very few beyond belief?

Will we be able to turn to it for remembrance of all the things we lost when it took over our lives, or just find happyspeak entries that repeat the sales pitches of benefits from its promoters?

In his novel 1984, George Orwell wrote:

“Who controls the past controls the future. Who controls the present controls the past.”

I wonder if AI will make that quote searchable 20 years from now.

Pooh-Pooh AI At Our Own Risk

AI pioneer Yann LeCun says that fears of the existential peril of AI are “complete B.S.” and that, in so many words, we’d be fools not to pursue its development.

The Wall Street Journal story is hidden behind a paywall but the headline says it all: “This AI Pioneer Thinks AI Is Dumber Than a Cat.”

We cat owners are not comforted by his assessment, since we know that “dumb” felines are capable of great mischief, exhibit mood shifts that would put any human being under lock and key, and can be downright cruel and deadly to other living things (especially those smaller than them).

They’re proof that doing harm requires no superior intellect. If AIs are like cats, every chatbot and smart system or device should be shut down immediately. The risks are too immense to ignore.

But I digress.

LeCun’s point is that we shouldn’t feel threatened by AI, a position that hasn’t changed since this story ran on the BBC website over a year ago.

Except back then, he said AI would only get as smart as a rat.

What a difference a year makes.

His POV is still a nice aggregation of all the Pollyanna crap that we get from the investors and developers who hope to make billions from selling AI, or from the academics they fund to legitimize their mad intentions.

[NOTE: LeCun is Chief AI Scientist at Meta, which is one of the big tech companies vying to get more powerful AI to the marketplace faster].

Worried that AI will destroy the world? Tsk-tsk, according to the BBC story, it’ll never come up with a reason to do it and we’ll always be able to turn it off.

Scared that AI will get smarter than us? That’s the point, you dolt, and we’ll use it to solve all our problems. Fearful that it’ll take everyone’s jobs? Naw, it’ll inaugurate a new era of job creation that we can’t even imagine.

We have nothing to fear other than the unknown and, as with the development of any other technology, we’ll devise ways to manage it safely once we know what AI can do.

He pooh-poohs AI at our own risk. None of what he says is true, or the whole truth.

He rightly says that we can’t predict what AI will do, but that means we may be surprised by its capabilities and might not have the time or ability to contain them. Presuming that we’ll always possess an off switch is a canard, since it assumes that we’ll know when it’s time to turn something off, or that the problem(s) we hope to avert are the product of a particular device or system.

It also presumes that someone other than a techno-optimist like LeCun won’t be the one with his or her hand on the switch.

The promise that AI will somehow solve all our “big” problems assumes that our problems are technical or even solvable, but they’re not. Whether global or personal, we lack the political, economic, and individual willpower to make the lifestyle changes that we already know are necessary to combat global warming or the incidence of cancer.

Relying on technologies to solve our problems means relying on technologists to define them in the first place, which is a dicey proposition. Remember that social media was supposed to “fix” our problems with collaborating and speaking in the public square, and it instead gave us lives spent in suspicious and angry isolation.

The supercomputers in Colossus: The Forbin Project decide to “fix” the problem of the Cold War by taking away control of government from humans.

And it’s silly to suggest that new jobs will magically appear for the people who lose theirs to AI, since we already know LeCun believes it’s impossible to predict what AI will and won’t do. What if no jobs appear, or it takes generations for them to do so, which is what happened to knitting jobs displaced by looms during the Industrial Revolution? What happens to those ex-workers in the meantime?

What if the jobs that we can’t imagine now turn out to stink compared to the old ones, or simply pay less? Again, this is what happened to many workers during the Industrial Revolution, especially women.

What if those unemployed workers are unqualified for those new jobs, or don’t live anywhere near them? Are we to assume that the governments and individuals that have proven incapable and/or unwilling to do things about our problems today will somehow grow the backbones to do things about them in the future?

And, finally, AI isn’t just another technology tool; it gets smarter and more aware over time, depending on the propensities and wherewithal of its developers and the data available in the environments of its applications.

Good luck as a human worker hoping to stay ahead of AIs’ capabilities. The jobs upheaval will be a never-ending race that people will only win temporarily, if ever.

Saying otherwise isn’t just a hopeful misstatement, it’s a lie.

But it’s what the AI toffs are telling us so that they can pursue their innovation and profit-making fantasies without the encumbrance of legal or moral guidelines.

They pooh-pooh AI at our own risk.