Remembering The World Before AI

As we approach the end of 2024, I’m spending some time committing to memory what it was like to live without AI.

Granted, it’s not possible, at least not completely, since AI is already present in our countertop smart speakers and phone assistants, customer service interactions, and every business meeting recap and homework assignment that takes a nanosecond to complete.

It already lurks behind the scenes, routing airplanes, making insurance coverage decisions, and transforming the chaos of factory floors into choreographed robot dance numbers.

But it’s not everywhere, at least not yet, though there’s a slew of companies, large and small, backed by many billions and staffed by some of the smartest boffins on the planet, that want to change that fact.

I want to be able to tell my grandchildren what it was like before AIs were omnipresent.

A world transformed

What will our lives look like once they’re managed by more and better AIs?

More information will be available to us in more easily accessible ways, thereby deepening our already heavy dependence on the Internet. Our awareness of where that information comes from, or who or what benefits from its propagation, will get cloudier than it is today, as AIs’ inscrutability encourages our trust and our willingness to overlook their occasionally overt inventions.

Our devices and systems will tell us what we should do when they haven’t already decided for us what we will do, get, or know. And we will believe them.

What the tech types call “inefficiency,” I call “experience.” The label obscures the fact that exchanging the freedom to make decisions, right or wrong, for convenience will be a trade that’s irreversible long before we know its cost.

At work, our helpful AIs will continue to step up and assume more and more responsibility until such time as they can do our jobs. This’ll create new jobs and even entire industries for us newly unemployed to consider, only AIs will immediately begin learning and adapting to do those jobs, too.

The race against machines that work faster, better, and more cheaply than humans, which started with the first spinning jennies in the 1770s, won’t just continue but speed up, rendering our ability to win it ever more brief and therefore futile.

AIs may well solve some or all of the “big” problems that we currently face – global warming and cancer, for instance – but they’ll just as likely create new ones that we haven’t yet encountered or can only imagine, like AIs posing as people or cracking every firewall and data security protocol.

How about AIs deciding to change their own code, and thereby choosing to do things in ways they weren’t originally tasked to (or to do new things altogether, whether we like them or not)?

What’s certain is that it’ll take more AIs to address these AI-originating problems.

A chance to remember

Technology has been changing our world ever since Oog first had the idea to roll his dinosaur carcass on wheels instead of dragging it in the dirt.

People used to spend much of their time in, and focused on, their immediate local surroundings. Generations of families would live their lives in the same places, generally, their upbringings and worldviews defined and limited by those surroundings.

Speedy travel and communications at a distance blew up this tradition and labelled it “provincial,” or something worse.

People used to spend vast amounts of time in silence, or at least in moments that gave them space for contemplation. Generations of families would entertain themselves with reading, making their own music, or telling stories that would change every time they were shared.

Media technologies blew up this tradition and labelled it “boring,” giving us a constant stream of content to consume in lieu of stuff of our own invention.

Now, we still live in a world in which we don’t know the “right” answer to every question. Our daily lives are filled with uncertainty, risk, and chances that we might be surprised by events, whether pleasing or discouraging.

AIs will remove these chances from our lives, labelling them “inefficient” and making oodles of money reducing the distance between what we want to do…and what some aggregation of data and/or mercenary interests wants from us.

So, I’m taking a moment whenever I can to savor that uncertainty while I still have the chance.

Happy New Year!

Where’s The AI Regulation That Matters?

It turns out that regulations and expressions of governmental sentiment about AI aren’t just toothless; they also miss entire areas of development that should matter to all of us.

Consider recursive self-improvement.

Recursive self-improvement is the ability of a machine intelligence to edit its own code. Experts describe its operation with loads of obfuscating terms – goal-oriented design, seed improver, autonomous agent – but the idea is simple: imagine an AI that could purposefully change not only its intentions but the ways in which it was “wired” to identify, consider, and choose them.

Would an AI that could decide what and why it wanted to do things be a good thing?

The toffs developing it sure think so.

Machine learning already depends on recursive self-improvement, of a sort. Models are constantly expanded and deepened with more data and the accrued benefits of past decisions, thereby improving the accuracy of future decisions and revealing what new data needs to be added.

But that’s not enough for researchers chasing Artificial General Intelligence. AGI would mean AIs that could think as comprehensively and flexibly as humans. Forget whether the very premise of AGI is desirable, let alone achievable (I believe it’s neither); empowering machines to control their own operation could turbocharge their education.

AIs could use recursive self-improvement to get smarter at finding ways to make themselves smarter.  
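To make the distinction concrete, here’s a toy sketch in Python – not anyone’s real system, and every name and number in it is invented for illustration. An ordinary optimizer tunes a parameter; the recursive twist is that the loop also tunes the knob that governs its own tuning:

```python
import random

# Toy "seed improver" cartoon. fitness() stands in for "how capable is
# the system?"; improve() is the procedure that tries to make it better.
# The recursive part: the loop also mutates the improver's own knob.

def fitness(step_size: float) -> float:
    """Invented capability score; peaks at step_size = 1.0."""
    return -(step_size - 1.0) ** 2

def improve(step_size: float, mutation_scale: float) -> float:
    """The 'improver': nudge the parameter by a random amount."""
    return step_size + random.gauss(0.0, mutation_scale)

step_size, mutation_scale = 5.0, 1.0
for _ in range(200):
    # Level 1: ordinary self-improvement -- keep changes that score better.
    candidate = improve(step_size, mutation_scale)
    if fitness(candidate) > fitness(step_size):
        step_size = candidate
    # Level 2: improve the improver, keeping a new mutation scale if a
    # sample candidate drawn with it scores at least as well as one
    # drawn with the current scale.
    trial_scale = abs(improve(mutation_scale, 0.1))
    if fitness(improve(step_size, trial_scale)) >= fitness(improve(step_size, mutation_scale)):
        mutation_scale = trial_scale

print(f"tuned parameter: {step_size:.2f}, tuned tuner: {mutation_scale:.2f}")
```

Even this cartoon shows the governance problem in miniature: after a few hundred iterations, the procedure doing the improving is no longer the one any human wrote down.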

AI propagandists cite wondrous benefits of such smart machines, ranging from solving big problems more quickly to providing little services to us humans more frequently and efficiently.

What they don’t note is that AIs that can change their own code will function wholly outside of human oversight, their programming adaptations potentially obscured and their rationales inscrutable.

How could anybody think this is a good idea?

Nobody does, really, except the folks hoping to profit from such development before it does something stupid or deadly.

It’s the kind of thing that government regulators should regulate, only they don’t, probably because they buy the propaganda coming from said folks about the necessity of unimpeded innovation (or the promises of the wondrous benefits I noted a bit ago).

Or maybe they just don’t know enough about the underlying tech to even know there’s a potentially huge problem, or they’re too scared to question it because their incomplete knowledge might make them look like fools.

I wonder what other development or application issues that should matter to us are progressing unknown and unregulated.

If you ever thought that governments were looking out for us, you thought wrong.

A Musician And His AI

Generative AI has the power to cause great harm to musicians’ revenues, says the musician who has a computer-generated version of himself performing every night in London and thereafter sending him revenue checks.

The musician is Björn Ulvaeus, a founding member of the pop quartet Abba, who is 79 years old and relaxes comfortably on his private island outside Stockholm while his 30-ish avatar performs in a specially built concert venue, thanks to a specially created technology called ABBAtar.

He was quoted in the Financial Times reacting to a study that projected musicians might lose a fifth of their revenue to AI, primarily because the tech will get ever better at mimicking their work. Abba participated in a lawsuit last year against two AI startups that produced songs that sounded eerily like the originals (“Prancing Queen” was cited as one example that probably didn’t prompt a revenue check for Ulvaeus).

The guy is otherwise very bullish on AI, saying it represents the “biggest revolution” ever seen in music, and that it could take artists in “unexpected directions.”

There’s a ton to unpack here, but I’ll just focus on two issues:

First, he’s all for AI if it is obedient and its operators are faithful to the letter and spirit of the law.

Good luck with that.

Regulations and actions have been announced or are in development in hopes of policing AI use, primarily focused on protecting privacy and prohibiting bias. This blather is too little, too late: No amount of bureaucratic oversight can ensure that even the most rudimentary LLM is coded, used, or trained according to any set of rules.

It’s like teaching a roomful of young kids the difference between right and wrong and then expecting them to follow those rules as adults; policing compliance after the fact is really nothing more than make-work for prisons and police departments.

Similarly, AI makers will just get richer trying and failing to create tools to fulfill government’s misplaced hopes of policing their creations.

When it comes to musicians and copyright on their songs, the very premise of copyrighting a pattern of musical notes is unsettled law…in a sense, every song incorporates chords and/or snippets of melodies that have been used before…and we’ve not needed AI to push the hazy limits of this issue up to now.

Consider this: You find a present-day LLM on the Internet that has been trained on popular music and the tenets of musical theory. The model runs on some server hidden behind a litany of geographic and virtual screens, so the cops can’t shut it down. And then you ask it to produce a playlist of “new” Beatles songs, not to sell but simply for your personal enjoyment. Daily, your playlists are filled with songs from your favorite artists that you’ve never heard before.

It won’t just cut into musicians’ revenue. It’ll replace it.

Second, the idea that AI and human musicians can somehow forge partnerships that take music in “unexpected directions” ignores the fundamental premise of AI:

It only moves in directions that have already been taken, and is therefore wholly expected.

Current LLMs don’t invent new ideas; rather, they aggregate existing ones and synthesize them into the most likely answers to questions posed to them (within the guidelines set by their programmers).
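Here’s a deliberately crude sketch of that mechanic: a toy bigram model, nothing like a production LLM, trained on an invented nine-word corpus. The point survives the simplification – generation is recombination:

```python
from collections import Counter, defaultdict

# A toy bigram "language model" -- a crude stand-in for an LLM, meant
# only to show that generation recombines patterns already present in
# the training data. The corpus is invented for this example.
corpus = "love me do love me do love me true".split()

# Count which word follows which.
transitions = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word][next_word] += 1

def generate(start, length):
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        # Always emit the statistically most likely continuation.
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(generate("love", 6))  # -> "love me do love me do love"
# Every word and every word-pair it emits already appeared in the
# corpus -- and "true", the rarest continuation, never gets emitted
# at all. Scale buys fluency, not escape from the data.
```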

An AI in the recording studio isn’t an equal collaborating partner but rather an advanced filing system.

So, maybe it could cite how many times a particular chord or transition had been used before, or suggest lyrics that might work with a melody, but it wouldn’t be a composer.

A human composer might use the “wrong” chord for all of the “wrong” but otherwise “right” reasons.  

This raises loads of intriguing questions about the role of technology in the arts more generally, like whether it simply exerts a normative influence on the extremes of artistic endeavor.

Art created with the assistance of tech tends to look and sound like other art created with the assistance of tech. Throw in the insights tech can provide on potential audience reaction and you get less of an artistic process than a production system.

Will the advent of AI in music unleash human expression or squash it?

We’ll never know, because that genie is already out of the bottle, starting with its less intelligent cousins already mapping song structures, generating sounds, playing percussion, and correcting the pitch whenever human singers dare to add something of their own.

As AI gets smarter – and that’s inevitable, even if we can quibble about timing – I fear that music will get dumber. Even more fantastically, what happens when performing avatars decide to generate their own versions of popular songs?

Next thing you know, those AIs will want the revenue checks.

Will An AI Monkey Ever Replicate Shakespeare?

Is hoping for consciousness to emerge from an ever-more complicated AI like waiting for a monkey to create a verbatim copy of Hamlet?

I think it might be, though the toffs building and profiting from AI believe otherwise.

Fei-Fei Li, an AI pioneer, recently wrote in The Economist that she believes that teaching AIs to recognize things and the contexts in which they’re found – to “see,” quite literally – is not only the next step that will allow machines to reach statistically reliable conclusions, but that those AIs will

…have the spatial intelligence of humans…be able to model the world, reason about things and places, and interact in both time and 3D space.

Such “large world models” are based on object recognition, which means giving AIs examples of the practically infinite ways, say, a certain chair might appear across different environments, distances, angles, and lighting conditions, and then coding ways for them to see similarities among differently constructed chairs.
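In today’s practice, “seeing similarities” means something far more mundane than grasping form: images are turned into vectors of numbers, and similarity is just geometry. A minimal sketch, with invented embeddings standing in for what a real vision model would produce:

```python
import math

# "Seeing similarity" as vector geometry. A real system would get these
# embeddings from a vision model; the numbers here are invented.
embeddings = {
    "office chair":  [0.9, 0.8, 0.1],
    "rocking chair": [0.8, 0.9, 0.2],
    "coffee table":  [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

query = embeddings["office chair"]
for name, vector in embeddings.items():
    print(f"{name}: {cosine_similarity(query, vector):.3f}")
# The two chairs score near 1.0 against each other; the table doesn't.
# That's the whole trick: an angle between vectors, not an encounter
# with chair-ness.
```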

From all that data, they’ll somehow grasp form, or the Platonic ideal of chair-ness, because they won’t just recognize it but understand it…and not rely on word models to pretend that they do. Understanding suggests awareness of presence.

It’s a really cool idea, and there’s scientific evidence outside of the AI cheering section to support it.

A recent story in Scientific American explained that bioscience researchers are all but certain that human beings don’t need language to think. It’s obvious that animals don’t need words to assess situations and accomplish tasks, and experiments have revealed that the human brain regions associated with thought and language processing are not only different but don’t have reliably obvious intersections.

So, Descartes and Chomsky got it backwards: “I am, therefore I think” is more like it.

Or maybe not.

What’s the code in which thought is executed? Where is the awareness that is aware of its own thinking located, and how does it function?

Nobody knows.

I have long been fascinated by the human brain’s capacity to capture, correlate, access, and generate memories and the control functions for our bodies’ non-autonomic actions. How does it store pictures or songs? The fact that I perceive myself somehow as its operator has prompted thousands of years of religion and myth in hopes of explaining it.

Our present-day theology of brains as computers provides an alternative and comforting way of thinking about the problem but little in the way of an explanation.

If human language is like computer code, then what’s the medium for thought in either machine? Is spatial intelligence the same thing as recognition and awareness, or is the former dependent on language as the means of communication (both internally, so it can be used to reach higher-order conclusions, and externally, so that information can be transmitted to others)?

And, if that mysterious intersection of thought and language is the algorithm for intelligence, is it reasonable to expect that it will somehow emerge from processing an unknown critical threshold of images?

Or is it about as likely as a monkey randomly replicating every word of a Shakespearean play?
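For what it’s worth, the arithmetic behind the metaphor is easy to sketch, assuming a 27-key typewriter (26 letters plus a space) and roughly 130,000 characters in Hamlet – both loose approximations:

```python
import math

# Odds of one monkey typing Hamlet perfectly on one attempt, under the
# stated assumptions (27 keys, ~130,000 characters -- both approximate).
keys = 27
characters = 130_000

# The probability is (1/keys) ** characters -- far too small for a
# float, so compute its base-10 exponent instead.
exponent = characters * math.log10(keys)
print(f"roughly 1 in 10^{exponent:,.0f}")  # ~1 in 10^186,000
# For scale: the observable universe holds only about 10^80 atoms.
```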

Ms. Li says in her article that this belief in emergent intelligence is already yielding results in computer labs and that we humans will be the beneficiaries of that evolution.

Since her essay appeared in The Economist’s annual “The World Ahead” issue, shouldn’t there also have been one pointing out the possible inanity of waiting for her techno-optimist paean to come true?

More to the point, how about an essay questioning why anybody would think that a monkey replicating Shakespeare was a good idea?