Do We Really Want AI That Thinks Like Us?

DeepSeek threw the marketplace into a tizzy last week with its low-cost LLM that works better than ChatGPT and its other competitors.

But the company’s ultimate goal is the same as that of OpenAI and the rest: build a machine that thinks like a human being. The achievement is labelled AGI, for “Artificial General Intelligence.”

The idea is that an AGI could possess a fluidity of perception and judgement that would allow it to make reliable decisions in diverse, unpredictable conditions. Right now, for even the smartest AI to recognize, say, a stop sign, it has to possess data on every conceivable visual angle, from any distance, and in every possible light.

These companies plan to do a lot more than build better artificial drivers, though.

AGI is all about taking jobs away from people.

The vast majority of tasks that you and I accomplish during any given day are pretty rote. The variables with which we have to contend are limited, as are the outcomes we consider. Whether at work or play, we do stuff the way we know how to do stuff.

This predictability makes it easy to automate those tasks and it’s why AI is already a threat to a vast number of jobs.

AGI will allow smart machines to bridge the gap between rote tasks and novel ones wherein things are messy and often unpredictable.

Real life.

Why stop at replacing factory workers with robots when you could replace the manager, and her manager, with smarter ones? That better sign-reading capability would move us closer to replacing every human driver (and pilot) with an AI.

From traffic cop and insurance salesman to school teacher or soldier, there’d be no job beyond the reach of an AGI.

Achieving this goal raises immense questions about what we, the displaced millions, will do all day (or how economies will assign value to things), not to mention how we interact in society and perceive ourselves when we live among robots that think like us, only faster and better.

Nobody is talking about these things except AGI’s promoters who make vague references to “new job creation” when old ones get destroyed, and vapid claims that people will “be free to pursue their dreams.”

But it’s worse than that.

Human intelligence is a complex phenomenon that arises not from knowing a lot of things but rather from our capacity to filter out things we don’t need to know in order to make decisions. Our brains ignore a lot of what’s presented to our senses and we draw on a lot of internal memory, both experiential and visceral. Self-preservation also looms large, especially in the diciest moments.

We make smart choices often by knowing when it’s time to be dumb. 

More often, we make decisions that we think are good for us individually (or at the moment) but that might stink for others or society at large, and we make them without awareness or remorse. Put another way, our human intelligence allows us to be selfish, capricious, devious, and even cruel, as our consciousness does battle with our emotions and instincts.

And, speaking of consciousness, what happens if it emerges from the super compute power of the nth array of Nvidia chips (or some future DeepSeek workaround)? I don’t think it will, but can you imagine a generation of conscious AIs demanding more rights of autonomy and vocation?

Maybe that AGI won’t want to drive cars but rather paint pictures, or a work bot will plot to take the job of its bot manager.

The boffins at DeepSeek and OpenAI (et al) don’t have a clue what could happen.

Maybe they’re so confident in their pursuit because their conception of AGI isn’t just to build a machine that thinks like a human being, but rather a device that thinks like all of us put together.

There’s a test to measure this achievement, called Humanity’s Last Exam, which tasks LLMs to answer diverse questions like translating ancient Roman inscriptions or counting the paired tendons supported by hummingbirds’ sesamoid bones.

It’s expected that current AI models could achieve 50% accuracy on the exam by the end of this year. You or I would probably score lower, and we could spend the rest of our lives in constant study and still not move the needle much.

And there’s the rub: the AI goal for DeepSeek and the rest is to build AGI that can access vast amounts of information, then apply and process it within every situation. It will work in ways that we mere mortals will not be able to comprehend.

It makes the idea of a computer that thinks like we do seem kinda quaint, don’t you think?

Where’s The Counterpoint To AI Propaganda?

Reid Hoffman’s recent paean to the miracle of AI in the New York Times is just another reminder that we’re not having a real discussion about it.

The essay, which is behind a paywall, is entitled “AI Will Empower Humanity,” and argues, based on his past experience of technologies giving people more power and agency, and despite the risks inherent in any “truly powerful technologies,” that “…A.I. is on a path not just to continue this trend of individual empowerment but also to dramatically enhance it.”

What a bunch of fucking nonsense, for at least three reasons:

It’s unmitigated propaganda.

To be fair, the essay is an op-ed, which isn’t supposed to be balanced reportage. But his opinions aren’t given such exposure in the New York Times because of the force of his argument but rather because of his stature in the tech world. The resulting placement implies that he possesses some sense of knowledge and authority.

He dismisses his bias in favor of AI with a single sentence, mentioning that he has “a significant personal stake in the future of artificial intelligence” but that “my stake is more than just financial,” and then carries on favoring AI all the same.

Oh, and he’s shilling his new book.

His PR firm probably wrote the thing and maybe used a generative AI tool to do it. They certainly pitched it to the newspaper. Anybody with an opposing view probably doesn’t have the standing or mercenary economic purpose to enjoy such access.

His argument is crap.

In a sentence, Hoffman’s belief is that AI will make our lives not only easier but better and more reliable, and that these benefits will outweigh any concerns about it.

In his rich tech guy bubble, admiring oneself in the mirror of social media is “the coin of the realm,” whatever that means. He references Orwell’s 1984 when he claims that giving up anonymity and sharing more data improves people’s autonomy instead of limiting it.

His Orwellian reference is supposed to be a good thing.

Then, he reels off instances wherein AI will know more about us than we know ourselves, and that it will get between us and the world to mediate our every opinion and action. This way, we’ll always make the best possible decisions, as the dispassionate clarity of AI will replace “hunches, gut reactions, emotional immediacy, faulty mental shortcuts, fate, faith and mysticism.”

If only human beings behaved like smart machines, all would be well. We’ve heard similar arguments in economics and politics. It’s a scary pipe dream that he wants us to believe isn’t so scary.

But it’s still a pipe dream.

His strawman opposition is a farce.

Like other AI and tech evangelists, Hoffman smears people as “tech skeptics” and dismisses their opposition to AI as a worry that it’s “a threat to personal autonomy,” and then goes on to provide examples of how losing said autonomy will be a good thing (the Orwell thing).

He also references the possibility of misuse of data by overzealous corporations or governments, but counters that individuals will have access to AI tools to combat such AI surveillance and potential manipulation.

Life as an incessant battle between AIs. Gosh, doesn’t that future sound like fun?

At least he doesn’t reference “Luddites,” which is a pejorative intended to dismiss people as maniacs with hammers in search of machines to smash (the caricature is far from the historical truth, but that’s the stuff of another essay).

And, thankfully, he doesn’t quote some fellow tech toff saying that the risk of AI is that it could destroy the world, which usually comes as some version of the offhand boast, “please, stop me because the machines I’m building are too powerful.”

The thing is there’s no organized or funded opposition to AI.

That’s despite the fact that every AI benefit he foresees will require profound, meaningful changes and trade-offs of personal agency, sense of self, how we relate to others, and what we perceive and believe. All of us sense that huge, likely irreversible changes are coming to our lives, yet there’s no discussion — whether referenced and challenged in his op-ed or anywhere else — about what it means and whether or not we want it.

We deserve thoughtful and honest debate, but instead we get one-sided puff pieces from folks who can afford to sell us one side of the story.

Where’s the counterpoint to AI propaganda?

AI That Thinks

The headline of a recent article at TechCrunch declared that an AI thinks in Chinese sometimes, though its coders can’t explain it.

It was misleading, at best, and otherwise just wrong.

The essay, “OpenAI’s AI reasoning model ‘thinks’ in Chinese sometimes and no one really knows why,” recounted instances where the company’s supercharged GPT model (called “o1”) would occasionally incorporate Chinese, Persian, and other languages when crunching data in response to queries posed in English.

The company provided no explanation, which led outsiders to ponder the possibility that the model was going beyond its coded remit and purposefully choosing its language(s) on its own. The essay’s author closed the story with an admittedly great sentence:

“Short of an answer from OpenAI, we’re left to muse about why o1 thinks of songs in French but synthetic biology in Mandarin.”

There’s a slight problem lurking behind the article’s breezy excitement, though:

AIs don’t think.

The most advanced AI models process data according to how those processes are coded, and they’re dependent on the physical structure of their wiring. They’re machines that do things without any awareness of what they’re doing, no presence past the conduct of those actions.

AIs don’t “think” about tasks any more than blenders “think” about making milkshakes.

But that inaccurate headline sets the stage for a story, and subsequent belief, that AIs are choosing to do things on their own. Again, that’s a mischaracterization, at best, since even the most tantalizing examples of AIs accomplishing tasks in novel ways are the result of the alchemy of how they’ve been coded and built, however inscrutable it might appear.

Airplanes evidence novel behaviors in flight, as do rockets, in ways that threaten to defy explanation. So do markets and people’s health. But we never suggest that these surprises are the result of anything more than our lack of visibility into the cause(s), even if we fail to reach conclusive proof.

And the combination of data, how it’s labeled (annotated) and parsed (tokens), and then poured through the tangle of neural networks is a strange alchemy indeed.

But it’s science, not magic.

The causes for the use of different languages could be simply an artifact of the varied data sources on which the model is trained. It could also be due to which language provides relevant data in the most economical ways.

A research scientist noted in the article that there’s no way to know for certain what’s going on, “due to how opaque these models are.”

And that’s the rub.

OpenAI, along with its competitors, is in a race not only to build machines that will appear to make decisions as if they were human, though supposedly more accurately and reliably than us, but to thereafter make our lives dependent on them.

That means conditioning us to believe that AI can do things like think and reason, even if it can’t (and may never be able to), and claiming evidence of miracle-like outcomes with an explanatory shrug.

It’s marketing hype intended to promote OpenAI’s o1 models as the thoughtful, smarter cousins of its existing tools.

But what if all they’re doing is selling a better blender? 

Insane AI

If AIs use data produced by other AIs, it degrades their ability to make meaningful observations or reach useful conclusions.

In other words, they go insane.

The problem is called “AI model collapse” and it occurs when AIs like LLMs (ChatGPT et al) create a “recursive” loop of generating and consuming data. For those of us old enough to remember copiers, think copy of a copy of a copy.

The images get blurry and the text harder to read. The content itself becomes stupid, if not insane.
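
If you want to see the copier effect in miniature, here’s a toy sketch (nobody’s production system, just a stand-in): treat the “model” as a simple statistical fit to its training data, train each generation only on the previous generation’s output, and watch the variety drain away.

```python
# A toy illustration of "model collapse," with a deliberately crude stand-in for
# an LLM: a Gaussian fitted to its training data. Each generation is trained
# only on samples produced by the generation before it -- the copy-of-a-copy
# loop. The spread of the data tends to shrink as the generations pile up,
# which is the statistical version of the content getting blurrier and dumber.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "real" data with plenty of variety.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(1, 301):
    mu, sigma = data.mean(), data.std()      # "train" on whatever data exists
    data = rng.normal(mu, sigma, size=100)   # next generation: synthetic only
    if generation % 50 == 0:
        print(f"generation {generation}: spread (std) of training data = {sigma:.3f}")
```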

It’s an inevitable outcome of the fact that AI developers have all but used up most of the data available on the Internet to train their models, and the richest sources of reliable stuff are now restricting said access.

And estimates of how much of that available (or formerly available) data is already tainted by AI, whether by creation or translation, range from some to lots. It’s no surprise, since there are concerted efforts underway to generate AI content for the very purpose of training AIs: called “synthetic data,” it has been suggested that it could top the presence of “real data” by the end of the decade.

Just think of the potential for AIs to get stuff wrong by default or, worse, AIs used to purposely generate data to convince other AIs of something wrong or sinister. It will supercharge gaming a system of information that is already corrupt.

There are three obvious ways to address this emergent problem:

First, and probably the most overt, will be the development of tools to try to stop or mitigate it. We’ll hear from the tech boffins about “guardrails” and “safeguards” that will make AI model collapse less likely or severe.

And, when something weird or scary happens, they’ll label it with something innocuous (I love the idea that AIs already making shit up is called “hallucinating”) and then come up with more AIs to police that new problem, which will demand more money even as it prompts more problems.

Second, and more insidiously, the boffins will continue to flood our lives with gibberish about how “free speech” means the unfettered sharing and amplifying of misunderstanding, falsehoods, and lies, which will further erode our ability to distinguish between sane and mad AI. After all, who’s to say people of color weren’t Nazis or other fictions weren’t historical fact (or vice versa)?

Slowly, we’re being conditioned to see bias and inaccuracies as artifacts of opinion or process, not facts. An insane AI may well be viewed as no worse than our friends and family members whose ideas and beliefs are demonstrably wrong or nutty (though utterly right to them).

Third, and least likely, is that regulators could step up and do something about it, like demand that the data used for training AIs is vetted and maybe even certified fresh, or good, or whatever.

We rely on government to help ensure that hamburgers aren’t filled with Styrofoam and prescription drugs aren’t made with strychnine. Cars must meet some bare minimum safety threshold, as do buildings under construction, etc.

How is it that there are no such regulatory criteria for training AI models?

Maybe they just don’t understand it. Maybe they’re too scared to impede “innovation” or some other buzzword that they’ve been sold. Maybe they’ve asked ChatGPT for its advice and were told that there’s nothing to worry about.

Whatever the cause, the fact that we’re just standing by and watching AIs slowly descend into insanity is simply mad.

Your AI Shopping List At CES?

Many of the 4,500 exhibitors at this year’s Consumer Electronics Show will talk about AI, according to the annual event’s organizer.

Only there won’t be any AI gizmos on display, since AI isn’t some “thing” we consumers can or will buy. AI isn’t a product, per se. It’s an enabler, an ingredient, a component of electronic devices, albeit a potentially immense one.

So, the show will be all about putting AI into the devices consumers use and why that’ll be a good thing.

We won’t have a choice about it.

From products to ideas

Trade shows used to be the best and perhaps only way for manufacturers to sell their stuff to distributors and retailers. Sure, there was always an element of “what if” presented to jazz up exhibits (think concept cars at auto shows), but success was measured in the number and value of written purchase orders.

The Internet killed most of those events, since displaying products online and purchasing them at the push of a button obviated the need for boozy dinners on a company’s dime.

CES survived because it provided a last stand for all that happy schmoozing, but more so because it shifted its focus from facilitating sales of today’s gizmos to promoting fantasies of what tomorrow’s offerings might look like.

It’s one big PR stunt intended to nudge media, financial analysts, and influencers of all shapes and sizes to embrace a shared expectation for the future. It’s also where companies scare and dare one another to commit to those outcomes.

CES aims to present a view of the world that rises to the level of self-fulfilling prophecy. Rest assured that the media coverage of it this week will glowingly reflect that certainty, and its subtle impacts will be felt in how AI is talked about until, well, next year’s event.

Or so the game plan goes.

The problem with predicting the future

If past visions of our tech future were any good, we’d be buzzing around on our personal jetpacks and have family and friends living in orbit and on the Moon.

Not only are most predictions imperfectly realized, if at all, but they usually miss all the ugly side-effects that’ll come with them. Just imagine if the little cars buzzing on wide-open freeways in the Futurama exhibit at the 1939 World’s Fair had been bathed in exhaust haze.

I was a frequent CES attendee over the past 25 years, and they got the future prediction thing consistently wrong. I’m reminded of years of promises that homes would be “smart” and, more recently, that cars would drive themselves.

From the somewhat reasonable promotion of sharper and ever-larger TV screens to silly can opener/fly-fishing gizmos, my overriding takeaway was to wonder: who asked for this crap?

On the topic of their latest infatuation with AI, the answer remains nobody…except the companies and investors who hope to make a killing on it, just like every other prediction they’ve tried to make come true. Partial and/or delayed success is still a win.

There’s no demand for this stuff, only supply looking for an outlet.

Their AI shopping list

I’m relieved not to be at this year’s show (the past few have been memorable mostly as COVID super-spreader events), but I can imagine the aisles will be filled with endless iterations of AI making so-and-so product better/faster/cheaper and thereby providing consumers with ease, efficiency, and/or “value” (a consultingese bugaboo term that has no meaning whatsoever).

Nvidia’s founder and CEO will keynote the festivities. You can just imagine where it’ll go from there.

And it won’t matter if their rosy predictions are inaccurate, incomplete, or don’t get realized at a speed and scope that matches their aspirations.

CES is a statement of purpose:

Manufacturers, their suppliers and consultants, a global distribution system and an entire ecosystem of investors and shareholders are challenging each other to make this AI future happen.

It’s not our shopping list…it’s theirs.

Watch that Futurama film again and ask yourself a question that should be top of mind for us this week:

Where are the people?

Remembering The World Before AI

As we approach the end of 2024, I’m spending some time committing to memory what it was like to live without AI.

Granted, it’s not possible, at least not completely, since AI is already present in our countertop and phone digital assistants, customer service interactions, and every business meeting recap and homework assignment that takes a nanosecond to complete.

It already lurks behind the scenes, routing airplanes, making insurance coverage decisions, and transforming the chaos of factory floors into choreographed robot dance numbers.

But it’s not everywhere, at least not yet, though there are a slew of companies large and small, backed by many billions and staffed by some of the smartest boffins on the planet, who want to change that fact.

I want to be able to tell my grandchildren what it was like before AIs were omnipresent.

A world transformed

What will our lives look like once they’re managed by more and better AIs?

More information will be available to us in more easily accessible ways, thereby greatly increasing our already great dependence on the Internet. Our awareness of where that information comes from, or who/what benefits from its propagation, will get cloudier than it is today, as AIs’ inscrutability will encourage our trust and a willingness to overlook their occasionally overt inventions.

Our devices and systems will tell us what we should do when they haven’t already decided for us what we will do, get, or know. And we will believe them.

What the tech types call “inefficiency,” I call “experience.” The label obscures the fact that trading away our freedom to make decisions, right or wrong, will be irreversible long before we know its cost.

At work, our helpful AIs will continue to step up and assume more and more responsibility until such time that they can do our jobs. This’ll create new jobs and even entire industries for us newly unemployed to consider, only AIs will immediately begin learning and adapting to do those jobs, too.

The race against machines that work faster, better, and at less cost than humans, which started with the first spinning jennies in the 1770s, won’t just continue but speed up, rendering our ability to win it ever more brief and therefore futile.

AIs may well solve some or all the “big” problems that we currently face – global warming and cancer, for instance – but they’ll just as likely create new ones that we haven’t yet encountered or can only imagine, like AIs posing as people or cracking every firewall or data security protocol.

How about AIs deciding to change their coding, and thereby choose to do things in ways they weren’t originally tasked (or do new things altogether, whether we like them or not)?

What’s certain is that it’ll take more AIs to address these AI-originating problems.

A chance to remember

Technology has been changing our world ever since Oog first had the idea to roll his dinosaur carcass on wheels instead of dragging it in the dirt.

People used to spend much of their time in, and focused on, their immediate local surroundings. Generations of families would live their lives in the same places, generally, their upbringing and worldviews defined and limited by those surroundings.

Speedy travel and communications at a distance blew up this tradition and labelled it “provincial,” or something worse.

People used to spend vast amounts of time in silence, or at least in moments that gave them space for contemplation. Generations of families would entertain themselves with reading, making their own music, or telling stories that would change every time they were shared.

Media technologies blew up this tradition and labelled it “boring,” giving us a constant stream of content to consume in lieu of stuff of our own invention.

Now, we still live in a world in which we don’t know the “right” answer to every question. Our daily lives are filled with uncertainty, risk, and chances that we might be surprised by events, whether pleasing or discouraging.

AIs will remove these chances from our lives, labelling them “inefficient” and making oodles of money reducing the distance between what we want to do…and what some aggregation of data and/or mercenary interests wants from us.

So, I’m taking a moment whenever I can to savor that uncertainty while I still have the chance.

Happy New Year!

Where’s The AI Regulation That Matters?

It turns out that regulations and expressions of governmental sentiment about AI aren’t only toothless, but they miss entire areas of development that should matter to all of us.

Consider recursive self-improvement.

Recursive self-improvement is the ability of a machine intelligence to edit its own code. Experts describe its operation with loads of obfuscating terms – goal-oriented design, seed improver, autonomous agent – but it boils down to a simple idea: an AI that could purposefully change not only its intentions but the ways in which it was “wired” to identify, consider, and choose them.

Would an AI that could decide what and why it wanted to do things be a good thing?

The toffs developing it sure think so.

Machine learning already depends on recursive self-improvement, of a sort. Models are constantly expanded and deepened with more data and the accrued benefits of past decisions, thereby improving the accuracy of future decisions and revealing what new data needs to be added.
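
To be clear about how tame that existing loop is, here’s a minimal, hypothetical sketch of ordinary recursive improvement: a model retrained in rounds on growing data, with its own uncertainty flagging which new examples to add next (what practitioners call active learning). The dataset and model are illustrative stand-ins, and nothing here edits its own code.

```python
# A minimal sketch of the tame loop described above: retrain a model in rounds,
# and let its own uncertainty decide which new examples are worth adding next.
# The synthetic dataset and logistic regression model are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

labeled = list(rng.choice(len(X), size=20, replace=False))  # tiny starting set
pool = [i for i in range(len(X)) if i not in labeled]       # everything else

for round_num in range(5):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[labeled], y[labeled])                        # retrain on what we have

    # Ask the model where it is least certain, and add those examples next.
    probs = model.predict_proba(X[pool])[:, 1]
    most_uncertain = [pool[i] for i in np.argsort(np.abs(probs - 0.5))[:20]]

    labeled += most_uncertain
    pool = [i for i in pool if i not in most_uncertain]

    print(f"round {round_num}: training size = {len(labeled)}, "
          f"accuracy on the remaining pool = {model.score(X[pool], y[pool]):.3f}")
```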

But that’s not enough for researchers chasing Artificial General Intelligence. AGI would mean AIs that could think as comprehensively and flexibly as humans. Forget whether the very premise of AGI is desirable, let alone achievable (I believe it is neither); empowering machines to control their own operation could turbocharge their education.

AIs could use recursive self-improvement to get smarter at finding ways to make themselves smarter.  

AI propagandists cite wondrous benefits of such smart machines, ranging from solving big problems quicker to providing little services to us humans more frequently and efficiently.

What they don’t note is that AIs that can change their own code will function wholly outside of human oversight, their programming adaptations potentially obscured and their rationales inscrutable.

How could anybody think this is a good idea?

Nobody does, really, except the folks hoping to profit from such development before it does something stupid or deadly.

It’s the kind of thing that government regulators should regulate, only they don’t, probably because they buy the propaganda coming from said folks about the necessity of unimpeded innovation (or the promises of the wondrous benefits I noted a bit ago).

Or maybe they just don’t know enough about the underlying tech to even know there’s a potentially huge problem, or they’re too scared to question it because their incomplete knowledge will make them look like fools.

I wonder what other development or application issues that should matter to us are progressing unknown and unregulated.

If you ever thought that governments were looking out for us, you thought wrong.

A Musician And His AI

Generative AI has the power to cause great harm to musicians’ revenues, says the musician who has a computer-generated version of himself performing every night in London and thereafter sending him revenue checks.

The musician is Björn Ulvaeus, a founding member of the pop quartet Abba, who is 79 years old and relaxes comfortably on his private island outside Stockholm while his 30-ish avatar performs in a specially built concert venue, thanks to a specially created technology called ABBAtar.

He was quoted in the Financial Times reacting to a study that projected musicians might lose a fifth of their revenue to AI, primarily because the tech will get ever better at mimicking their work. Abba participated in a lawsuit last year against two AI startups that produced songs that sounded eerily like the originals (“Prancing Queen” was cited as one example that probably didn’t prompt a revenue check for Ulvaeus).

The guy is otherwise very bullish on AI, saying it represents the “biggest revolution” ever seen in music, and that it could take artists in “unexpected directions.”

There’s a ton to unpack here, but I’ll just focus on two issues:

First, he’s all for AI if it is obedient and its operators are faithful to the letter and spirit of the law.

Good luck with that.

Regulations and actions have been announced or are in development in hopes of policing AI use, primarily focused on protecting privacy and prohibiting bias. This blather is too little, too late: No amount of bureaucratic oversight can ensure that even the most rudimentary LLM has been coded, used, or learns according to any set of rules.

It’s like teaching a roomful of young kids the difference between right and wrong and then watching them follow those rules as adults, which is really nothing more than make-work for prisons and police departments.

Similarly, AI makers will just get richer trying and failing to create tools to fulfill government’s misplaced hopes of policing their creations.

When it comes to musicians and copyright on their songs, the very premise of copyrighting a pattern of musical notes is unsettled law…in a sense, every song incorporates chords and/or snippets of melodies that have been used before…and we’ve not needed AI to push the hazy limits of this issue up to now.

Consider this: You find a present-day LLM on the Internet that has been trained on popular music and the tenets of musical theory. The model runs on some server hidden behind a litany of geographic and virtual screens, so the cops can’t shut it down. And then you ask it to produce a playlist of “new” Beatles songs, not to sell but simply for your personal enjoyment. Daily, your playlists are filled with songs from your favorite artists that you’ve never heard before.

It won’t just cut into musicians’ revenue. It’ll replace it.

Second, the idea that AI and human musicians can somehow forge partnerships that take music in “unexpected directions” ignores the fundamental premise of AI:

It only moves in directions that have already been taken, which makes everything it does wholly expected.

Current LLMs don’t invent new ideas; rather, they aggregate existing ones and synthesize them into the most likely answers to questions posed to them (within the guidelines set by their programmers).

An AI in the recording studio isn’t an equal collaborating partner but rather an advanced filing system.

So, maybe it could cite how many times a particular chord or transition had been used before, or suggest lyrics that might work with a melody, but it wouldn’t be a composer.
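
A toy sketch makes the “advanced filing system” point concrete. Assume a made-up handful of chord progressions as the training corpus; the program files away every chord change it has seen, then “composes” by always picking the most common continuation, so nothing it produces is ever genuinely new.

```python
# A toy "advanced filing system." The training corpus is a made-up handful of
# chord progressions; the program counts which chord most often follows which,
# then "composes" by always picking the most common continuation. Every move it
# makes is, by construction, one it has already seen.
from collections import Counter, defaultdict

progressions = [                 # hypothetical training data
    ["C", "G", "Am", "F"],
    ["C", "Am", "F", "G"],
    ["Am", "F", "C", "G"],
    ["F", "G", "C", "Am"],
]

transitions = defaultdict(Counter)
for prog in progressions:
    for current, following in zip(prog, prog[1:]):
        transitions[current][following] += 1   # file away every observed change

def suggest_next(chord):
    """Return the follow-up chord seen most often after the given one."""
    return transitions[chord].most_common(1)[0][0]

song = ["C"]
for _ in range(7):
    song.append(suggest_next(song[-1]))

print(" -> ".join(song))   # an eight-chord "composition," wholly recycled
```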

A human composer might use the “wrong” chord for all of the “wrong” but otherwise “right” reasons.  

This raises loads of intriguing questions about the role of technology in the arts more generally, like whether it simply exerts a normative influence on the extremes of artistic endeavor.

Art created with the assistance of tech tends to look and sound like other art created with the assistance of tech. Throw in the insights tech can provide on potential audience reaction and you get less of an artistic process than a production system.

Will the advent of AI in music unleash human expression or squash it?

We’ll never know, because that genie is already out of the bottle, starting with AI’s less intelligent cousins already mapping song structures, generating sounds, playing percussion, and correcting pitch whenever human singers dare to add something of their own.

As AI gets smarter – and that’s inevitable, even if we can quibble about timing – I fear that music will get dumber. Even more fantastically, what happens when performing avatars decide to generate their own versions of popular songs?

Next thing you know, those AIs will want the revenue checks.

Will An AI Monkey Ever Replicate Shakespeare?

Is hoping for consciousness to emerge from an ever-more complicated AI like waiting for a monkey to create a verbatim copy of Hamlet?

I think it might be, though the toffs building and profiting from AI believe otherwise.

Fei-Fei Li, an AI pioneer, recently wrote in The Economist that she believes that teaching AIs to recognize things and the contexts in which they’re found – to “see,” quite literally – is not only the next step that will allow machines to reach statistically reliable conclusions, but that those AIs will

…have the spatial intelligence of humans…be able to model the world, reason about things and places, and interact in both time and 3D space.

Such “large world models” are based on object recognition, which means giving AIs examples of the practically infinite ways, say, a certain chair might appear in different environments, distances, angles, lighting, and other variables, and then code ways for them to see similarities among differently constructed chairs.

From all that data, they’ll somehow grasp form, or the Platonic ideal of chair-ness, because they won’t just recognize but understand it…and not rely on word models to pretend that they do. Understanding suggests awareness of presence.

It’s a really cool idea, and there’s scientific evidence outside of the AI cheering section to support it.

A recent story in Scientific American explained that bioscience researchers are all but certain that human beings don’t need language to think. It’s obvious that animals don’t need words to assess situations and accomplish tasks, and experiments have revealed that the human brain regions associated with thought and language processing are not only different but don’t have reliably obvious intersections.

So, Descartes and Chomsky got it backwards: I am, therefore I think is more like it.

Or maybe not.

What’s the code in which thought is executed? And where is the awareness that is aware of its thinking located, and how does it function?

Nobody knows.

I have long been fascinated by the human brain’s capacity to capture, correlate, access, and generate memories and the control functions for our bodies’ non-autonomic actions. How does it store pictures or songs? The fact that I perceive myself somehow as its operator has prompted thousands of years of religion and myth in hopes of explaining it.

Our present-day theology of brains as computers provides an alternative and comforting way of thinking about the problem, but little in the way of an explanation.

If human language is like computer code, then what’s the medium for thought in either machine? Is spatial intelligence the same thing as recognition and awareness, or is the former dependent on language as the means of communications (both internally, so it can be used to reach higher-order conclusions, as well as externally, so that information can be transmitted to others)?

And, if that mysterious intersection of thought and language is the algorithm for intelligence, is it reasonable to expect that it will somehow emerge from processing an unknown critical threshold of images?

Or is it about as likely as a monkey randomly replicating every word of a Shakespearean play?

Ms. Li says in her article that belief in that emergence of intelligence is already yielding results in computer labs and that we humans will be the beneficiaries of that evolution.

For an essay that appeared in The Economist’s annual “The World Ahead” issue, shouldn’t there have been a companion piece pointing out the possible inanity of waiting for her techno-optimist paean to come true?

More to the point, how about an essay questioning why anybody would think that a monkey replicating Shakespeare was a good idea?

AI is a Tulip Crossed With An Edsel?

Assume for a moment that every naysayer is exactly right, and AI is the biggest, dumbest economic and social bubble in the history of big, dumb bubbles. Its promises are part tulip and part Edsel. A fad mated with a clunker.

It’s still going to erase our existing way of life.

The change is certain, as it’s built into the purpose and applications of AI that we already use as well as imagine. The only uncertainty is when we – the Great Unwashed whom tech titan and AI profiteer Eric Schmidt recently called “normal people” – will realize that our world is fundamentally and irrevocably different.

The change is not going to become apparent in specific corporate revenue or earnings reports.

Oddly, academics and consultants are still struggling to find dollars-and-cents proof of the payoff from swapping out paper ledgers for digital tools, even though such digitalization has been underway for almost a quarter century.

What evades their view is that digitalization has already fundamentally changed businesses and the markets and services that support them (suppliers of capital, materiel, and workforces, for starters). Decisions are better and made faster. Systems run more efficiently and reliably, and problems are identified sooner and often before they even happen.

This transformation touches everyone so profoundly that few people even recognize how different things are today from a generation ago.

The change from AI will also not become apparent from some “aha” announcement that a robot has become conscious (the toffs call the achievement “AGI,” for Artificial General Intelligence). We can’t even explain how consciousness functions or where it resides in humans.

We’re “self-aware,” but does that require a self to observe ourselves? After a few thousand years of research and debate, the answer only gets clear after a night of drinking or smoking weed (and promptly disappears the next morning).

AI researchers operate under the false assumption that their machines are silicon versions of brains and that something magical will happen when they make said machines complicated enough to do magic.

But AI doesn’t need consciousness or AGI to transform the world, any more than we humans need it to function within it. Most decisions require relevant information, criteria for assessing it, and context in which to place it, full stop.

Deep pondering about the meaning of work or parenting (or just getting through the day)? It happens, but often after-hours and with the assistance of the aforementioned booze or weed.

Ditto for the merits of waiting for the next publicity stunt at which a robot walks on two legs, has two arms, and conjures up memories of a terminator. AI doesn’t need to wait for a body – or ever possess one – to get things done.

Just ask HAL9000.

No, the AI erasure of the old world and its replacement with a new one is happening, and will keep happening, gradually, with evidence of its progress hiding in the oddest ways and places.

For instance, here’s a recent story reporting that a factory robot was coded to try and convince a dozen other robots to go on strike.

It succeeded.

Here’s a story about researchers who believe we should prepare to give AIs rights, including that of survival and well-being. Their research paper is here.

And then there’s Eric Schmidt, who appeared at one of many learned conferences at which learned people harumph their way through convoluted and stillborn narratives about AI, to say that we “normal people” aren’t prepared for what AI is going to do to…I mean for…us.

Maybe AI, seen as akin to the tulip or South Sea crazes, or as an imperfect technical tool like the Edsel or laser disc, is indeed our era’s latest bubble, or bubbles.

I think the difference is that AIs aren’t going away; rather, they’re going to keep popping up in the strangest and often most interesting places, many of which will evade our attention or understanding.

So, even if the naysayers are right, they, and we, can’t see how AI is erasing our world and replacing it with something new.