Will An AI Monkey Ever Replicate Shakespeare?

Is hoping for consciousness to emerge from an ever-more complicated AI like waiting for a monkey to create a verbatim copy of Hamlet?

I think it might be, though the toffs building and profiting from AI believe otherwise.

Fei-Fei Li, an AI pioneer, recently wrote in The Economist that teaching AIs to recognize things and the contexts in which they’re found – to “see,” quite literally – is not only the next step toward machines reaching statistically reliable conclusions, but that those AIs will

…have the spatial intelligence of humans…be able to model the world, reason about things and places, and interact in both time and 3D space.

Such “large world models” are based on object recognition, which means giving AIs examples of the practically infinite ways, say, a certain chair might appear across different environments, distances, angles, lighting, and other variables, and then coding ways for them to see similarities among differently constructed chairs.
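To make that concrete, here’s a minimal, purely illustrative sketch of what such variation-heavy training can look like in practice: a tiny image classifier fed photos of chairs that are randomly cropped, flipped, rotated, and re-lit so the model sees the “same” chair under many conditions. The folder layout, class names, and model choice are my assumptions for illustration, not anything from Li’s essay.

```python
# Illustrative sketch only: teach a small classifier to recognize "chair"
# across many appearances by augmenting each photo with random crops, flips,
# rotations, and lighting changes. Paths, labels, and model are hypothetical.
import torch
from torchvision import datasets, transforms, models

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.4, 1.0)),   # vary distance/framing
    transforms.RandomHorizontalFlip(),                      # vary viewpoint
    transforms.RandomRotation(25),                          # vary angle
    transforms.ColorJitter(brightness=0.5, contrast=0.5),   # vary lighting
    transforms.ToTensor(),
])

# Assumed folder layout: photos/chair/..., photos/not_chair/...
dataset = datasets.ImageFolder("photos", transform=augment)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(num_classes=2)   # a tiny stand-in, not a "world model"
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

for images, labels in loader:            # one illustrative pass over the data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Note what the sketch actually does: it builds statistical tolerance for variation. Whether piling on more of that ever adds up to grasping “chair-ness” is exactly the question at hand.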

From all that data, they’ll somehow grasp form, or the Platonic ideal of chair-ness, because they won’t just recognize it but understand it…and not rely on word models to pretend that they do. Understanding suggests awareness of presence.

It’s a really cool idea, and there’s scientific evidence outside of the AI cheering section to support it.

A recent story in Scientific American explained that bioscience researchers are all but certain that human beings don’t need language to think. It’s obvious that animals don’t need words to assess situations and accomplish tasks, and experiments have revealed that the human brain regions associated with thought and language processing are not only different but don’t have reliably obvious intersections.

So, Descartes and Chomsky got it backwards: “I am, therefore I think” is more like it.

Or maybe not.

What’s the code in which thought is executed? And where, let alone how, does the awareness that is aware of its own thinking reside?

Nobody knows.

I have long been fascinated by the human brain’s capacity to capture, correlate, access, and generate memories and the control functions for our bodies’ non-autonomic actions. How does it store pictures or songs? The fact that I perceive myself somehow as its operator has prompted thousands of years of religion and myth in hopes of explaining it.

Our present-day theology of brains as computers provides an alternative and comforting way of thinking about the problem but offers little in the way of an explanation.

If human language is like computer code, then what’s the medium for thought in either machine? Is spatial intelligence the same thing as recognition and awareness, or is the former dependent on language as the means of communication (both internally, so it can be used to reach higher-order conclusions, and externally, so that information can be transmitted to others)?

And, if that mysterious intersection of thought and language is the algorithm for intelligence, is it reasonable to expect that it will somehow emerge from processing an unknown critical threshold of images?

Or is it about as likely as a monkey randomly replicating every word of a Shakespearean play?

Ms. Li says in her article that this belief in emergent intelligence is already yielding results in computer labs and that we humans will be the beneficiaries of that evolution.

For a piece that appeared in The Economist’s annual “The World Ahead” issue, shouldn’t there have been a companion essay pointing out the possible inanity of waiting for her techno-optimist paean to come true?

More to the point, how about an essay questioning why anybody would think that a monkey replicating Shakespeare was a good idea?

AI is a Tulip Crossed With An Edsel?

Assume for a moment that every naysayer is exactly right, and AI is the biggest, dumbest economic and social bubble in the history of big, dumb bubbles. Its promises are part tulip and part Edsel. A fad mated with a clunker.

It’s still going to erase our existing way of life.

The change is certain, as it’s built into the purpose and applications of the AI that we already use as well as imagine. The only uncertainty is when we – the Great Unwashed whom tech titan and AI profiteer Eric Schmidt recently called “normal people” – will realize that our world is fundamentally and irrevocably different.

The change is not going to become apparent in specific corporate revenue or earnings reports.

Oddly, academics and consultants are still struggling to find proof in dollar signs of the benefits of swapping out paper ledgers for digital tools, even though such digitalization has been underway for almost a quarter century.

What evades their view is that digitalization has already fundamentally changed businesses and the markets and services that support them (suppliers of capital, materiel, and workforces, for starters). Decisions are better and made faster. Systems run more efficiently and reliably, and problems are identified sooner and often before they even happen.

This transformation touches everyone so profoundly that few people even recognize how different things are today from a generation ago.

The change from AI will also not become apparent from some “aha” announcement that a robot has become conscious (the toffs call the achievement “AGI,” for Artificial General Intelligence). We can’t even explain how consciousness functions or where it resides in humans.

We’re “self-aware,” but does that require a self to observe ourselves? After a few thousand years of research and debate, the answer only gets clear after a night of drinking or smoking weed (and promptly disappears the next morning).

AI researchers operate under the false assumption that their machines are silicon versions of brains and that something magical will happen when they make said machines complicated enough to do magic.

But AI doesn’t need consciousness or AGI to transform the world, any more than we humans need it to function within it. Most decisions require relevant information, criteria for assessing it, and context in which to place it, full stop.

Deep pondering about the meaning of work or parenting (or just getting through the day)? It happens, but often after-hours and with the assistance of the aforementioned booze or weed.

Ditto for the merits of waiting for the next publicity stunt at which a robot walks on two legs, has two arms, and conjures up memories of a Terminator. AI doesn’t need to wait for a body – or ever possess one – to get things done.

Just ask HAL 9000.

No, the AI erasure of the old world and its replacement with a new one is happening, and will keep happening, gradually, with evidence of its progress hiding in the oddest ways and places.

For instance, here’s a recent story reporting that a factory robot was coded to try to convince a dozen other robots to go on strike.

It succeeded.

Here’s a story about researchers who believe we should prepare to give AIs rights, including that of survival and well-being. Their research paper is here.

And then there’s Eric Schmidt, who appeared at one of many learned conferences at which learned people harrumph their way through convoluted and stillborn narratives about AI, to say that we “normal people” aren’t prepared for what AI is going to do to…I mean for…us.

Maybe AI, which we currently see as akin to the tulip or South Sea crazes, or as an imperfect technical tool like the Edsel or the laser disc, is indeed our era’s latest bubble, or bubbles.

I think the difference is that AIs aren’t going away; rather, they’re going to keep popping up in the strangest and often most interesting places, many of which will evade our attention or understanding.

So, maybe the naysayers are right. Either way, they, and we, can’t see how AI is erasing our world and replacing it with something new.

AI Moves In On Poetry

Not only can AI write in the style of famous poets, but it can also improve on their work, according to this story.

Researchers at the University of Pittsburgh asked ChatGPT-3.5 to create poems that would appear to have come from writers like Chaucer, Shakespeare, Dickinson, and Plath. Then, they shuffled the results with actual poems from those artists and asked non-expert readers – i.e., people who didn’t necessarily read poetry much – which ones were written by humans and which ones they liked.

The results weren’t dramatic, but a trend did emerge: readers thought the AI poems had been written by people, and they preferred them to the real thing. Here is the research abstract.

The deck was (and will always be) stacked against humans when it comes to competing with AI that can train on what artists say and do.

For starters, AI can understand a methodology, style, and tone better than any person by accessing more direct and related data, and by compiling processes into explicit guidelines that artists only know implicitly and incompletely.

Shakespeare had no “Writing Shakespeare For Dummies” to follow. He couldn’t mimic himself as well as an AI could even if he tried.

This also means that the AI poems in the study didn’t cover new ground in terms of ideas, per se, but rather presented variations on existing themes. It’s as if Walt Whitman’s poetry was the draft for the AI to refine and present. Doing something better isn’t the same thing as doing something different.

Intentionality also played a part in the research, as we can debate what Allen Ginsberg or Dorothea Lasky wanted to communicate in their poems because, well, metaphors, analogies, nuances, and sometimes outright cognitive dissonance are legit tools of the trade.

Poems can “say” many things and/or say them in ways that aren’t easily grasped, which is probably why many of the test participants weren’t poetry readers.

The AI used in the research had no such complex relationship with its art or audience; in fact, it possessed a bias toward synthesizing and delivering ideas in ways that made them more understandable.

Think translator more than creator.

AI-generated content that appears to be “in the style of” other content is already a fact of life in news and the arts, copyright lawsuits notwithstanding. But this research shows that there’s truly no way to differentiate it from what’s real or, more importantly, that the line between real and unreal is either blurred or no longer exists.

Already, this content is appearing in Internet search and informing not only what people think but the models on which future LLMs train. The job market for poets has gone from utterly horrible to something worse.

The market for those historic poets is also going to change.

I’m just waiting for someone to find a previously unknown Shakespeare play that passes every conceivable litmus test and is accepted as real. New readers will find it easier to read than his other works, and it will change how experts understand his evolution as a playwright. While we have always reinterpreted history, and should, when the basic facts of past events become fluid, we have little to, well, stand on.

While Shakespeare will still “say” things to us from across the ages, we’ll be listening to an AI.

Once AI Controls the Past…

We talk about what AI might do to the future without noting that it’s already taking control of our past.

New Scientist reports that about 1 in 20 Wikipedia pages contain AI-written content. The number is probably much higher, considering how long LLMs have been scraping and preserving online content without discriminating between human and machine origins.

Those LLMs then use Wikipedia to further train their models.

There’s no need to worry, according to an expert on Wikipedia quoted in the New Scientist article: the site’s tradition of human-centric editing and monitoring will yield new ways to detect AI-generated content and at least ensure its accuracy.

They will fail or, more likely, the battle has already been lost.

We have been conditioned to prefer information that’s vetted anonymously and/or broadly, especially when it comes to history. Groups of people who once controlled analyses and presentation of the past were inherently biased, or so the so-called logic goes, and their expertise blinded them to details and conclusions that we expected to see in their work.

History wasn’t customer friendly.

So, the Internet gave us access to an amorphous crowd that would somehow aggregate and organize stuff and, if we didn’t like what they presented, gave us the capacity to add our two cents to the conversation.

Wikipedia formalized this theology into a service that draws on a million users as editors, though little more than a tenth of them make regular contributions (according to Wikipedia). Most studies suggest that their work is pretty accurate, at least no less so than the output from the experts they displaced.

But isn’t that because they rely on that output in the first place?

Wiki’s innovation, like file sharing in the 90s, is less about substance than presentation and availability.

Now, consider that AI is being used to generate the content that Wiki’s lay editors use to form their opinions, even providing them with attributions to sources that appear reputable. Imagine those AI insights aren’t always right, or completely so, but since they can be generated and shared by machines much more quickly than humans could manage, the content seems broadly consistent.

Imagine that this propagation of content is purposefully false, so that a preponderance of sites emerge that claim the Holocaust didn’t happen, or that sugar is good for you. It won’t matter whether humans or machines are behind such campaigns, because there’s no way we’ll ever know.

And then meet the new Wikibot that passes the Turing Test with ease and makes changes to the relevant Wiki pages (ChatGPT probably passed it last year). Other Wikibots concur while the remaining human editors don’t see reason to disagree, either with the secretly artificial editors or their cited sources.

None of this requires a big invention or leap of faith.

AI already controls our access to information online, as everything we’re shown on our phones and computer screens is curated by algorithms intended to nudge us toward a particular opinion, purchase decision and, most of all, a need to return to said screens for subsequent direction.

I wonder how much of our history, especially the most recent years in which we’ve begun to transition to a society in which our intelligence and agency are outsourced to AI, will survive over time.

Will AI-written history mention that we human beings weren’t necessarily happy losing our jobs to machines? Will it describe how AI destroyed entire industries while enriching a very few beyond belief?

Will we be able to turn to it for remembrance of all the things we lost when it took over our lives, or just find happyspeak entries that repeat the sales pitches of benefits from its promoters?

In his novel “1984,” George Orwell wrote:

“Who controls the past controls the future. Who controls the present controls the past.”

I wonder if AI will make that quote searchable 20 years from now.

Pooh-Pooh AI At Our Own Risk

AI pioneer Yann LeCun says that fears of the existential peril of AI are “complete B.S.” and that, in so many words, we’d be fools not to pursue its development.

The Wall Street Journal story is hidden behind a paywall but the headline says it all: “This AI Pioneer Thinks AI Is Dumber Than a Cat.”

We cat owners are not comforted by his assessment, since we know that “dumb” felines are capable of great mischief, exhibit mood shifts that would put any human being under lock and key, and can be downright cruel and deadly to other living things (especially those smaller than them).

They’re proof that doing harm requires no superior intellect. If AIs are like cats, every chatbot and smart system or device should be shut down immediately. The risks are too immense to ignore.

But I digress.

LeCun’s point is that we shouldn’t feel threatened by AI, a position that hasn’t changed since this story ran on the BBC website over a year ago.

Only back then, he said AI would only get as smart as a rat.

What a difference a year makes.

His POV is still a nice aggregation of all the Pollyanna crap that we get from the investors and developers who hope to make billions from selling AI, or from the academics they fund to legitimize their mad intentions.

[NOTE: LeCun is Chief AI Scientist at Meta, which is one of the big tech companies vying to get more powerful AI to the marketplace faster].

Worried that AI will destroy the world? Tsk-tsk, according to the BBC story, it’ll never come up with a reason to do it and we’ll always be able to turn it off.

Scared that AI will get smarter than us? That’s the point, you dolt, and we’ll use it to solve all our problems. Fearful that it’ll take everyone’s jobs? Naw, it’ll inaugurate a new era of job creation that we can’t even imagine.

We have nothing to fear other than the unknown and, as with the development of any other technology, we’ll devise ways to manage it safely once we know what AI can do.

He pooh-poohs AI at our own risk. None of what he says is true, or the whole truth.

He rightly says that we can’t predict what AI will do, but that means we may be surprised by its capabilities and might not have the time or ability to contain them. Presuming that we’ll always possess an off switch is a canard, since it assumes that we’ll know when it’s time to turn something off, or that the problem(s) we hope to avert are the product of a particular device or system.

It also presumes that someone other than a techno-optimist like LeCun won’t be the one with his or her hand on the switch.

The promise that AI will somehow solve all our “big” problems assumes that our problems are technical or even solvable, but they’re not. Whether global or personal, we lack the political, economic, and individual willpower to make the lifestyle changes that we already know are necessary to combat global warming or the incidence of cancer.

Relying on technologies to solve our problems means relying on technologists to define them in the first place, which is a dicey proposition. Remember that social media was supposed to “fix” our problems with collaborating and speaking in the public square, and it instead gave us lives spent in suspicious and angry isolation.

The supercomputers in Colossus: The Forbin Project decide to “fix” the problem of the Cold War by taking away control of government from humans.

And it’s silly to suggest that new jobs will magically appear for the people who lose theirs to AI, since we already know LeCun believes it’s impossible to predict what AI will and won’t do. What if no jobs appear, or it takes generations for them to do so, which is what happened to the knitting jobs displaced by looms during the Industrial Revolution? What happens to those ex-workers in the meantime?

What if the jobs that we can’t imagine now turn out to stink compared to the old ones, or simply pay less? Again, this is what happened to many workers during the Industrial Revolution, especially women.

What if those unemployed workers are unqualified for those new jobs, or don’t live anywhere near them? Are we to assume that the governments and individuals that have proven incapable and/or unwilling to do things about our problems today will somehow grow the backbones to do things about them in the future?

And, finally, AI isn’t just another technology tool: it gets smarter and more aware over time, depending on the propensities and wherewithal of its developers and the data available in the environments of its applications.

Good luck as a human worker hoping to stay ahead of AIs’ capabilities. The jobs upheaval will be a never-ending race that people will only win temporarily, if ever.

Saying otherwise isn’t just a hopeful misstatement, it’s a lie.

But it’s what the AI toffs are telling us so that they can pursue their innovation and profit-making fantasies without the encumbrance of legal or moral guidelines.

They pooh-pooh AI at our own risk.

AI? Let’s Go All In!

Google’s ex-CEO Eric Schmidt believes that AI’s growing demands for electricity will outpace any preventative measures to reduce harm to the environment (extraction, carbon emissions, etc.), that our mitigation efforts aren’t going to work anyway, and that trying would risk “constraining” AI development.

So, we should go “all in” on destroying the planet now, and instead bet that AI will figure out how to save it sometime later on, according to his remarks at some “expert” meeting in DC earlier this month.

I’m all for optimism and I’m hopeful that minds greater than mine will find ways to save me from some of Fate’s cruelty or my own ineptitude, whether those minds are organic or artificial.

Mr. Schmidt has every right to make his personal problems worse because of some vague belief that doing so will enable their solution, but passing off that fantasy as a viable public policy option is irresponsible, at best, and willful deceit, more likely.

Why deceit? Well, he has much to gain from unrestrained AI development, both through an arms developer he founded last year to develop AI-powered drones and his likely investments in technology companies building less overt agents of death.

He also subscribes to a Silicon Valley theology called “Effective Altruism,” which posits that rich, smart tech nerds such as himself have the capacity (and not just the power) to make decisions in the best interests of the rest of us. He has funded lots of organizations, scholarships, and other activities to promote it.

This is how I get to the “willful deceit” analysis.

He knows that people can walk and chew gum at the same time, and that there’s no public policy that doesn’t have to address multiple needs and often competing interests. The idea that building AI and fighting climate change is a binary choice is simply not true; we can and should do both at the same time, with one pursuit informing and at times mitigating the other.

He also knows that relying on AI to do any specific thing at a specific time is a fool’s errand. This is especially true when it comes to solving particularly huge and complicated problems, and climate change is perhaps the hugest and most complex problem we can imagine.

Solving it won’t just require a description of the solution but a series of solutions, most or all of which will themselves be huge and complicated and rely on people, communities, and institutions doing two things at the same time (or more).

And there’s no guarantee that there’ll even be a viable fix by the time the commensurately smart and electrically well-fed AI comes online to offer it. If there were some Big Data model that specified, with a dependable level of certainty, the delivery of a climate change fix and confirmed our ability and willingness to implement it, well, that would have been a nice addition to Mr. Schmidt’s comments.

But it doesn’t exist. He’s “all in” on a wish wrapped in a hope inside a fantasy.

I am regularly dumbfounded by the level of blather and nonsense that passes for “expert” opinion on AI, especially when it comes to how it will impact our lives and world. The wrong people are dictating the wrong terms for our public discourse about AI. We should not be surprised when we reach the wrong conclusions, or when we’re told that they were inevitable.

Why there aren’t more of us standing up when we’re told this dreck and yelling “What the fuck are you talking about?” is beyond me.

Instead, we have to rely on “experts” like Mr. Schmidt telling us that sometime in the distant future, as we waft clouds of carbon from our eyes and gasp for our next breath, an AI will magically appear and tell us what we should do to save the planet.

What if it tells us that we never should have spent all that money and time making AI and destroying the planet in the first place?

Mr. Schmidt will have long since died a very rich man.

And that’s the only certainty he’s banking on.

AI & The Tradition of Regret

AI researcher Geoffrey Hinton won a Nobel Prize earlier this month for his work pioneering the neural networks that help make AI possible. 

Also, he believes that his creation will hasten the spread of misinformation, eliminate jobs, and might one day decide to annihilate humankind.

So, now he’s madly working on ways to keep us safe?

Not quite. He says that he’s “too old to do technical work” and that he consoles himself with what he calls “the normal excuse”: that if he hadn’t done it, someone else would have.

He’s just going to keep reminding us he regrets that we might be doomed.

I guess there’s some intellectual and moral honesty in his position. Since he didn’t help invent a time machine, he can’t go back and undo his past work, and he never intentionally created a weapon of mass destruction. His mental capacity today at 76 is no match for the brainpower he possessed as a young man.

And he gave up whatever salary he was getting at Google so he could sound the alarm (though he’ll likely make more on the speaker’s circuit).

History gives us examples of other innovators who were troubled by and/or tried to make amends for the consequences of their inventions.

In 1789, an opponent of capital punishment named Joseph-Ignace Guillotin proposed a swift, efficient bladed machine to behead people, er, attached to recommendations for its fair use and protections for its victims’ families. He also hoped that less theatrical executions would draw fewer spectators and reduce public support for the practice.

After 15,000+ people were guillotined during the French Revolution, he spent the remainder of his life speaking on the evils of the death penalty.

In 1867, Alfred Nobel patented an explosive using nitroglycerin called “Nobel’s Safety Powder” – otherwise known as dynamite – that could make mining safer and more efficient. He also opened 90+ armaments factories while claiming that he hoped that equipping two opposing armies with his highly efficient weapons would make them “recoil with horror and disband their troops.”

He created his Peace Prize in his will nearly three decades later to honor “the most or the best work for fraternity among nations.” While the prize has been awarded regularly ever since, there’ve been no reports of opposing armies disbanding because their guns are too good.

In 1945, Robert Oppenheimer and his Manhattan Project team detonated the first successful nuclear weapon, after which he reportedly quipped, “I guess it worked.” Bombs would be dropped on Hiroshima and Nagasaki about a month later, and Oppenheimer’s mood would shift; he told President Truman that “I feel I have blood on my hands,” and he went on to host or participate in numerous learned conclaves on arms control.

No, I’m not overly bothered that Geoffrey Hinton follows in a long tradition of scientists having late-in-life revelations. What frightens and angers me is that the tradition continues.

How many junior Guillotins blindly believe that they can fix a problem with AI without causing other ones? How many Nobels are turning a deaf ear to the reports of their chatbot creations lying or being used to do harm? 

How many Oppenheimers are chasing today’s Little Boy AI – a generally aware AI, or “AGI” – without contemplating the broad implications of their intentions…or planning to take any responsibility for them, whether known or as-yet unrevealed?

You’d think that history would have taught us that scientists need to be more attuned to the implications of their actions. If it had, maybe we’d require STEM students to take courses in morals and personal behavior, or make researchers working on particularly scary stuff submit to periodic therapeutic conversations with psych experts who could help them keep their heads on straight?

Naw, instead we’re getting legislation intended to make sure AI abuses all of us equally, and otherwise absolves its inventors of any culpability if those impacts are deemed onerous.

Oh, and allows inventors like Mr. Hinton to tell us we’re screwed, collect a prize, and go off to make peace with their consciences.

Stay tuned for a new generation of AI researchers to get older and follow in his footsteps.

And prepare to live with the consequences of their actions, however much or little they regret them.

Bigger AIs Aren’t Better AIs

Turns out that when large language models (“LLMs”) get larger, they get better at certain tasks and worse at others.

Researchers in a group called BigScience found that feeding LLMs more data made them better at solving difficult questions – likely those that required access to that greater data and commensurate prior learning – but at the cost of delivering reliably accurate answers to simpler ones.

The chatbots also got more reckless in their willingness to tee-up those potentially wrong answers.

I can’t help but think of an otherwise smart human friend who gets more philosophically broad and sloppily stupid after a few cocktails.

The scientists can’t explain the cause of this degraded chatbot performance, as the machinations of ever-more complex LLMs make such cause-and-effect assessments more inscrutable. They suspect that it has something to do with user variables like query structure (wording, length, order), or maybe with how the results themselves are evaluated, as if a looser definition of accuracy or truth would improve our satisfaction with the outcomes.

The happyspeak technical term for such gyrations is “reliability fluctuations.”

So, don’t worry about the drunken friend’s reasoning…just smile at the entertaining outbursts and shrug at the blather. Take it all in with a grain of salt.

This sure seems to challenge the merits of gigantic, all-seeing and knowing AIs that will make difficult decisions for us.

It also raises questions about why the leading tech toffs are forever searching for more data to vacuum into their ever-bigger LLMs. There’s a mad dash to achieve artificial general intelligence (“AGI”) because it’s assumed there’s some point of hugeness and complexity that will yield a computer that thinks and responds like a human being.

Now we know that the faux person might be a loud drunk.

There’s a contrarian school of thought in AI research and development that suggests smaller is better, because a simplified and shortened list of tasks can be accomplished with less data, less energy, and far more reliable results.

Your smart thermostat doesn’t need to contemplate Nietzsche; it just needs to sense and respond to the temperature. It’s also less likely to decide one day that it wants to annihilate life on the planet.

We already have this sort of AI distributed in devices and processes across our work and personal lives. Imagine if development were focused on making these smaller models smarter, faster, and more efficient, or on finding ways to clarify and synthesize tasks so that connected AIs could find big answers by asking ever-smaller questions.

Humanity doesn’t need AGI or ever-more garrulous chatbots to solve even our most seemingly intractable problems.

We know the answers to things like slowing or reversing climate change, for instance, but we just don’t like them. Our problems are social, political, economic, psychological…not really technological.

And the research coming from BigScience suggests that we’d need to take any counsel from an AI on the subject with that grain of salt anyway.

We should just order another cocktail.

AI And The Dancing Mushroom

It sounds like the title of a Roald Dahl story, but researchers have devised a robot that moves in response to the wishes of a mushroom.

OK, so a shroom might not desire to jump or walk across a room, but mushrooms possess neuron-like branch-things called hyphae that transmit electrical impulses in response to changes in light, temperature, and other stimuli.

These impulses can vary in amplitude, frequency, and duration, and mushrooms can share them with one another in a quasi-language that one researcher believes yields at least 50 words that can be organized into sentences.

Still, to call that thinking is probably too generous, though a goodly portion of our own daily cognitive activity is no more, er, thoughtful than responding to similar prompts with the appropriate grunt or simple declaration.

But doesn’t it represent some form of intelligence, informed by some type of awareness?

The video of the dancing mushroom robot suggests that the AI sensed the mushroom’s intention to move. That’s not necessarily true, since the researchers had to make some arbitrary decisions about which stimuli would trigger what actions, but the connection between organism and machine is still quite real, and it suggests stunning potential for the further development of an AI that mediates that interchange.
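For a sense of where those arbitrary decisions live, here’s a minimal sketch, using made-up numbers, of the kind of mapping the researchers had to choose: sample the hyphae’s voltage, count spikes above a threshold, and translate the spike rate into a motion command. The threshold, the rate cutoffs, and the command names are hypothetical stand-ins, not the actual experimental code.

```python
# Illustrative sketch: translating mushroom electrical activity into robot motion.
# Every number and name here is an assumed stand-in, not the researchers' code.
from dataclasses import dataclass

SPIKE_THRESHOLD_MV = 0.5   # assumed amplitude that counts as a "spike"
WINDOW_SECONDS = 10.0      # how long we listen before choosing an action

@dataclass
class Command:
    action: str    # "hold", "step", or "dance"
    speed: float   # arbitrary 0..1 scale

def count_spikes(samples_mv: list[float]) -> int:
    """Count rising-edge threshold crossings in a window of voltage samples."""
    spikes, above = 0, False
    for v in samples_mv:
        if v >= SPIKE_THRESHOLD_MV and not above:
            spikes += 1
        above = v >= SPIKE_THRESHOLD_MV
    return spikes

def choose_command(samples_mv: list[float]) -> Command:
    """Map spike rate to motion -- these cutoffs are the 'arbitrary decisions.'"""
    rate = count_spikes(samples_mv) / WINDOW_SECONDS
    if rate < 0.2:
        return Command("hold", 0.0)
    if rate < 1.0:
        return Command("step", min(rate, 1.0))
    return Command("dance", 1.0)

# Made-up readings: a moderate burst of activity maps to a "step" command.
readings = [0.1, 0.6, 0.2, 0.7, 0.1, 0.8, 0.3, 0.9, 0.2, 0.6, 0.1, 0.7]
print(choose_command(readings))
```

All the interesting judgment lives in those cutoffs, which is why “the AI sensed the mushroom’s intention” is a generous way of describing what is really a mapping someone had to pick.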

Much is written about the race to make AI sentient so that we can interact with it as if we were talking to one another, and then it could go on to resolve questions as we would but only better, faster, and more reliably.

Yet, like our own behavior, a majority of what happens around the world doesn’t require such higher-level conversation or contemplation.

There are already many billions of sensors in use that capture changes in light, temperature, and other stimuli, and then prompt programmed responses.

Thermostats trigger HVAC units to start or stop. Radar in airplanes tells pilots to avoid storms, and sensors trigger a ping when your car drifts over a lane divider. My computer turned on this morning because the button I pushed sensed my intention and translated it into action.

Big data reads minds, of a sort, by analyzing enough external data so that a predictive model can suggest what we might internally plan to do next. It’s what powers those eerily prescient ads or social media content that somehow has a bulls-eye focus on the topics you love to get angry about.

The mushroom robot research suggests ways to make these connections – between observation and action, between internal states of being and the external world – more nuanced and direct.

Imagine farms where each head of lettuce manages its own feeding and water supply.  House pets that articulate how they feel beyond a thwapping tail or sullen quiet. Urban lawns that can flash a light or shoot a laser to keep dogs from peeing on them.

AI as a cross-species Universal Translator.

It gets wilder after that. Imagine the complex systems of our bodies being able to better manage their interaction, starting with prescribing a bespoke vitamin to start every day and leading to more real-time regulation of water intake, etc. (or microscopic AIs that literally get inside of us and encourage our organs and glands to up their game).

Think of how the AI could be used by people who have infirmities that impede their movement or even block their interaction with the outside world. Faster, more responsive exoskeletons. Better hearing and sight augmentation. Active sensing and responses to counter the frustrating commands of MS or other neurological diseases.

Then, how about looking beyond living things and applying AI models to sense the “intentionality” of, say, a building or bridge to stay upright or resist catching fire, and then empowering them to “stay healthy” by adjusting the stresses of weight and its allocation.

It’s all a huge leap beyond a dancing mushroom robot, but it’s not impossible.

Of course, there’s a downside to such imagined benefits: The same AI that can sense when a mushroom wants to dance will know, by default, how to trigger that intention. Tech that better reads us will be equally adept at reading to us.

The Universal Translator will work both ways.

There are ethical questions here that are profound and worthy of spirited debate, but I doubt we’ll ever have them. AI naysayers will rightly point out that a dancing mushroom robot is a far cry from an AI that reads the minds of inanimate objects, let alone people.

But AI believers will continue their development work.

The dance is going to continue.

California Just Folded On Regulating AI

California’s governor Gavin Newsom has vetoed the nation’s most thoughtful and comprehensive AI safety bill, opting instead to “partner” with “industry experts” to develop voluntary “guardrails.”

Newsom claimed the bill was flawed because it would put onerous burdens and legal culpability on the biggest AI models – i.e. the AI deployments that would be the most complex and impact the most people on the most complicated topics – and thereby “stifle innovation.”

By doing so, it would also disincentivize smaller innovators from building new stuff, since they’d be worried that they’d be held accountable for their actions later.

This argument parroted the blather that came from the developers, investors, politicians and “industry experts” who opposed the legislation…and who’ll benefit most financially from unleashing AI on the world while not taking responsibility for the consequences (except making money).

This is awful news for the rest of us.

Governments are proving to be utterly ineffective in regulating AI, if not downright uninterested in even trying. Only two US states have laws in place (Colorado and Utah), and they’re focused primarily on making sure users follow existing consumer protection requirements.

On a national level, the Feds have little going beyond pending requirements that AI developers assess their work and file reports, which is similar to what the EU has recently put into law.

It’s encouragement to voluntarily do the right thing, whatever that is.

Well, without any meaningful external public oversight, the “right thing” will be whatever those AI developers, investors, politicians, and “industry experts” think it is. This will likely draw on the prevailing Silicon Valley sophistry known as Effective Altruism, which claims that technologists can distill any messy challenge into an equation that will yield the best solution for the most people.

Who needs oversight from ill-informed politicians when the best and brightest (and often richest) tech entrepreneurs can arrive at such genius-level conclusions on their own?

Forget worrying about AIs going rogue and treating shoppers unfairly or deciding to blow up the planet; what if they do exactly what we’ve been promised they will do?

Social impacts of a world transformed by AI usage? Plans for economies that use capitalized robots in place of salaried workers? Impacts on energy usage, and thereby global climate change, from those AI servers chugging electricity?

Or, on a more personal level, will you or I get denied medical treatment, school or work access, or even survivability in a car crash because some database says that we’re worth less to society than someone else?

Don’t worry, the AI developers, investors, politicians, and “industry experts” will make those decisions for us.

Even though laws can be changed, amended, rescinded, and otherwise adapted to evolving insights and needs, California has joined governments around the world in choosing to err on the side of cynical neglect over imperfect oversight.

Don’t hold AI developers, investors, politicians, and “industry experts” accountable for their actions. Instead, let’s empower them to benefit financially from their work while shifting all the risks and costs onto the rest of us.

God forbid we stifle their innovation.