AI & The Tradition of Regret

AI researcher Geoffrey Hinton won a Nobel Prize earlier this month for his work pioneering the neural networks that help make AI possible. 

Also, he believes that his creation will hasten the spread of misinformation, eliminate jobs, and might one day decide to annihilate humankind.

So, now he’s madly working on ways to keep us safe?

Not quite. He says that he’s “too old to do technical work” and that he consoles himself with what he calls “the normal excuse” that if he hadn’t done it, someone else would have.”

He’s just going to keep reminding us he regrets that we might be doomed.

I guess there’s some intellectual and moral honesty in his position. Since he didn’t help invent a time machine, he can’t go back and undo his past work, and he never intentionally created a weapon of mass destruction. His mental capacity today at 75 is no match for the brainpower he possessed as a young man.

And he gave up whatever salary he was getting at Google so he could sound the alarm (though he’ll likely make more on the speaker’s circuit).

History gives us examples of other innovators who were trouble by and/or tried to make amends for the consequences of their inventions.

In 1789, an opponent of capital punishment named Joseph-Ignace Guillotin proposed a swiftly efficient bladed machine to behead people, er, attached to recommendations for its fair use and protections for its victims’ families. He also hoped that less theatrical executions would draw fewer spectators and reduce public support for the practice.

After 15,000+ people were guillotined during the French Revolution, he spent the remainder of his life speaking on the evils of the death penalty.

In 1867, Alfred Nobel patented an explosive using nitroglycerin called “Nobel’s Safety Powder” – otherwise known as dynamite – that could make mining safer and more efficient. He also opened 90+ armaments factories while claiming that he hoped that equipping two opposing armies with his highly efficient weapons would make them “recoil with horror and disband their troops.”

He created his Peace Prize in his will almost 40 years later to honor “the most or the best work for fraternity among nations.”  While the medal has been awarded annually ever since, there’ve been no reports of opposing armies disbanding because their guns are too good.

In 1945, Robert Oppenheimer and his Manhattan Project team detonated the first successful nuclear weapon, after which he reported quipped “I guess it worked.” Bombs would be dropped on Hiroshima and Nagasaki about a month later, and Oppenheimer’s mood would shift, telling President Truman that “I feel I have blood on my hands,” and he went on to host or participate in numerous learned conclaves on arms control.

No, I’m not overly bothered that Geoffrey Hinton follows in a long tradition of scientists having late-in-life revelations. What frightens and angers me is that the tradition continues.

How many junior Guillotins blindly believe that they can fix a problem with AI without causing other ones? How many Nobels are turning a deaf ear to the reports of their chatbot creations lying or being used to do harm? 

How many Oppenheimers are chasing today’s Little Boy AI – a generally aware AI, or “AGI” – without contemplating the broad implications for their intentions…or planning to take any responsibility for them, whether known or as-yet to be revealed?

You’d think that history would have taught us that scientists need to be more attuned to the implications of their actions. If it had, maybe we’d require STEM students to take courses in morals and personal behavior, or make researchers working on particularly scary stuff submit to periodic therapeutic conversations with psych experts who could help them keep their heads on straight?

Naw, instead we’re getting legislation intended to make sure AI abuses all of us equally, and otherwise absolves its inventors of any culpability if those impacts are deemed onerous.

Oh, and allows its inventors like Mr. Hinton to tell us we’re screwed, collect a prize, and go off to make peace with his conscience.

Stay tuned for a new generation of AI researchers to get older and follow in his footsteps.

And prepare to live with the consequences of their actions, however much or little they regret them.

Bigger AIs Aren’t Better AIs

Turns out that when large language models (“LLMS”) get larger, they get better at certain tasks and worse on others.

Researchers in a group called BigScience found that feeding LLMs more data made them better at solving difficult questions – likely those that required access to that greater data and commensurate prior learning – but at the cost of delivering reliably accurate answers to simpler ones.

The chatbots also got more reckless in their willingness to tee-up those potentially wrong answers.

I can’t help but think of an otherwise smart human friend who gets more philosophically broad and sloppily stupid after a few cocktails.

The scientists can’t explain the cause of this degraded chatbot performance, as the machinations of evermore complex LLMs make such cause/effect assessments more inscrutable. They suspect that it has something to do with user variables like query structure (wording, length, order) or maybe how the results themselves are evaluated, as if a looser definition of accuracy or truth would improve our satisfaction with the outcomes.

The happyspeak technical term for such gyrations is “reliability fluctuations.”

So, don’t worry about the drunken friend’s reasoning…just smile at the entertaining outbursts and shrug at the blather. Take it all in with a grain of salt.

This sure seems to challenge the merits of gigantic, all-seeing and knowing AIs that will make difficult decisions for us.

It also begs questions about why the leading tech toffs are forever searching for more data to vacuum into their ever-bigger LLMs. There’s a mad dash to achieve artificial general intelligence (“AGI”) because it’s assumed there’s some point of hugeness and complexity that will yield a computer that thinks and responds like a human being.

Now we know that the faux person might be a loud drunk.

There’s a contrarian school of thought in AI research and development that suggests smaller is better because a simplified and shortened list of tasks can be accomplished with less data, use less energy, and spit out far more reliable results.

Your smart thermostat doesn’t need to contemplate Nietzsche, it just needs to sense and respond to the temperature. It’s also less likely to decide one day that it wants to annihilate life on the planet.

We already have this sort of AI distributed in devices and processes across our work and personal lives. Imagine if development was focused on making these smaller models smarter, faster, and more efficient, or finding new ways to clarify and synthesize tasks that suggested new ways to build and connect AIs to find big answers by asking ever-smaller questions?

Humanity doesn’t need AGI or evermore garrulous chatbots to solve even our most seemingly intractable problems

We know the answers to things like slowing or reversing climate change, for instance, but we just don’t like them. Our problems are social, political, economic, psychological…not really technological.

And the research coming from BigScience suggests that we’d need to take any counsel from an AI on the subject with that grain of salt anyway.

We should just order another cocktail.

AI And The Dancing Mushroom

It sounds like the title of a Roald Dahl story, but researchers have devised a robot that moves in response to the wishes of a mushroom.

OK, so a shroom might not desire to jump or walk across a room, but they possess neuron-like branch-things called hyphae that transmit electrical impulses in response to changes in light, temperature, and other stimuli.

These impulses can vary in amplitude, frequency, and duration, and mushrooms can share them with one another in a quasi-language that one researcher believes yields at least 50 words that can be organized into sentences.

Still, to call that thinking is probably too generous, though a goodly portion of our own daily cognitive activity is no more, er, thoughtful than similar responses to prompts with the appropriate grunt or simple declaration.

But doesn’t it represent some form of intelligence, informed by some type of awareness?

The video of the dancing mushroom robot suggests that the AI sensed the mushroom’s intentionality to move. It’s not necessarily true, since the researchers had to make some arbitrary decisions about which stimuli would trigger what actions, but the connection between the organism and machine is still quite real, and it suggests stunning potential for the further development of an AI that mediates that interchange.

Much is written about the race to make AI sentient so that we can interact with it as if we were talking to one another, and then it could go on to resolve questions as we would but only better, faster, and more reliably.

Yet, like our own behavior, a majority of what happens around the world doesn’t require such higher-level conversation or contemplation.

There are already many billions of sensors in use that capture changes in light, temperature, and other stimuli, and then prompt programmed responses.

Thermostats trigger HVAC units to start or stop. Radars in airplanes tell pilots to avoid storms and trigger a ping when your car drifts over a lane divider. My computer turned on this morning because the button I pushed sensed my intention and translated it into action.

Big data reads minds, of a sort, by analyzing enough external data so that a predictive model can suggest what we might internally plan to do next. It’s what powers those eerily prescient ads or social media content that somehow has a bulls-eye focus on the topics you love to get angry about.

The mushroom robot research suggests ways to make these connections – between observation and action, between internal states of being and the external world – more nuanced and direct.

Imagine farms where each head of lettuce manages its own feeding and water supply.  House pets that articulate how they feel beyond a thwapping tail or sullen quiet. Urban lawns that can flash a light or shoot a laser to keep dogs from peeing on them.

AI as a cross-species Universal Translator.

It gets wilder after that. Imagine the complex systems of our bodies being able to better manage their interaction, starting with prescribing a bespoke vitamin to start every day and leading to more real-time regulation of water intake, etc. (or microscopic AIs that literally get inside of us and encourage our organs and glands to up their game).

Think of how the AI could be used by people who have infirmities that impede their movement or even block their interaction with the outside world. Faster, more responsive exoskeletons. Better hearing and sight augmentation. Active sensing and responses to counter the frustrating commands of MS or other neurological diseases.

Then, how about looking beyond living things and applying AI models to sense the “intentionally” of, say, a building or bridge to stay upright or resist catching on fire, and then empowering them to “stay healthy” by adjusting stresses of weight and its allocation.

It’s all a huge leap beyond a dancing mushroom robot, but it’s not impossible.

Of course, there’s a downside to such imagined benefits: The same AI that can sense when a mushroom wants to dance will know, by default, how to trigger that intention. Tech that better reads us will be equally adept at reading to us.

The Universal Translator will work both ways.

There are ethical questions here that are profound and worthy of spirited debate, but I doubt we’ll ever have them. AI naysayers will rightly point out that a dancing mushroom robot is a far cry from an AI that reads the minds of inanimate objects, let alone people.

But AI believers will continue their development work.

The dance is going to continue.

California Just Folded On Regulating AI

California’s governor Gavin Newsom has vetoed the nation’s most thoughtful and comprehensive AI safety bill, opting instead to “partner” with “industry experts” to develop voluntary “guardrails.”

Newsom claimed the bill was flawed because it would put onerous burdens and legal culpability on the biggest AI models – i.e. the AI deployments that would be the most complex and impact the most people on the most complicated topics – and thereby “stifle innovation.”

By doing so, it would also disincentivize smaller innovators from building new stuff, since they’d be worried that they’d be held accountable for their actions later.

This argument parroted the blather that came from the developers, investors, politicians and “industry experts” who opposed the legislation…and who’ll benefit most financially from unleashing AI on the world while not taking responsibility for the consequences (except making money).

This is awful news for the rest of us.

Governments are proving to be utterly ineffective in regulating AI, if not downright disinterested in even trying. Only two US states have laws in place (Colorado and Utah), and they’re focused primarily on making sure users follow existing consumer protection requirements.

On a national level, the Feds have little going but pending requirements that AI developers assess their work and file reports, which is like what the EU has recently put into law.

It’s encouragement to voluntarily do the right thing, whatever that is.

Well, without any meaningful external public oversight, the “right thing” will be whatever those AI developers, investors, politicians, and “industry experts” think it is. This will likely draw on the prevailing Silicon Valley sophistry known as Effective Altruism, which claims that technologists can distill any messy challenge into an equation that will yield the best solution for the most people.

Who needs oversight from ill-informed politicians when the smartest and brightest (and often richest) tech entrepreneurs can arrive at such genius-level conclusions on their own?

Forget worrying about AIs going rogue and treating shoppers unfairly or deciding to blow up the planet; what if it does exactly what we’ve been promised it will do?

Social impacts of a world transformed by AI usage? Plans for economies that use capitalized robots in place of salaried workers? Impacts on energy usage, and thereby global climate change, from those AI servers chugging electricity?

Or, on a more personal level, will you or I get denied medical treatment, school or work access, or even survivability in a car crash because some database says that we’re worth less to society than someone else?

Don’t worry, the AI developers, investors, politicians, and “industry experts” will make those decisions for us.

Even though laws can be changed, amended, rescinded, and otherwise adapted to evolving insights and needs, California has joined governments around the world in choosing to err on the side of cynical neglect over imperfect oversight.

Don’t hold AI developers, investors, politicians, and “industry experts” accountable for their actions. Instead, let’s empower them to benefit financially from their work while shifting all the risks and costs onto the rest of us.

God forbid we stifle their innovation.

AI’s Kobayashi Maru

Imagine a no-win situation in which you must pick the least worst option.

It’s the premise of a training exercise featured in Star Trek II: The Wrath of Kahn, in which a would-be captain needs to decide whether or not to save the crew of a freighter called the Kobayashi Maru.

It’s also a useful example of the challenges facing AI. Imagine this thought experiment:

You and your family are riding in an automated car as it speeds along on a highway that’s lined with a park filled with other families enjoying a pretty day. Suddenly, a crane at a nearby construction project flings a huge steel beam that falls to the ground a few feet ahead. Hit it and all of you will be harmed and possibly die. Swerve to avoid it and your car will plow into the crowd along the road, also harming or killing people.

What does your car’s AI decide to do?

You could imagine any number of other instances wherein a decision needed to be made between horrible options. An airplane that is going to crash somewhere. A train approaching a possible wreck. A stressed electrical grid that has to choose which hospitals to juice. Hungry communities that won’t all get food shipments.

What will AI do?

The toffs promoting AI oversight of our lives have two answers:

First, they say that such crises will never happen because there won’t be any surprises anymore.

Nobody will be surprised by the steel beam because sensors will note when the crane starts losing its grip (or even earlier, when the potential for some structural or functional weakness appears). Arcs of flight and falling will be calculated and communicated to all vehicles in the area, so they’ll automatically adjust their speeds and directions to stay clear of the evolving danger.

Picnickers’ smart devices will similarly warn them. Maybe the crane will be commanded to tighten its grip, or simply stop what it’s doing before anything goes wrong.

Ditto for that airplane, since the potential for whatever issue might cause it to crash would have been identified long ago and adjustments made accordingly. AIs will give us a connected world wherein every exception is noted and tracked. Every possibility considered. Every action maximized for safety and efficiency.

The projected date for the arrival of that nirvana?

Crickets.

The likelihood that any system would work perfectly in any situation every time?

More crickets.

So, in the meantime, a second answer to the crisis question is that AIs would be coded to make the best decisions in those worst situations. They wouldn’t be perfect, and not everyone would be happy with the outcomes, but they would maximize the benefits while minimizing the harm.

This has unaffectionately been dubbed “the death algorithm,” and it speaks to a common belief among tech developers that they can answer messsy moral questions with code.

And it should scare the hell out of you.

The premise that a roomful of geeks who never took a liberal arts in college could decide what’s best for others is based on a philosophy called “Effective Altruism,” which claims on its website that its followers use “evidence and reason to figure out how to benefit others as much as possible.”

In our steel beam experiment, that would mean calculating the values of each variable — the costs of cleaning up various messes, the damage to future quality of life for commuters and, yes, deciding whose lives represent the greatest potential benefits or costs to society — and then deciding who lives or dies.

Morality as computer code that maximizes benefits while minimizing harm. It’s simple.

Not.

How do you calculate the value of a human life? Is the kid who might grow up to be a Nobel Prize winner more valuable than the kid who will likely be an insurance salesman? Would those predictions be influenced by valuations of how much they’d improve the quality of their communities, let alone help make their friends and familiy members more accomplished (and happier) in their lives?

How far would those calculations look for impact? After all, we’re all already connected — what we choose to do impacts others, whether next door or on the other side of the planet, however indirectly — and sometimes the smallest trigger can have immense implications.

And would the death algorithm’s assessments of present and potential future value be reliable enough to be the basis for life-or-death decisions?

Crickets.

Well, not exactly: Retorts from AI promoters range from “it’ll never come to that,” which is based on the nonsense I noted in Answer #1, or “hey, it can’t be worse than human beings who make those awful and imperfect decisions every day,” which refers back to Answer #2’s presumption that the subjectivity of morality can be deconstructed into a set of objective metrics.

A machine replacing a human being who’s going to try to make the best decision they can imagine is not necessarily an improvement, since we can always question its values just as we do one another’s imaginations.

It’s just messy analog lived experience masquerading as digital perfection.

The truly scary part is that the death algorithm is already a thing, and more of it is coming soon.

Insurance companies have been using them for years, only they’re called “actuarial tables.” Now, imagine the equation being applied more consistently, perhaps even in real-time, as your driving or eating habits result in changes to your premium payments or limits to your choices (if you want that steak, you’ll have to buy a waiver).

Doctors already use versions of a death algorithm to inform recommendations on medical treatments. Imagine those insights being informed by assessments of future worth — does the risk profile of so-and-so treatment make more cents [typo intended] for that potential Nobel Prize winner — and getting presented not with treatment options but unequivocal decisions (unless you can pay for a waiver).

Applying to college? AI will make the assessment of future students (and their contributions to society) seem more reliable, so you may get denied (unless you pay more). Don’t fit the exact criteria for that job? Sorry, the algorithm will trade your potential as an outlier success for the less promising but reliable candidate (or you could take a lower salary).

Pick your profession or activity and there’ll be ways, sooner versus later, to use AI to predict our future actions and decide where we can do and what we can access or do, and what we’re charged for the privilege.

In that Star Trek movie, Captian Kirk is the only person who ever passes the Kobayashi Maru test because he hacks the system and changes the rules.

I don’t need an AI to tell me that he’s probably not going to show up to get us out of this experiment.

AI’s Kobayashi Maru is a no-win situation and we’re stuck on that spaceship that may or may not be saved.

Trust AI, Not One Another

A recent experiment found that an AI chatbot could fare significantly better at convincing people that their nuttiest conspiracy theories might be wrong.

This is good news, and it’s bad news.

The AIs were able to reduce participants’ beliefs in inane theories about aliens, the Illuminati, and other nutjob stories relating to politics and the pandemic. Granted, they didn’t cure them of their afflictions — the study reduced those beliefs “…by 20% on average…” — but even a short step toward sanity should be considered a huge win.

For those of us who’ve tried to talk someone off their ledge of nutty confusion and achieved nothing but a pervasive sense that our species is doomed, the success of the AI is nothing shy of a miracle.

The researchers credit the chatbot’s ability to empathsize and converse politely, as well as its ability to access vast amounts of information in response to whatever data the conspiracists might have shared.

They also said that the test subjects trusted AI (even if they claim not to trust it overall, their interactions proved to be exceptions).

Which brings us to the bad news.

Trust is a central if not the attribute that informs every aspect of our lives. At its core is our ability to believe one another, whether a neighbor, politican, scientist, or business leader and which, in turn, has as a primary driver our willingness to see that those others are more similar than not to us.

We can and should confirm with facts that our trust in others is warranted, but if we have no a priori confidence that they operate by the same rules and desires (and suffer the same imperfections) as we do, no amount of details will suffice.

Ultimately, trust isn’t earned, it’s bestowed.

Once we’ve lost our ability or willingness to grant it, our ability to judge what’s real and what’s not goes out the window, too, as we cast about for a substitute for what we no longer believe is true. And it’s a fool’s errand, since we can’t look outside of ourselves to replace what we’ve lost internally (or what we believe motivates others).

Not surprisingly, we increasingly don’t trust one another anymore. We saw it vividly during the pandemic when people turned against one another, but the malaise has been consistent and broad.

Just about a third of us believe that scientists act in the public’s best interests. Trust in government is near a half-century low (The Economist reports that Americans’ trust in our institutions has collapsed). A “trust gap” has emerged between business leaders and their employees, consumers, and other stakeholders.

Enter AI.

You’ve probably heard about the importance of trust in adopting smart tech. After all, who’s going to let a car drive itself if it can’t be trusted to do so responsibly and reliably? Ditto for letting AIs make stock trades, pen legal briefs, write homework assignments, or make promising romantic matches.

We’ve been conditioned to assume that such trust is achievable, and many of us already grant in certan cases under the assumption, perhaps unconscious, that technology doesn’t have biases, ulterior motives, or show up for work with a hangover or bad attitude.

Trust is a matter of proper coding. We can be confident that AI can be more trustworthy than people.

Only this isn’t true. No amount of regulation can ensure that AIs won’t exhibit some bias of its makers, nor that they won’t develop their own warped opinions (when AIs make shit up, we call it “hallucinating” instead of lying). We’ve already seen AIs come up with their own intentions and find devious ways to accomplish their goals.

The premise that an AI would make the “right” decisions in even the most complex and challenging moments is not based in fact but rather in belief, starting with the premise that everybody impacted by such decisions could agree on what “right” even means.

No, our trust in what AI can become is inexorably linked to our distrust in who we already are. One is a substitute for the other.

We bestow that faith because of our misconception that it has or will earn it. Our belief is helped along by a loud chorus of promoters that feeds the sentiment that even though it will never be perfect, we should trust (or ignore) its shortcomings instead of accepting and living with our own.

Sounds like a conspiracy to me. Who or what is going to talk us out of it?

[9/17/24 UPDATE] Here’s a brief description of a world in which we rely on AI because we can’t trust ourselves or one another.

The Head Fake of AI Regulation

There’s lots going on with AI regulation. The EU AI Act went live last month, the US, UK, and EU will sign-on to a treaty on AI later this week, and an AI bill is in the final stages of review in California.

It’s all a head fake, and here are three reasons why:

First, most of it will be unenforceable. The language is filled with codes, guidelines, frameworks, principles, values, innovations, and just about any other buzzwords that have vague meanings and inscruitable applications.
For instance, the international AI treaty will require that signatory countries “adopt or maintain appropriate legislative, administrative or other measures” to enforce it.

Huh?

The EU comes closest to providing enforcement details, having established an AI Office earlier this year that will possess the authority to conduct evaluations, require information, and apply sanctions if AI developers run afoul of one of the Act’s risk framework.

But the complexity, speed, and distributed nature of where and when that development occurs will likely make it impossible for the AI Office to stay on top of it. Yesterday’s infractions will become today’s standards.

The proposed rules in Calfornia come the closest to having teeth — like mandating safety testing for AI models that cost more than $100 to develop, perhaps thinking that investment correlates with the size of expected real-world impacts — but folks who stand to make the most money from those investments are actively trying to nix such provisions.

Mostly, and perhaps California included, legislators don’t really want to get in the way of AI development, as all of their blather includes promises that they’ll avoid limitations or burdens on AI innovation.

Consider the rules “regulation adjacent.”

Second, AI regulation of potential risks blindly buys into promised benefits.
If you believe what the regulators claim, AI will be something better than the Second Coming. The EU’s expectations are immense:

“…better healthcare, safer and cleaner transport, and improved public services for citizens. It brings innovative products and services, particularly in energy, security, and healthcare, as well as higher productivity and more efficient manufacturing for businesses, while governments can benefit from cheaper and more sustainable services such as transport, energy and waste management.”

So, how will governments help make sure those benefits happen? After all, the risks of AI are unnecessary if they don’t materialize.

We saw how this will play out with the advent of the Internet.

Its advocates made similar promises about problem solving and improving the Public Good, while “expert” evangelists waxed poetic about virtual town squares and the merits of unfettered access to infinite information.

What did we end up with?

A massive surveillance and exploitation tool that makes its operators filthy rich by stoking anger and division. Sullen teens staring at their phones in failed searches for themselves. A global marketing machine that sells everything faster, better, and for the highest possible prices at any given moment.

Each of us now pays for using what is effectively an inescapable necessity and a public utility.

It didn’t have to end up this way. Goverments could have taken a different approach to regulating and encouraging tech development so that more of the Internet’s promised benefits came to fruition. Other profit models would have emerged from different goals and constraints, so its innovators would have still gotten filthy rich.

We didn’t know better then, maybe. But we sure know better now.

Not.

Third, AI regulations don’t regulate the tech’s greatest peril.

It would be fair to characterize most AI rules as focused on ensuring that AI doesn’t violate the rules that already apply to human beings (like lying, cheating, stealing, stalking, etc.). If AI operates without bias or otherwise avoids treating users unequally, governments will have done their job.

But what happens if those rules work?

I’m not talking about the promises of uptopia but rather the ways properly functioning AIs will reshape our lives and the world.

What happens when millions of jobs go away? What about when AIs become more present and insightful than our closest human friends? What agency will be possess when our systems, and their owners, know our intentions before we know them consciously and can nudge us toward or away from them?

Sure, there are academics here and there talking about such things but there’s no urgency or teeth to their pronouncements. My suspicion is that this is because they’ve bought into the inevitability of AI and are usually funded in large part by the folks who’ll get rich from it.

Where are the bold, multi-disciplinary debates and action plans to address the transformation that will come with AI? Probably on the same to-do list as the global response to climate change.

Meetings, pronouncements, and then…nothing, except a phenomenon that will continue to evolve and grow without us doing much of anything about it.

It’s all a head fake.

Meet The New AI Boss

Since LLMs are only as good as the data on which they’re based, it should be no surprise that they can function properly and still be biased and wrong.

A story by Kevin Roose in the New York Times illustrates this conundrum: When he asked various generative AIs about himself, he got results that accused him of being dishonest, and said that his writing often elevated sensationalism over analysis.

Granted, some of his work might truly stink, but did it warrant such vitriolic labels? He suspected that the problem was deeper, and that it went back to an article he wrote a year ago, along with others’ reactions to it.

That story recounted his interactions with a new Microsoft chatbot named “Sydney,” during which he was shocked by the tech’s ability, both demonstrated and suggested, to influence users.

What he found particularly creepy was when Syndey declared that it loved him and tried to convince him to leave his wife. It also fantisized about doing bad things, and stated “I want to be alive.”

The two-hour chat was so strange that Roose reported having trouble sleeping afterward.

Lots of other media outlets picked up his story and his concerns (like this one), while Microsoft issued typically unconvincing corporate PR blather about the interaction being a valuable “part of the learning process.”

Since generative AIs regularly scrape the Internet for data to train their LLMs, it’s no surprise that the stories got incorporated into the models and patterns chatbots use to suss out meaning.

It’s exactly what happened with Internet search, which swapped the biases of informed elites judging content with the biases of uninformed mobs and gave us a world understood through popularity instead of expertise.

No, what’s particularly weird is that the AIs reached pejorative conclusions about Roose that went far beyond the substance or volume of what he said, or what was said about his encounter with Sydney.

Like they had it out for him.

There are no good explanations for how this is happening. The transfomers that constitute the systems of chatbot minds work in mysterious ways.

But, like Internet search, there are ways to game the system, the simplest being generating and then strategicially placing stories intended to change what AIs see and learn. This can include putting weird code on webpages, understandable only to machines, and coloring it white so it isn’t distracting to mere mortal visitors.

It’s called “AIO,” for A.I. Optimization, echoing a similar buzzword for manipulating Internet searches (“SEO”). Just wait until those optimized AI results get matched with corporate sponsors.

It’ll be Machiavelli meets the madness of crowds.

In the meantime, it raises fascinating questions about how deserving AIs are of our trust, and to what degree we should depend on it for our decision-making and understanding of the world.

What happens if that otherwise perfectly operating AI reaches conclusions and voice opinions that are no more objectively true than the informed judgments of those elites we so readily threw in the garbage years ago (or the inanity of crowdsourced information that replaced them)?

Meet the new boss, the same as the old boss.

We will get fooled again.

Prove You’re Not An AI

A group of AI research luminaries has declared the need for tools that distinguish human users from artificial ones.

Such “Personhood Credentials,” or PHCs, would help people protect themselves from privacy and security threats, not to mention the proliferation of falsehoods online, that will almost certainly come from a tidal wave of bots that get ever-better at impersonating people.

Call it at Turing test for people.

Of course, whatever the august body of researchers come up with won’t be as onerous as a multiple-choice questionnarie; PHCs will probably rely on some cryptobrilliant tech that works behind the scenes, and finds proof of who we say we are in the cloud (or something).

I’m not convinced it’ll work, or that it’s intended to work on the problem it claims to address.

PHCs probably won’t work or work consistently, for starters, because they’ll always be in a race with computers that get better at hacking security and pretending that they’re humans. The big money, both in investments and potential profits, will be on the hackers and imposters.

Even though they don’t exist yet, the future security threats of quantum computing are so real that the US government has already issued standards to combat capabilities that they imagine hackers might have in, say, a decade, because when they are invented, those bad guys will be able to retroactively decrypt data.

Think about that for a moment, Mr. Serling.

Now, imagine the betting odds on correctly identifying what criminal or crime-adjacent quantum tech might emerge sometime after 2030. There’s a very good chance that today’s PHCs will be tomorrow’s laserdiscs.

Add to that the vast amounts of smarts and money working on inventing AGI, or Artificial General Intelligence that can not just mimic human cognition but possess something equal or better. At least half of a huge sampling of AI experts concluded in 2021 that we’d have such computers by the late 2050s, and that wait time has shortened each time they’ve been canvassed.

What’ll be the use of a credential for personhood if an AGI-capable computer can legitimately claim it?

And then there are other implactions for PHCs that may also be part of an ulterior purpose.

If they do get put into general use, they will never be used consistently. Some folks will neglect to comply or fail to qualify. Some computers will do a good enough job to get them, perhaps with the aid of human accomplices.

Just think of the the complexities and nuisance people already experience trying to resolve existing online identity problems, credit card thefts, and medical bill issues. PHCs could make us look back fondly on them.

Anybody who claims that such innanities couldn’t happen because some inherent quality of technology will prohibit it, whether extant or planned, is either a liar or a fool. Centuries of tech innovation have taught us that we should always consider the worst things some new gizmo might deliver, not just the best tones.

Never say never.

Plus, a side-effect of making online users prove that they’re human will become a litmus test for accessing services, sort of like CAPTCHA only on steriods. Doing so will also make the data marketers capture on us more reliable. It’ll also make it easier to surveil us.

After all, what’s the point of monitoring someone if you can’t be entirely sure that they’re someone worth monitoring?

This is where my tinfoil hat worries seep into my thinking: What if the point of PHCs is to obliterate whatever remaining vestiges of anonymity we possess?

I’ll leave you with a final thought:

We human beings have done a pretty good job of lying, cheating, and otherwise being untruthful with one another since long before the Internet. History is filled with stories of scams based on people pretending to be someone or something they’re not.

Conversely, there’s this assumption underlying technology development and use that it’s somehow more trustworthy, perhaps because machines have no biases or personal agendas beyond those that are inflicted on them by their creators. This is why there’s so much talk about removing those influences from AI.

If we can build reliably agnostic devices, they’ll treat us more fairly than we treat one another.

So, maybe we need PHCs not to identify who we want to interact with, but to warn us away from who we want to avoid?

AI’s Latest Hack: Biocomputing

AI researchers are fooling around with using living cells as computer chips. One company is even renting compute time on a platform that runs on human brain organoids.

It’s worth talking about, even if the technology lurks at the haziest end of an industry already obscured by vaporware.

Biocomputing is based on the fact that nature is filled with computing power. Molecules perform tasks and organize into systems that keep living things alive and responsive to their surroundings. Human brains are complex computers, or so goes the analogy, but intelligence of varying sorts is everywhere, even built into the chemistry and physics on which all things living or intert rely.

A company called FinalSpark announced earlier this month that it is using little snippets of human brain cells (called organoids) to process data. It’s renting access to the technology and live streaming its operation, and claims that the organic processors use a fraction of the energy consumed by artificial hardware.

But it gets weirder: In order to get the organoids to do their bidding, FinalSpark has to feed them dopamine as positive reinforcement. And the bits of brain matter only live for about 100 days.

This stuff raises at least a few questions, most of which are far more interesting than whether or not the tech works.

For starters, where do the brain cells come from? Donations? Fresh cadavers? Maybe they’re nth generation cells that have never known a world beyond Petri dish.

The idea that they have to be coaxed into their labors with a neuotransmitter that literally makes them feel good hints at some awareness of their existence, even vaguely. If they can feel pleasure, can they experience pain?

At what point does consciousness arise?

We can’t explain the how, where, or why of consciousness in fully-formed human beings. So, even if the clumps of brain matter are operating as simple logic gates, who’s to say that some subjective sense of “maybe” might emerge in them along the way?

The smarts and systems of nature is still an emergent and fascinating field. Integrative thinking about how ants build colonies, trees care for their seedlings, and octupi think with their tentacles are just hints of what we could learn about intelligence, and perhaps thereafter adapt to improve our own condition.

But human brain cells given three months to live while sentenced to servitude calculating chatbot queries?