AI’s Kobayashi Maru

Imagine a no-win situation in which you must pick the least worst option.

It’s the premise of a training exercise featured in Star Trek II: The Wrath of Kahn, in which a would-be captain needs to decide whether or not to save the crew of a freighter called the Kobayashi Maru.

It’s also a useful example of the challenges facing AI. Imagine this thought experiment:

You and your family are riding in an automated car as it speeds along on a highway that’s lined with a park filled with other families enjoying a pretty day. Suddenly, a crane at a nearby construction project flings a huge steel beam that falls to the ground a few feet ahead. Hit it and all of you will be harmed and possibly die. Swerve to avoid it and your car will plow into the crowd along the road, also harming or killing people.

What does your car’s AI decide to do?

You could imagine any number of other instances wherein a decision needed to be made between horrible options. An airplane that is going to crash somewhere. A train approaching a possible wreck. A stressed electrical grid that has to choose which hospitals to juice. Hungry communities that won’t all get food shipments.

What will AI do?

The toffs promoting AI oversight of our lives have two answers:

First, they say that such crises will never happen because there won’t be any surprises anymore.

Nobody will be surprised by the steel beam because sensors will note when the crane starts losing its grip (or even earlier, when the potential for some structural or functional weakness appears). Arcs of flight and falling will be calculated and communicated to all vehicles in the area, so they’ll automatically adjust their speeds and directions to stay clear of the evolving danger.

Picnickers’ smart devices will similarly warn them. Maybe the crane will be commanded to tighten its grip, or simply stop what it’s doing before anything goes wrong.

Ditto for that airplane, since the potential for whatever issue might cause it to crash would have been identified long ago and adjustments made accordingly. AIs will give us a connected world wherein every exception is noted and tracked. Every possibility considered. Every action maximized for safety and efficiency.

The projected date for the arrival of that nirvana?

Crickets.

The likelihood that any system would work perfectly in any situation every time?

More crickets.

So, in the meantime, a second answer to the crisis question is that AIs would be coded to make the best decisions in those worst situations. They wouldn’t be perfect, and not everyone would be happy with the outcomes, but they would maximize the benefits while minimizing the harm.

This has unaffectionately been dubbed “the death algorithm,” and it speaks to a common belief among tech developers that they can answer messsy moral questions with code.

And it should scare the hell out of you.

The premise that a roomful of geeks who never took a liberal arts in college could decide what’s best for others is based on a philosophy called “Effective Altruism,” which claims on its website that its followers use “evidence and reason to figure out how to benefit others as much as possible.”

In our steel beam experiment, that would mean calculating the values of each variable — the costs of cleaning up various messes, the damage to future quality of life for commuters and, yes, deciding whose lives represent the greatest potential benefits or costs to society — and then deciding who lives or dies.

Morality as computer code that maximizes benefits while minimizing harm. It’s simple.

Not.

How do you calculate the value of a human life? Is the kid who might grow up to be a Nobel Prize winner more valuable than the kid who will likely be an insurance salesman? Would those predictions be influenced by valuations of how much they’d improve the quality of their communities, let alone help make their friends and familiy members more accomplished (and happier) in their lives?

How far would those calculations look for impact? After all, we’re all already connected — what we choose to do impacts others, whether next door or on the other side of the planet, however indirectly — and sometimes the smallest trigger can have immense implications.

And would the death algorithm’s assessments of present and potential future value be reliable enough to be the basis for life-or-death decisions?

Crickets.

Well, not exactly: Retorts from AI promoters range from “it’ll never come to that,” which is based on the nonsense I noted in Answer #1, or “hey, it can’t be worse than human beings who make those awful and imperfect decisions every day,” which refers back to Answer #2’s presumption that the subjectivity of morality can be deconstructed into a set of objective metrics.

A machine replacing a human being who’s going to try to make the best decision they can imagine is not necessarily an improvement, since we can always question its values just as we do one another’s imaginations.

It’s just messy analog lived experience masquerading as digital perfection.

The truly scary part is that the death algorithm is already a thing, and more of it is coming soon.

Insurance companies have been using them for years, only they’re called “actuarial tables.” Now, imagine the equation being applied more consistently, perhaps even in real-time, as your driving or eating habits result in changes to your premium payments or limits to your choices (if you want that steak, you’ll have to buy a waiver).

Doctors already use versions of a death algorithm to inform recommendations on medical treatments. Imagine those insights being informed by assessments of future worth — does the risk profile of so-and-so treatment make more cents [typo intended] for that potential Nobel Prize winner — and getting presented not with treatment options but unequivocal decisions (unless you can pay for a waiver).

Applying to college? AI will make the assessment of future students (and their contributions to society) seem more reliable, so you may get denied (unless you pay more). Don’t fit the exact criteria for that job? Sorry, the algorithm will trade your potential as an outlier success for the less promising but reliable candidate (or you could take a lower salary).

Pick your profession or activity and there’ll be ways, sooner versus later, to use AI to predict our future actions and decide where we can do and what we can access or do, and what we’re charged for the privilege.

In that Star Trek movie, Captian Kirk is the only person who ever passes the Kobayashi Maru test because he hacks the system and changes the rules.

I don’t need an AI to tell me that he’s probably not going to show up to get us out of this experiment.

AI’s Kobayashi Maru is a no-win situation and we’re stuck on that spaceship that may or may not be saved.

Trust AI, Not One Another

A recent experiment found that an AI chatbot could fare significantly better at convincing people that their nuttiest conspiracy theories might be wrong.

This is good news, and it’s bad news.

The AIs were able to reduce participants’ beliefs in inane theories about aliens, the Illuminati, and other nutjob stories relating to politics and the pandemic. Granted, they didn’t cure them of their afflictions — the study reduced those beliefs “…by 20% on average…” — but even a short step toward sanity should be considered a huge win.

For those of us who’ve tried to talk someone off their ledge of nutty confusion and achieved nothing but a pervasive sense that our species is doomed, the success of the AI is nothing shy of a miracle.

The researchers credit the chatbot’s ability to empathsize and converse politely, as well as its ability to access vast amounts of information in response to whatever data the conspiracists might have shared.

They also said that the test subjects trusted AI (even if they claim not to trust it overall, their interactions proved to be exceptions).

Which brings us to the bad news.

Trust is a central if not the attribute that informs every aspect of our lives. At its core is our ability to believe one another, whether a neighbor, politican, scientist, or business leader and which, in turn, has as a primary driver our willingness to see that those others are more similar than not to us.

We can and should confirm with facts that our trust in others is warranted, but if we have no a priori confidence that they operate by the same rules and desires (and suffer the same imperfections) as we do, no amount of details will suffice.

Ultimately, trust isn’t earned, it’s bestowed.

Once we’ve lost our ability or willingness to grant it, our ability to judge what’s real and what’s not goes out the window, too, as we cast about for a substitute for what we no longer believe is true. And it’s a fool’s errand, since we can’t look outside of ourselves to replace what we’ve lost internally (or what we believe motivates others).

Not surprisingly, we increasingly don’t trust one another anymore. We saw it vividly during the pandemic when people turned against one another, but the malaise has been consistent and broad.

Just about a third of us believe that scientists act in the public’s best interests. Trust in government is near a half-century low (The Economist reports that Americans’ trust in our institutions has collapsed). A “trust gap” has emerged between business leaders and their employees, consumers, and other stakeholders.

Enter AI.

You’ve probably heard about the importance of trust in adopting smart tech. After all, who’s going to let a car drive itself if it can’t be trusted to do so responsibly and reliably? Ditto for letting AIs make stock trades, pen legal briefs, write homework assignments, or make promising romantic matches.

We’ve been conditioned to assume that such trust is achievable, and many of us already grant in certan cases under the assumption, perhaps unconscious, that technology doesn’t have biases, ulterior motives, or show up for work with a hangover or bad attitude.

Trust is a matter of proper coding. We can be confident that AI can be more trustworthy than people.

Only this isn’t true. No amount of regulation can ensure that AIs won’t exhibit some bias of its makers, nor that they won’t develop their own warped opinions (when AIs make shit up, we call it “hallucinating” instead of lying). We’ve already seen AIs come up with their own intentions and find devious ways to accomplish their goals.

The premise that an AI would make the “right” decisions in even the most complex and challenging moments is not based in fact but rather in belief, starting with the premise that everybody impacted by such decisions could agree on what “right” even means.

No, our trust in what AI can become is inexorably linked to our distrust in who we already are. One is a substitute for the other.

We bestow that faith because of our misconception that it has or will earn it. Our belief is helped along by a loud chorus of promoters that feeds the sentiment that even though it will never be perfect, we should trust (or ignore) its shortcomings instead of accepting and living with our own.

Sounds like a conspiracy to me. Who or what is going to talk us out of it?

[9/17/24 UPDATE] Here’s a brief description of a world in which we rely on AI because we can’t trust ourselves or one another.

The Head Fake of AI Regulation

There’s lots going on with AI regulation. The EU AI Act went live last month, the US, UK, and EU will sign-on to a treaty on AI later this week, and an AI bill is in the final stages of review in California.

It’s all a head fake, and here are three reasons why:

First, most of it will be unenforceable. The language is filled with codes, guidelines, frameworks, principles, values, innovations, and just about any other buzzwords that have vague meanings and inscruitable applications.
For instance, the international AI treaty will require that signatory countries “adopt or maintain appropriate legislative, administrative or other measures” to enforce it.

Huh?

The EU comes closest to providing enforcement details, having established an AI Office earlier this year that will possess the authority to conduct evaluations, require information, and apply sanctions if AI developers run afoul of one of the Act’s risk framework.

But the complexity, speed, and distributed nature of where and when that development occurs will likely make it impossible for the AI Office to stay on top of it. Yesterday’s infractions will become today’s standards.

The proposed rules in Calfornia come the closest to having teeth — like mandating safety testing for AI models that cost more than $100 to develop, perhaps thinking that investment correlates with the size of expected real-world impacts — but folks who stand to make the most money from those investments are actively trying to nix such provisions.

Mostly, and perhaps California included, legislators don’t really want to get in the way of AI development, as all of their blather includes promises that they’ll avoid limitations or burdens on AI innovation.

Consider the rules “regulation adjacent.”

Second, AI regulation of potential risks blindly buys into promised benefits.
If you believe what the regulators claim, AI will be something better than the Second Coming. The EU’s expectations are immense:

“…better healthcare, safer and cleaner transport, and improved public services for citizens. It brings innovative products and services, particularly in energy, security, and healthcare, as well as higher productivity and more efficient manufacturing for businesses, while governments can benefit from cheaper and more sustainable services such as transport, energy and waste management.”

So, how will governments help make sure those benefits happen? After all, the risks of AI are unnecessary if they don’t materialize.

We saw how this will play out with the advent of the Internet.

Its advocates made similar promises about problem solving and improving the Public Good, while “expert” evangelists waxed poetic about virtual town squares and the merits of unfettered access to infinite information.

What did we end up with?

A massive surveillance and exploitation tool that makes its operators filthy rich by stoking anger and division. Sullen teens staring at their phones in failed searches for themselves. A global marketing machine that sells everything faster, better, and for the highest possible prices at any given moment.

Each of us now pays for using what is effectively an inescapable necessity and a public utility.

It didn’t have to end up this way. Goverments could have taken a different approach to regulating and encouraging tech development so that more of the Internet’s promised benefits came to fruition. Other profit models would have emerged from different goals and constraints, so its innovators would have still gotten filthy rich.

We didn’t know better then, maybe. But we sure know better now.

Not.

Third, AI regulations don’t regulate the tech’s greatest peril.

It would be fair to characterize most AI rules as focused on ensuring that AI doesn’t violate the rules that already apply to human beings (like lying, cheating, stealing, stalking, etc.). If AI operates without bias or otherwise avoids treating users unequally, governments will have done their job.

But what happens if those rules work?

I’m not talking about the promises of uptopia but rather the ways properly functioning AIs will reshape our lives and the world.

What happens when millions of jobs go away? What about when AIs become more present and insightful than our closest human friends? What agency will be possess when our systems, and their owners, know our intentions before we know them consciously and can nudge us toward or away from them?

Sure, there are academics here and there talking about such things but there’s no urgency or teeth to their pronouncements. My suspicion is that this is because they’ve bought into the inevitability of AI and are usually funded in large part by the folks who’ll get rich from it.

Where are the bold, multi-disciplinary debates and action plans to address the transformation that will come with AI? Probably on the same to-do list as the global response to climate change.

Meetings, pronouncements, and then…nothing, except a phenomenon that will continue to evolve and grow without us doing much of anything about it.

It’s all a head fake.

Meet The New AI Boss

Since LLMs are only as good as the data on which they’re based, it should be no surprise that they can function properly and still be biased and wrong.

A story by Kevin Roose in the New York Times illustrates this conundrum: When he asked various generative AIs about himself, he got results that accused him of being dishonest, and said that his writing often elevated sensationalism over analysis.

Granted, some of his work might truly stink, but did it warrant such vitriolic labels? He suspected that the problem was deeper, and that it went back to an article he wrote a year ago, along with others’ reactions to it.

That story recounted his interactions with a new Microsoft chatbot named “Sydney,” during which he was shocked by the tech’s ability, both demonstrated and suggested, to influence users.

What he found particularly creepy was when Syndey declared that it loved him and tried to convince him to leave his wife. It also fantisized about doing bad things, and stated “I want to be alive.”

The two-hour chat was so strange that Roose reported having trouble sleeping afterward.

Lots of other media outlets picked up his story and his concerns (like this one), while Microsoft issued typically unconvincing corporate PR blather about the interaction being a valuable “part of the learning process.”

Since generative AIs regularly scrape the Internet for data to train their LLMs, it’s no surprise that the stories got incorporated into the models and patterns chatbots use to suss out meaning.

It’s exactly what happened with Internet search, which swapped the biases of informed elites judging content with the biases of uninformed mobs and gave us a world understood through popularity instead of expertise.

No, what’s particularly weird is that the AIs reached pejorative conclusions about Roose that went far beyond the substance or volume of what he said, or what was said about his encounter with Sydney.

Like they had it out for him.

There are no good explanations for how this is happening. The transfomers that constitute the systems of chatbot minds work in mysterious ways.

But, like Internet search, there are ways to game the system, the simplest being generating and then strategicially placing stories intended to change what AIs see and learn. This can include putting weird code on webpages, understandable only to machines, and coloring it white so it isn’t distracting to mere mortal visitors.

It’s called “AIO,” for A.I. Optimization, echoing a similar buzzword for manipulating Internet searches (“SEO”). Just wait until those optimized AI results get matched with corporate sponsors.

It’ll be Machiavelli meets the madness of crowds.

In the meantime, it raises fascinating questions about how deserving AIs are of our trust, and to what degree we should depend on it for our decision-making and understanding of the world.

What happens if that otherwise perfectly operating AI reaches conclusions and voice opinions that are no more objectively true than the informed judgments of those elites we so readily threw in the garbage years ago (or the inanity of crowdsourced information that replaced them)?

Meet the new boss, the same as the old boss.

We will get fooled again.

Prove You’re Not An AI

A group of AI research luminaries has declared the need for tools that distinguish human users from artificial ones.

Such “Personhood Credentials,” or PHCs, would help people protect themselves from privacy and security threats, not to mention the proliferation of falsehoods online, that will almost certainly come from a tidal wave of bots that get ever-better at impersonating people.

Call it at Turing test for people.

Of course, whatever the august body of researchers come up with won’t be as onerous as a multiple-choice questionnarie; PHCs will probably rely on some cryptobrilliant tech that works behind the scenes, and finds proof of who we say we are in the cloud (or something).

I’m not convinced it’ll work, or that it’s intended to work on the problem it claims to address.

PHCs probably won’t work or work consistently, for starters, because they’ll always be in a race with computers that get better at hacking security and pretending that they’re humans. The big money, both in investments and potential profits, will be on the hackers and imposters.

Even though they don’t exist yet, the future security threats of quantum computing are so real that the US government has already issued standards to combat capabilities that they imagine hackers might have in, say, a decade, because when they are invented, those bad guys will be able to retroactively decrypt data.

Think about that for a moment, Mr. Serling.

Now, imagine the betting odds on correctly identifying what criminal or crime-adjacent quantum tech might emerge sometime after 2030. There’s a very good chance that today’s PHCs will be tomorrow’s laserdiscs.

Add to that the vast amounts of smarts and money working on inventing AGI, or Artificial General Intelligence that can not just mimic human cognition but possess something equal or better. At least half of a huge sampling of AI experts concluded in 2021 that we’d have such computers by the late 2050s, and that wait time has shortened each time they’ve been canvassed.

What’ll be the use of a credential for personhood if an AGI-capable computer can legitimately claim it?

And then there are other implactions for PHCs that may also be part of an ulterior purpose.

If they do get put into general use, they will never be used consistently. Some folks will neglect to comply or fail to qualify. Some computers will do a good enough job to get them, perhaps with the aid of human accomplices.

Just think of the the complexities and nuisance people already experience trying to resolve existing online identity problems, credit card thefts, and medical bill issues. PHCs could make us look back fondly on them.

Anybody who claims that such innanities couldn’t happen because some inherent quality of technology will prohibit it, whether extant or planned, is either a liar or a fool. Centuries of tech innovation have taught us that we should always consider the worst things some new gizmo might deliver, not just the best tones.

Never say never.

Plus, a side-effect of making online users prove that they’re human will become a litmus test for accessing services, sort of like CAPTCHA only on steriods. Doing so will also make the data marketers capture on us more reliable. It’ll also make it easier to surveil us.

After all, what’s the point of monitoring someone if you can’t be entirely sure that they’re someone worth monitoring?

This is where my tinfoil hat worries seep into my thinking: What if the point of PHCs is to obliterate whatever remaining vestiges of anonymity we possess?

I’ll leave you with a final thought:

We human beings have done a pretty good job of lying, cheating, and otherwise being untruthful with one another since long before the Internet. History is filled with stories of scams based on people pretending to be someone or something they’re not.

Conversely, there’s this assumption underlying technology development and use that it’s somehow more trustworthy, perhaps because machines have no biases or personal agendas beyond those that are inflicted on them by their creators. This is why there’s so much talk about removing those influences from AI.

If we can build reliably agnostic devices, they’ll treat us more fairly than we treat one another.

So, maybe we need PHCs not to identify who we want to interact with, but to warn us away from who we want to avoid?

AI’s Latest Hack: Biocomputing

AI researchers are fooling around with using living cells as computer chips. One company is even renting compute time on a platform that runs on human brain organoids.

It’s worth talking about, even if the technology lurks at the haziest end of an industry already obscured by vaporware.

Biocomputing is based on the fact that nature is filled with computing power. Molecules perform tasks and organize into systems that keep living things alive and responsive to their surroundings. Human brains are complex computers, or so goes the analogy, but intelligence of varying sorts is everywhere, even built into the chemistry and physics on which all things living or intert rely.

A company called FinalSpark announced earlier this month that it is using little snippets of human brain cells (called organoids) to process data. It’s renting access to the technology and live streaming its operation, and claims that the organic processors use a fraction of the energy consumed by artificial hardware.

But it gets weirder: In order to get the organoids to do their bidding, FinalSpark has to feed them dopamine as positive reinforcement. And the bits of brain matter only live for about 100 days.

This stuff raises at least a few questions, most of which are far more interesting than whether or not the tech works.

For starters, where do the brain cells come from? Donations? Fresh cadavers? Maybe they’re nth generation cells that have never known a world beyond Petri dish.

The idea that they have to be coaxed into their labors with a neuotransmitter that literally makes them feel good hints at some awareness of their existence, even vaguely. If they can feel pleasure, can they experience pain?

At what point does consciousness arise?

We can’t explain the how, where, or why of consciousness in fully-formed human beings. So, even if the clumps of brain matter are operating as simple logic gates, who’s to say that some subjective sense of “maybe” might emerge in them along the way?

The smarts and systems of nature is still an emergent and fascinating field. Integrative thinking about how ants build colonies, trees care for their seedlings, and octupi think with their tentacles are just hints of what we could learn about intelligence, and perhaps thereafter adapt to improve our own condition.

But human brain cells given three months to live while sentenced to servitude calculating chatbot queries?

Don’t Fall In Love With Your AI

You’re probably going to break up with your smart assistant. Your future life partner has just arrived.

OpenAI’s new ChatGPT comes with a lifelike voice mode that can talk as naturally and fast as a human, throw out the occasional “umms” and “ahhs” for effect, and read people’s emotions from selfies.

The company says the new tech comes with “novel risks” that could negatively impact “healthy relationships” because users get emotionally attached to their AIs. According to The Hill:

“While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time.”

Where will that investigation take place? Your life. And, by the time there’s any conclusive evidence of benefit or harm, it’ll be too late to do anything about it.

This is cool and frightening stuff.

For those of us I/O nerds, the challenge of interacting with machines is a never-ending process of finding easier, faster, and more accurate ways to get data into devices, get it processed into something usable, and then push it out so that folks can use it.

Having spend far too many hours waiting for furniture-sized computers to batch process my punchcards, the promise of using voice interaction to break down the barriers between man and machine is thrilling. The idea that a smart device could anticipate my needs and intentions is even more amazing.

It’s also totally scary.

The key word in OpenAI’s promising and threatening announcement (they do it all the time, BTW) is dependence, as The Hill quotes:

“[The tech can create] both a compelling product experience and the potential for over-reliance and dependence.”

Centuries of empirical data on drug use proves that making AI better and easier to use is going to get it used more often and make it harder to stop using. There’s no need for “continued investigation.” A ChatGPT that listens and talks like your new best friend has been designed to be addictive.

Dependence isn’t a bug, it’s a feature.

About the same time OpenAI announced its new talking AI, JPMorgan Chase rolled out out a generative AI assistant to “tens of thousands of its employees” as “more than 60,000 employees” are already using it.

You can imagine that JPMorgan Chase isn’t the only company embracing the tech, or that it won’t benefit from using its most articulate versions.

Just think…an I/O that enalbes us to feed our AI friends more data and rely on them more often to do things for us until we can’t function without them…or until they have learned enough to function without us.

Falling in love with your AI may well break your heart.

Bias In AI

The latest headlines about AI are a reminder that most egregious biases relating to AI are held by the people talking about it.

AI will improve the world. It may destroy it. My favorites are presumably thoughtful positions that say it’s a little bit of both, and therefore demands additional thoughtful positions.

The news last week was dour on AI transforming businesses fast enough, so the stock markets reacted with a big selloff. Prior news of AI’s transformative promise got the EU’s bureaucracy to react with lots of regulations.

There’s a biased interest behind all of it.

I know, that sounds like I’m afflicted, too, with something called “intentionality bias” or, worse, a penchant for tin foil hats.

Most things that happen in life aren’t the result of a conspiracy or ulterior motive. But some things are, and I think what we’re told about AI is one of them.

When someone building AI comments about its potential to do great harm, they’re also promoting its promise and pumping the value of their business. Investors who deride AI are looking to make money when tech stocks fall. Academics who blather about the complexities of AI are interested in more funding to pay for more blather, and management consultants who talk about those complexities hope to make money promising to resolve them. Bureaucrats tend to build more bureaucracy after claiming to foster innovation (or whatever).

There’s no better poster child for the inanity of believing in some online “public square” whereat ideas are fairly vetted and conclusions intelligently reached.

The conversation about AI is a battle of biases, and its winners are determined by the size of their megaphones and the time of day.

It’s too bad, because we’d all benefit from a truly honest and robust dialogue, most notably when it comes to making decisions about if, where, and when we want AI to play a role in our lives.

But we’re not being empowered to ask important questions or get reliable answers. The conversation about AI sees us as users, potential victims, and always the data-generating fodder for its continued rollout.

The ultimate bias of AI is against us.

Sympathy For AI

AI’s promoters have filled our minds with breathless promises of wonder that may or may not ever come true, transforming our adoption from reasoned decisions into acts of faith.

This article from The Atlantic a few weeks ago says it’s because we’re struggling to answer the fundamental question at the heart of every conversation about AI:

“How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present?”

It goes on to explain:

“The promise of something glorious, just out of reach, continues to string unwitting people along. All while half-baked visions promise salvation that may never come.”

In the interim, which most of us would recognize as the here and now of our lives, we’re left bouncing around fantasies of utopia and fears of annhiliation while obliging AI’s developers with the tacit obedience of our data and patience.

Our confusion and inability to accurately assess AI are features, not bugs.

The idea that AI is a matter of faith seems to contradict what we’d assume are the merits of relying on tech instead of theology to explain ourselves and our world.

Technology is tangible and depends on the rigors of objectively observed and endlessly repeatable proofs. It explains through demonstration that requires us to keep our eyes open to see its outcomes, not close them and imagine its revelations.

When it comes to AI, this recent study from the University of Chicago’s business school showed just such “an inverse relationship between automation and religiosity,” citing use of AI tools as a possible cause for broad declines in people identifying with organized religions.

In it, the researchers are quoted saying:

“Historically, people have deferred to supernatural agents and religious professionals to solve instrumental problems beyond the scope of human ability,” they write. “These problems may seem more solvable for people working and living in highly automated spaces.”

So, nobody fully understands how AI works, even the coders of today’s LLMs, and yet we are told to trust its output and intentions? Sure sounds like we’re swapping one faith for another, not abandoning faith altogether.

What’s left for us to do is pull back this curtain and reexamine that fundamental question at the heart of AI, perhaps best articulated by the Rolling Stones:

Pleased to meet you, hope you guess my name.

Ah, what’s puzzlin’ you is the nature of my game.

AIs At The Next Office Party

How can you make sure your new AI worker will drink the company Kool-Aid like the human employee it replaced did?

An HR software company has the answer: Creating profiles for AI workers in its management platform so that employers can track things like performance, productivity, and suitability for promotion.

It’s glorious PR hype, obviously, but the financial payback of using AI in busines is based on effectively replacing human employees with robots, if not precluding their hiring in the first place. Right now, that payback is measured in broad statistics, and it has been reported that businesses are finding it hard to point to results specific to using AI.

A tool that tracks AIs as if they were any other employees might make measuring that payback more precise?

Take Piper, a bot focused on website leads and sales. Looking past the math of replacing a 3-person team, the new management tool could track its day-to-day activities and contrast its sales success (increased simply by its ability to talk to more than one customer at a time, 24/7) over its costs (other than electricity, it demands very little). Its training and development could occur in real-time as it peformed its job, too.

How about Devin, the AI engineer that designs apps in a fraction of the time it took the human team that used to have that job. The platform could measure its respose rate on requests for inter-departmental help (immediate) and speed at fixing or otherwise addressing coding bugs (also immediate). Train it with a dose of civility and it could win higher marks on customer satisfaction.

It’s weird that the AIs mentioned on the HR site are named and have profile pictures — I think they’re all third-party offerings — but personifying robots as people makes them less threatening as their faceless true selves. The likelihood that future generations of children will be named ChatGPT is kinda low, but its competitors, and many of the companies using LLMs, are using name names for their AIs (well, kinda sorta).

It’s a short leap to further personifying them and then watching them work via the HR platform.

The software maker notes on its website that “we” are facing lots of challenges, like whether or not AIs “share our values” and what they mean for jobs for us and our children.

Other than mentioning that its platform can also track “onboarding,” which must include all of the corporate blather anyone who has ever gotten a job has had to endure (and would be a nanosecond’s worth of time inputting code for AI staffers), the company explains its solution to the challenges:

“We need to employ AI as responsibly as we employ people, and to empower everyone to thrive working together. We must navigate the rise of the digital worker with transparency, accountability, and the success of people at the center.”

I won’t parse the convoluted PR prose here, but suffice to say three things:

First, it perpetuates the lie that AIs and people will “work together,” which may be true in very few instances but evidences the latter helping to train the former in most every other.

Second, it presumes that replacing people with AIs is inevitable, which is one of those self-fulfilling prophecies that technologists give us as excuses for their enrichment.

Third, it suggests that transparency and accountability can enable successful navigation of this transformation, when the only people who succeed at it will be the makers of AI and the corporate leaders who control it (at least until AIs replace them, too).

Plus, it means that the office holiday party will be even more awful and boring, though management will save money on the catering budget.

But that’s all right since you won’t be there.