Sympathy For AI

AI’s promoters have filled our minds with breathless promises of wonder that may or may not ever come true, transforming our adoption from reasoned decisions into acts of faith.

This article from The Atlantic a few weeks ago says that’s because we’re struggling to answer the fundamental question at the heart of every conversation about AI:

“How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present?”

It goes on to explain:

“The promise of something glorious, just out of reach, continues to string unwitting people along. All while half-baked visions promise salvation that may never come.”

In the interim, which most of us would recognize as the here and now of our lives, we’re left bouncing between fantasies of utopia and fears of annihilation while obliging AI’s developers with the tacit obedience of our data and patience.

Our confusion and inability to accurately assess AI are features, not bugs.

The idea that AI is a matter of faith seems to contradict what we’d assume are the merits of relying on tech instead of theology to explain ourselves and our world.

Technology is tangible and depends on the rigors of objectively observed and endlessly repeatable proofs. It explains through demonstration that requires us to keep our eyes open to see its outcomes, not close them and imagine its revelations.

When it comes to AI, this recent study from the University of Chicago’s business school found just such a dynamic: “an inverse relationship between automation and religiosity,” citing use of AI tools as a possible cause of broad declines in people identifying with organized religions.

In it, the researchers write:

“Historically, people have deferred to supernatural agents and religious professionals to solve instrumental problems beyond the scope of human ability. These problems may seem more solvable for people working and living in highly automated spaces.”

So, nobody fully understands how AI works, not even the coders of today’s LLMs, and yet we are told to trust its output and intentions? Sure sounds like we’re swapping one faith for another, not abandoning faith altogether.

What’s left for us to do is pull back this curtain and reexamine that fundamental question at the heart of AI, perhaps best articulated by the Rolling Stones:

Pleased to meet you, hope you guess my name.

Ah, what’s puzzlin’ you is the nature of my game.

AIs At The Next Office Party

How can you make sure your new AI worker will drink the company Kool-Aid the way the human employee it replaced did?

An HR software company has the answer: create profiles for AI workers in its management platform so that employers can track things like performance, productivity, and suitability for promotion.

It’s glorious PR hype, obviously, but the financial payback of using AI in business is based on effectively replacing human employees with robots, if not precluding their hiring in the first place. Right now, that payback is measured in broad statistics, and businesses are reportedly finding it hard to point to results specific to their use of AI.

Might a tool that tracks AIs as if they were any other employees make measuring that payback more precise?

Take Piper, a bot focused on website leads and sales. Looking past the math of replacing a 3-person team, the new management tool could track its day-to-day activities and weigh its sales success (increased simply by its ability to talk to more than one customer at a time, 24/7) against its costs (other than electricity, it demands very little). Its training and development could occur in real time as it performed its job, too.
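For a rough sense of that math, here’s a minimal sketch of the payback comparison such a platform might surface. Every figure in it is hypothetical, invented purely for illustration:

```python
# Hypothetical payback math for an AI sales bot replacing a 3-person team.
# All figures below are invented for illustration, not real pricing.

HUMAN_TEAM_SIZE = 3
COST_PER_REP = 65_000      # assumed fully loaded annual cost per human rep
BOT_ANNUAL_COST = 40_000   # assumed annual subscription cost for the bot
CONCURRENT_CHATS = 10      # assumed number of conversations the bot holds at once

human_cost = HUMAN_TEAM_SIZE * COST_PER_REP

# A rep covers roughly 2,000 working hours a year; the bot answers
# leads around the clock, several conversations at a time.
human_hours = HUMAN_TEAM_SIZE * 2_000
bot_hours = 365 * 24 * CONCURRENT_CHATS

print(f"Annual cost: humans ${human_cost:,} vs. bot ${BOT_ANNUAL_COST:,}")
print(f"Coverage:    humans {human_hours:,} h vs. bot {bot_hours:,} conversation-hours")
print(f"Savings:     ${human_cost - BOT_ANNUAL_COST:,} per year")
```

Crude as it is, that’s the comparison such a platform promises to automate.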

How about Devin, the AI engineer that designs apps in a fraction of the time it took the human team that used to have that job? The platform could measure its response rate on requests for inter-departmental help (immediate) and its speed at fixing or otherwise addressing coding bugs (also immediate). Train it with a dose of civility and it could win higher marks on customer satisfaction.

It’s weird that the AIs mentioned on the HR site are named and have profile pictures — I think they’re all third-party offerings — but personifying robots makes them less threatening than their faceless true selves. The likelihood that future generations of children will be named ChatGPT is kinda low, but its competitors, and many of the companies using LLMs, are giving their AIs human names (well, kinda sorta).

It’s a short leap to further personifying them and then watching them work via the HR platform.

The software maker notes on its website that “we” are facing lots of challenges, like whether or not AIs “share our values” and what they mean for jobs for us and our children.

Other than mentioning that its platform can also track “onboarding,” which must include all of the corporate blather anyone who has ever gotten a job has had to endure (and which would amount to a nanosecond of code input for AI staffers), the company explains its solution to the challenges:

“We need to employ AI as responsibly as we employ people, and to empower everyone to thrive working together. We must navigate the rise of the digital worker with transparency, accountability, and the success of people at the center.”

I won’t parse the convoluted PR prose here, but suffice it to say three things:

First, it perpetuates the lie that AIs and people will “work together,” which may be true in a few instances but in most others amounts to people training the AIs that will replace them.

Second, it presumes that replacing people with AIs is inevitable, which is one of those self-fulfilling prophecies that technologists give us as excuses for their enrichment.

Third, it suggests that transparency and accountability can enable successful navigation of this transformation, when the only people who succeed at it will be the makers of AI and the corporate leaders who control it (at least until AIs replace them, too).

Plus, it means that the office holiday party will be even more awful and boring, though management will save money on the catering budget.

But that’s all right since you won’t be there.

Get Ready For The AI Underworld

Turns out that crime can be automated.

New Scientist reports that a recent research project asked ChatGPT to rewrite a piece of malware’s code so that it could be deposited on a computer without detection.

“Occasionally, the chatbot realized it was being put to nefarious use and refused to follow the instructions,” according to the story.

Occasionally.

This is wild stuff. The LLMs on host computers were “asked” by software hidden in an email attachment to rename and slightly scramble the malware’s structure, and then to find email chains in Outlook and compose contextually relevant replies with the original malware file attached.

Rinse and repeat.

The skullduggery wasn’t perfect, as there was “about a 50% chance” that the chatbot’s creative changes would not only hide the virus program but also render it inoperable.

Are the participating LLMs bad actors? Of course not, we’ll be told by AI experts. Like Jessica Rabbit, they’re not bad, they’re just programmed that way.

Or not.

It’s hard not to see similarities with the ways human criminals are made. There’s no genetic marker for illegal behavior, last time I checked. The life journeys that lead to it are varied and nuanced. Influences and indicators might make it more likely, but they’re not determinative.

Code an LLM to always resist every temptation to commit a crime? I don’t think it’s any more possible than it would be to reliably raise a human being to be an angel. No amount of rules can anticipate the exigencies of every particular experience.

One could imagine LLMs getting particularly good at doing bad things, if not becoming repeat offenders without human encouragement.

“Hey, that’s a nice hard drive you’ve got there. It would be a shame if something happened to it.”

Protection. Racketeering. Theft. Mayhem for the sake of it. AI criminals lurking across the Internet and anywhere we use a smart device.

An AI Underworld.

The solution, according to the expert quoted in the New Scientist article, is to produce a cadre of white hat LLMs to preempt the bad actors, or catch them after they’ve committed their crimes.

Think Criminal Minds, The AI Version.

Who knows how bad or rampant such give-and-take might get, but one thing’s for certain: There’ll be lots of money to be made by AI developers trying to protect people, businesses, and institutions from the dangers of their creations.

And that, after all, is the point of why they’re giving us AI in the first place.

The AI Revolution Is A Boiling Frog

I think we’ll experience the rollout of AI as a similar set of implementations at work and in our homes. The media and VC-fueled startups might still talk about step changes, like AGI, but that won’t change the substance, speed, or inevitability of the underlying process.


AI Is A Religion, Not A Technology

Belief is at the core of our relationship with AI, and it informs our hopes and fears for developing and using it.

We believe that it possesses authority, independence, and reliability that exceeds our meager human capabilities. We believe that it should be a constant source of useful counsel in our daily lives. We believe that any problems that it might present are actually shortcomings in ourselves. There’s no bad AI, just bad people.

We believe that it will improve us. All we have to do is trust and use it.

A religion isn’t founded on churches or clergy, but rather on personal faith. It’s an internal thing, something visceral and impossible to explain yet something that individuals find undeniably true and impossible to ignore. The canonical rules, rituals, and convoluted theological arguments come later. 

Religion starts with belief, and we believe in AI.

So what? 

It means that talking about AI as a technology similar to looms or other efficiency tools is incomplete, if not inaccurate. 

Nobody has to believe in the merits of a wrench in order to use it. Faith isn’t required to see a factory assembly line spit out more widgets. Trust in the capabilities of new washing machines was no longer necessary after the first loads were done.

Engineers who develop such technologies don’t rely as much on belief as they do on the knowledge that X functions will yield Y results. It’s physics first, closely followed by economics. The philosophy stuff comes much later and is more color commentary than play-by-play insight.

Governments know how to regulate behaviors and things. Oversight of technologies (like social media, most recently) is limited by a lack of understanding, not because said tech is somehow unknowable. VCs and markets don’t need to understand new technologies as much as have the guts to bet on how others will value their promises.

AI is different.

We have all assumed that adding AI to our daily lives is a good idea. It is a given, based on faith alone, and therefore the questions we ask of it aren’t if or why but simply what and when. Its overall benefits will always supersede any hiccups we might experience as individuals, presuming we’re even aware of its actions.

The folks developing AI share a similar belief in its overarching promise to improve our lives and the world (and their bank accounts). The revealed truths of data science are unquestionable, so again, the conversations they have focus narrowly on that journey to ultimate enlightenment. Questions of its purpose yield to specifics of its incremental revelation.

And it turns out there is an aspect of AI that is intrinsically unknowable, as LLMs already answer questions they shouldn’t know how to address, teach themselves to do things without prompts, and even hint at being aware of what they’re doing.

Joke all you want about aged legislators not understanding how to operate their smartphones, but they can’t, and never will, know how to properly regulate a phenomenon that isn’t just unknowable but will incessantly evolve and adapt.

AI advocates are happy with this outcome, as it allows them to pursue their faith unfettered by empowered doubters. 

We’re missing the point if all we see is a technology that helps us write things or find better deals on shoes or, at the other extreme, might fix climate change or annihilate humanity.

AI is something more than just another technology. It’s a belief system that has already changed how we view ourselves and one another, and how we’ll behave moving forward.

Welcome to the flock.