AI Is A Religion, Not A Technology

Belief is at the core of our relationship with AI, and it informs our hopes and fears for developing and using it.

We believe that it possesses authority, independence, and reliability that exceeds our meager human capabilities. We believe that it should be a constant source of useful counsel in our daily lives. We believe that any problems that it might present are actually shortcomings in ourselves. There’s no bad AI, just bad people.

We believe that it will improve us. All we have to do is trust and use it.

A religion isn’t founded on churches or clergy, but rather on personal faith. It’s an internal thing, something visceral and impossible to explain yet something that individuals find undeniably true and impossible to ignore. The canonical rules, rituals, and convoluted theological arguments come later. 

Religion starts with belief, and we believe in AI.

So what? 

It means that talking about AI as a technology similar to looms or other efficiency tools is incomplete, if not inaccurate. 

Nobody has to believe in the merits of a wrench in order to use it. Faith isn’t required to see a factory assembly line spit out more widgets. Trust in the capabilities of new washing machines was no longer necessary after the first loads were done.

Engineers who develop such technologies don’t rely as much on belief as they do on the knowledge that X functions will yield Y results. It’s physics first, closely followed by economics. The philosophy stuff comes much later and is more color commentary than play-by-play insight.

Governments know how to regulate behaviors and things. Oversight of technologies (like social media, most recently) is limited by a lack of understanding, not because said tech is somehow unknowable. VCs and markets don’t need to understand new technologies as much as have the guts to bet on how others will value their promises. 

AI is different.

We have all assumed that adding AI to our daily lives is a good idea. It is a given, based on faith alone, and therefore the questions we ask of it aren’t if or why but simply what and when. Its overall benefits will always supersede any hiccups we might experience as individuals, presuming we’re even aware of its actions.

The folks developing AI share a similar belief in its overarching promise to improve our lives and the world (and their bank accounts). The revealed truths of data science are unquestionable, so again, the conversations they have focus narrowly on that journey to ultimate enlightenment. Questions of its purpose yield to specifics of its incremental revelation.

And it turns out there is an aspect to AI that is intrinsically unknowable, as LLMs already answer questions they shouldn’t know how to address, teach themselves to do things without prompts, and even hint at being aware of what they’re doing.

Joke all you want about aged legislators not understanding how to operate their smartphones, but they can’t, and never will, properly regulate a phenomenon that isn’t just unknowable but will incessantly evolve and adapt.

AI advocates are happy with this outcome, as it allows them to pursue their faith unfettered by empowered doubters. 

We’re missing the point if all we see is a technology that helps us write things or find better deals on shoes or, at the other extreme, might fix climate change or annihilate humanity.

AI is something more than just another technology. It’s a belief system that has already changed how we view ourselves and one another, and how we’ll behave moving forward.

Welcome to the flock.

Your New Best Friend Will Be An AI

The world’s leading promoters of AI tech are madly racing to deliver a new generation of smart assistants.

According to Google’s Sundar Pichai, as quoted in the Financial Times earlier this month, these intelligent systems, or AI agents, can “show reasoning, planning and memory, are able to ‘think’ multiple steps ahead, and work across software and systems, all to get something done on your behalf.”

They won’t just make Siri and Alexa look like stodgy old robots.

They’ll put your best friend out of a job.

Delivering these AI agents requires no difficult tech breakthrough; consumer adoption requires better understanding and reduced latency, so that an AI can be not only super smart but also immediately present in any circumstance. The proliferation of high-speed Wi-Fi and the incessant training of LLMs like ChatGPT make the question about realizing the tech’s potential not one of if, but when.

The CEO of Google’s DeepMind described that potential in the same Financial Times article:

“At any given moment, we are processing a stream of different sensory information, making sense of it and making decisions. Imagine agents that can see and hear what we do, better understand the context we’re in, and respond quickly in conversation…”

Such AI agents would also better understand you by not only participating in your decision-making but studying how you got to those decisions and then what you did thereafter.

They’d exist solely for your benefit and well-being, serving as loyal and ever-vigilant sidekicks that anticipated your every need, engaged with you on any topic, and supported you in making decisions large and small.

No ulterior motives. No problems of their own, no activities prioritized over yours. No need to sleep, eat, or do anything other than keep an electric charge.

Just full-time collaborators, confidants, and co-conspirators.

They’d make our human best friends look like stodgy old humans.

Only the companies racing to deliver AI agents aren’t really being honest about it. The world they describe isn’t just one of individuals using their own personal AIs, but rather one in which all of us are embedded in intelligent systems.

It’s already the case: every time we use our smartphones, search for something on the Internet, or swipe a card to buy something or get on a bus, we’re sharing information with systems that learn from that data.

But opening up our lives to an AI that bears witness to our every situation, decision, and behavior sounds like the Holy Grail for interests that want to profit from that data. It’s why the biggest tech companies are spending zillions on it.

It would also drastically change our very definitions of agency and self-hood.

Consider the implications…

Even bluntly dumb technology suggests or nudges us in directions of which we may not be consciously aware. Speed bumps in the road tell us to slow down. Readers in turnstiles tell us to get keycards from our pockets. A fork tells us which end to hold and when to use it.

Technology has intentionality that goes beyond the goals and expectations of its designers, too.

I am uncomfortable leaving my apartment without first checking the weather on my smartphone. Spell check highlights words that it can’t find, thereby moving me to rewrite sentences. Waiting rooms for online sales events require that I find things I can do (that I might not otherwise do) while I leave the connection open.

AI agents “better understanding the context we’re in” will make it harder for us to make a growing list of decisions on our own, even as we may be happy about the convenience of it. It may well encourage us to doubt our own conclusions until we’ve checked with it.

In conversations with other people, it’ll challenge us to separate what ideas or prompts come from us versus our tech tools, or blur the line so we won’t even know the difference. Will people who have better, more expensive AI agents experience better outcomes?

The commercial part makes these questions a lot scarier.

For instance, how will we know that our very personal AI best friends aren’t in some way being influenced by corporate sponsors? Think of all the crap that shows up in Internet search, and now imagine it seamlessly integrated into recommendations delivered in a soothingly friendly voice.

Now think of even your slightest, most seemingly inconsequential decisions getting monetized.

AI agents could become the ultimate influencers.

What about having confidence that your AI best friend will keep your secrets? There may well be settings that let users say “no” to sharing various aspects of their data, but the functional promise of the tech requires the opposite. Already, today’s digital assistants train on some level of data collected from all other agents, however “anonymized,” and deliver recommendations based on what they learn.

Further, what happens when they don’t just recommend something but demand it?

It’s not hard to imagine a time when an AI agent could serve as a representative of a business that has an interest in your life. An insurance company that wants you to lose weight could require you to allow your tech to not just make “helpful” suggestions but monitor and report your progress (or lack thereof).

“The low-cal pretzels are a buck cheaper right now,” it could tell you as you were shopping for a snack. Or refuse to add soda pop to your order.

What if your AI agent tells you that your insurance rates have just gone up because you decided to speed? Think about that sort of information being shared with would-be employers or the government.

And then think about your interactions with other people, and how being part of an intelligent system could inform them: assessing the veracity of statements, or playing out various questions and responses to maximize some desired outcome.

You would never be alone, and nothing you did would be wholly yours.

Like I said, this is the Holy Grail of AI development, because embedding us in intelligent systems will integrate our lives into their operation and expose our experiences to their intentions.

It’ll be sold to us as a harmless convenience. Something that happily sits in our pockets, on our wrists, or in our glasses frames (or wherever).

Your new best friend.