Your New Best Friend Will Be An AI

The world’s leading promoters of AI tech are madly racing to deliver a new generation of smart assistants.

According to Google’s Sundar Pichai as quoted in the Financial Times earlier this month, these intelligent systems, or AI agents, can “show reasoning, planning and memory, are able to ‘think’ multiple steps ahead, and work across software and systems, all to get something done on your behalf.”

They won’t just make Siri and Alexa look like stodgy old robots.

They’ll put your best friend out of a job.

Delivering these AI agents requires no difficult tech breakthrough; consumer adoption requires better contextual understanding and reduced latency, so that an AI can be not only super smart but immediately present in any circumstance. The proliferation of high-speed Wi-Fi and the incessant training of LLMs like ChatGPT make realizing the tech’s potential a question not of if, but when.

The CEO of Google’s DeepMind described that potential in the same Financial Times article:

“At any given moment, we are processing a stream of different sensory information, making sense of it and making decisions. Imagine agents that can see and hear what we do, better understand the context we’re in, and respond quickly in conversation…”

Such AI agents would also better understand you by not only participating in your decision-making but studying how you got to those decisions and then what you did thereafter.

They’d exist solely for your benefit and well-being, serving as loyal and ever-vigilant sidekicks that anticipated your every need, engaged with you on any topic, and supported you in making decisions large and small.

No ulterior motives. No problems of their own, no activities prioritized over yours. No need to sleep, eat, or do anything other than keep an electric charge.

Just full-time collaborators, confidants, and co-conspirators.

They’d make our human best friends look like stodgy old humans.

Only the companies racing to deliver AI agents aren’t really being honest about it. The world they describe isn’t just one of individuals using their own personal AIs, but rather one in which all of us are embedded in intelligent systems.

It’s already the case: every time we use our smartphones, search for something on the Internet, or swipe a card to buy something or get on a bus, we’re sharing information with systems that learn from that data.

But opening up our lives to an AI that bears witness to our every situation, decision, and behavior sounds like the Holy Grail for interests that want to profit from that data. It’s why the biggest tech companies are spending zillions on it.

It would also drastically change our very definitions of agency and self-hood.

Consider the implications…

Even bluntly dumb technology suggests or nudges us in directions of which we may not be consciously aware. Speed bumps in the road tell us to slow down. Readers in turnstiles tell us to get keycards from our pockets. A fork tells us which end to hold and when to use it.

Technology has intentionality that goes beyond the goals and expectations of its designers, too.

I am uncomfortable leaving my apartment without first checking the weather on my smartphone. Spell check highlights words that it can’t find, thereby moving me to rewrite sentences. Waiting rooms for online sales events require that I find things I can do (that I might not otherwise do) while I leave the connection open.

AI agents “better understanding the context we’re in” will make it harder for us to make a growing list of decisions on our own, even as we may be happy about the convenience of it. It may well encourage us to doubt our own conclusions until we’ve checked with it.

In conversations with other people, it’ll challenge us to separate what ideas or prompts come from us versus our tech tools, or blur the line so we won’t even know the difference. Will people who have better, more expensive AI agents experience better outcomes?

The commercial part makes these questions a lot scarier.

For instance, how will we know that our very personal AI best friends aren’t in some way being influenced by corporate sponsors? Think of all the crap that shows up in Internet search, and now imagine it seamlessly integrated into recommendations delivered in a soothingly friendly voice.

Now think of even your slightest, most seemingly inconsequential decisions getting monetized.

AI agents could become the ultimate influencers.

What about having confidence that your AI best friend will keep your secrets? There may well be settings that let users say “no” to sharing various aspects of their data but the functional promise of the tech requires the opposite. Already, today’s digital assistants train on some level of data collected from all other agents, however “anonymized,” and deliver recommendations based on what they learn.

Further, what happens when they don’t just recommend something but demand it?

It’s not hard to imagine a time when an AI agent could serve as a representative of a business that has an interest in your life. An insurance company that wants you to lose weight could require you to allow your tech not just to make “helpful” suggestions but to monitor and report your progress (or lack thereof).

“The low-cal pretzels are a buck cheaper right now,” it could tell you as you were shopping for a snack. Or refuse to add soda pop to your order.

What if your AI agent tells you that your insurance premium just went up because you decided to speed? Think about that sort of information being shared with would-be employers or the government.

And then think about your personal interactions with other people, and how being part of an intelligent system could shape them: assessing the veracity of statements, or playing out various questions and responses to maximize some desired outcome.

You would never be alone, and nothing you did would be wholly yours.

Like I said, this is the Holy Grail of AI development, because embedding us in intelligent systems will integrate our lives into their operation and subject our experiences to their intentions.

It’ll be sold to us as a harmless convenience. Something that happily sits in our pockets, on our wrists, or in our glasses frames (or wherever).

Your new best friend.

There Aren’t Two Sides To The AI Debate

We have been led into a dead-end debate between two opinions of AI.

On one side, there are the techno-optimists who believe that advancements in AI will usher in a new era of convenience, safety, and prosperity. On the other are Neo-Luddites who think that machines will destroy our livelihoods and then our very existence.

AI will either create a New Eden or usher in the Apocalypse. Magical promise or existential threat.

Only there is no debate between these two extremes. In fact, they don’t exist much beyond the machinations of people who have something to gain from casting the debate in such terms.

AI will create loads of benefits and huge risks, and it will do so by drastically remaking our world.

There is no debate about this fact. But there should be a debate about what we’re going to do about it.

AI is already changing how decisions get made and work gets done. Intelligent machines have been replacing human workers at desks and assembly lines for years. The capacity of LLMs to mimic human awareness and reasoning already challenges our conceptions of our own uniqueness.

And yet our public debate rarely gets past declarations about why these changes are good or bad.

Our public policies focus on doomed attempts to ensure that AIs operate “fairly” in specific instances while the unfairness of their widespread use goes untouched.

Management consultants issue reports on the massive economic potential for using AI that send the stock market into rapturous swoons.

Governments establish committees and task bureaucrats with applying a light touch to AI development so as to not stifle innovation.

Academics blather mostly nonsense about AI, their jobs dependent on the very companies that stand to make money from its use.

AI experts pop up periodically to warn of impending doom as they shrug away their responsibility for it (and raise more cash to ensure it).

Where’s the international debate about where we’re going to get all of that electricity that AIs will consume? Where are the national Manhattan Projects devising ways to preclude or ameliorate the massive job displacements that are central to AI’s success? Why isn’t every organized religion encouraging its leaders and flocks to actively reexamine their spirituality?

The debate about AI should include all of these voices and focus on what we will get, and what we will have to pay, as use of the technology gets more common and consistent.

Those impacts are certain. So, there’s no need to love AI or hate it because there’s no debate.

Every day that passes in which we don’t acknowledge this truth surrenders the conversation to parties who see personal gain in casting it as a debate between two irreconcilable extremes.

And that’s the true existential threat.