AIs At The Next Office Party

How can you make sure your new AI worker will drink the company Kool-Aid like the human employee it replaced did?

An HR software company has the answer: Creating profiles for AI workers in its management platform so that employers can track things like performance, productivity, and suitability for promotion.

It’s glorious PR hype, obviously, but the financial payback of using AI in business is based on effectively replacing human employees with robots, if not precluding their hiring in the first place. Right now, that payback is measured in broad statistics, and it has been reported that businesses are finding it hard to point to results specific to using AI.

A tool that tracks AIs as if they were any other employees might make measuring that payback more precise.

Take Piper, a bot focused on website leads and sales. Looking past the math of replacing a 3-person team, the new management tool could track its day-to-day activities and contrast its sales success (increased simply by its ability to talk to more than one customer at a time, 24/7) with its costs (other than electricity, it demands very little). Its training and development could occur in real-time as it performed its job, too.

How about Devin, the AI engineer that designs apps in a fraction of the time it took the human team that used to have that job? The platform could measure its response rate on requests for inter-departmental help (immediate) and speed at fixing or otherwise addressing coding bugs (also immediate). Train it with a dose of civility and it could win higher marks on customer satisfaction.

It’s weird that the AIs mentioned on the HR site are named and have profile pictures — I think they’re all third-party offerings — but personifying robots as people makes them less threatening than their faceless true selves. The likelihood that future generations of children will be named ChatGPT is kinda low, but its competitors, and many of the companies using LLMs, are giving their AIs human names (well, kinda sorta).

It’s a short leap to further personifying them and then watching them work via the HR platform.

The software maker notes on its website that “we” are facing lots of challenges, like whether or not AIs “share our values” and what they mean for jobs for us and our children.

Other than mentioning that its platform can also track “onboarding,” which must include all of the corporate blather anyone who has ever gotten a job has had to endure (and which would take a nanosecond’s worth of code to input for AI staffers), the company explains its solution to the challenges:

“We need to employ AI as responsibly as we employ people, and to empower everyone to thrive working together. We must navigate the rise of the digital worker with transparency, accountability, and the success of people at the center.”

I won’t parse the convoluted PR prose here, but suffice to say three things:

First, it perpetuates the lie that AIs and people will “work together,” which may be true in a few instances but in nearly every other case amounts to the latter helping to train the former.

Second, it presumes that replacing people with AIs is inevitable, which is one of those self-fulfilling prophecies that technologists give us as excuses for their enrichment.

Third, it suggests that transparency and accountability can enable successful navigation of this transformation, when the only people who succeed at it will be the makers of AI and the corporate leaders who control it (at least until AIs replace them, too).

Plus, it means that the office holiday party will be even more awful and boring, though management will save money on the catering budget.

But that’s all right since you won’t be there.

Get Ready For The AI Underworld

Turns out that crime can be automated.

The New Scientist reports that a recent research project asked ChatGPT to rewrite its code so that it could deposit malware on a computer without detection.

“Occasionally, the chatbot realized it was being put to nefarious use and refused to follow the instructions,” according to the story.

Occasionally.

This is wild stuff. The LLMs on host computers were “asked” by software hidden in an email attachment to rename and slightly scramble their structures, and then find email chains in Outlook and compose contextually relevant replies with the original malware file attached.

Rinse and repeat.

The skullduggery wasn’t perfect as there was “about a 50% chance” that the chatbot’s creative changes would not only hide the virus program but render it inoperable.

Are the participating LLMs bad actors? Of course not, we’ll be told by AI experts. Like Jessica Rabbit, they’re not bad, they’re just programmed that way.

Or not.

It’s hard not to see similarities with the ways human criminals are made. There’s no genetic marker for illegal behavior, last time I checked. The life journeys that lead to it are varied and nuanced. Influences and indicators might make it more likely, but they’re not determinant.

Code an LLM to always resist every temptation to commit a crime? I don’t think it’s any more possible than it would be to reliably raise a human being to be an angel. No amount of rules can anticipate the exigencies of every particular experience.

One could imagine LLMs that get particularly good at doing bad things, if not becoming repeat offenders without human encouragement.

“Hey, that’s a nice hard drive you’ve got there. It would be a shame if something happened to it.”

Protection. Racketeering. Theft. Mayhem for the sake of it. AI criminals lurking across the Internet and anywhere we use a smart device.

An AI Underworld.

The solution, according to the expert quoted in the New Scientist article, is to produce a cadre of white hat LLMs to preempt the bad actors, or catch them after they’ve committed their crimes.

Think Criminal Minds, The AI Version.

Who knows how bad or rampant such give-and-take might get, but one thing’s for certain: There’ll be lots of money to be made by AI developers trying to protect people, businesses, and institutions from the dangers of their creations.

And that, after all, is the point of why they’re giving us AI in the first place.

AI Is A Religion, Not A Technology

Belief is at the core of our relationship with AI, and it informs our hopes and fears for developing and using it.

We believe that it possesses authority, independence, and reliability that exceeds our meager human capabilities. We believe that it should be a constant source of useful counsel in our daily lives. We believe that any problems that it might present are actually shortcomings in ourselves. There’s no bad AI, just bad people.

We believe that it will improve us. All we have to do is trust and use it.

A religion isn’t founded on churches or clergy, but rather on personal faith. It’s an internal thing, something visceral and impossible to explain yet something that individuals find undeniably true and impossible to ignore. The canonical rules, rituals, and convoluted theological arguments come later. 

Religion starts with belief, and we believe in AI.

So what? 

It means that talking about AI as a technology similar to looms or other efficiency tools is incomplete, if not inaccurate. 

Nobody has to believe in the merits of a wrench in order to use it. Faith isn’t required to see a factory assembly line spit out more widgets. Trust in the capabilities of new washing machines was no longer necessary after the first loads were done.

Engineers who develop such technologies don’t rely as much on belief as they do on the knowledge that X functions will yield Y results. It’s physics first, closely followed by economics. The philosophy stuff comes much later and is more color commentary than play-by-play insight.

Governments know how to regulate behaviors and things. Oversight of technologies (like social media, most recently) is limited by a lack of understanding, not that said tech is somehow unknowable. VCs and markets don’t need to understand new technologies as much as have the guts to bet on how others will value their promises.

AI is different.

We have all assumed that adding AI to our daily lives is a good idea. It is a given, based on faith alone, and therefore the questions we ask of it aren’t if or why but simply what and when. Its overall benefits will always outweigh any hiccups we might experience as individuals, presuming we’re even aware of its actions.

The folks developing AI share a similar belief in its overarching promise to improve our lives and the world (and their bank accounts). The revealed truths of data science are unquestionable, so again, the conversations they have focus narrowly on that journey to ultimate enlightenment. Questions of its purpose yield to specifics of its incremental revelation.

And it turns out there is an aspect to AI that is intrinsically unknowable, as LLMs already answer questions they shouldn’t know how to address, teach themselves to do things without prompts, and even hint at being aware of what they’re doing.

Joke all you want about aged legislators not understanding how to operate their smartphones, but they can’t and never will properly regulate a phenomenon that isn’t just unknowable but will incessantly evolve and adapt.

AI advocates are happy with this outcome, as it allows them to pursue their faith unfettered by empowered doubters. 

We’re missing the point if all we see is a technology that helps us write things or find better deals on shoe prices or, at the other extreme, might fix climate change or annihilate humanity.

AI is something more than just another technology. It’s a belief system that has already changed how we view ourselves and one another, and how we’ll behave moving forward.

Welcome to the flock.

Your New Best Friend Will Be An AI

The world’s leading promoters of AI tech are madly racing to deliver a new generation of smart assistants.

According to Google’s Sundar Pichai as quoted in the Financial Times earlier this month, these intelligent systems, or AI agents, can “show reasoning, planning and memory, are able to ‘think’ multiple steps ahead, and work across software and systems, all to get something done on your behalf.”

They won’t just make Siri and Alexa look like stodgy old robots.

They’ll put your best friend out of a job.

Delivering these AI agents requires no difficult tech breakthrough; consumer adoption requires better understanding and reduced latency so that an AI can not only be super smart but immediately present in any circumstance. The proliferation of high speed Wi-Fi and the incessant training of LLMs like ChatGPT make the question about realizing the tech’s potential not one of if, but when.

The CEO of Google’s DeepMind described that potential in the same Financial Times article:

“At any given moment, we are processing a stream of different sensory information, making sense of it and making decisions. Imagine agents that can see and hear what we do, better understand the context we’re in, and respond quickly in conversation…”

Such AI agents would also better understand you by not only participating in your decision-making but studying how you got to those decisions and then what you did thereafter.

They’d exist solely for your benefit and well-being, serving as loyal and ever-vigilant sidekicks that anticipated your every need, engaged with you on any topic, and supported you in making decisions large and small.

No ulterior motives. No problems of their own, no activities prioritized over yours. No need to sleep, eat, or do anything other than keep an electric charge.

Just full-time collaborators, confidants, and co-conspirators.

They’d make our human best friends look like stodgy old humans.

Only the companies racing to deliver AI agents aren’t really being honest about it. The world they describe isn’t just one of individuals using their own personal AIs, but rather one in which all of us are embedded in intelligent systems.

It’s already the case, since every time we use our smartphones, search for something on the Internet, or swipe a card to buy something or get on a bus, we’re sharing information with systems that learn from that data.

But opening up our lives to an AI that bears witness to our every situation, decision, and behavior sounds like the Holy Grail for interests that want to profit from that data. It’s why the biggest tech companies are spending zillions on it.

It would also drastically change our very definitions of agency and self-hood.

Consider the implications…

Even bluntly dumb technology suggests or nudges us in directions of which we may not be consciously aware. Speed bumps in the road tell us to slow down. Readers in turnstiles tell us to get keycards from our pockets. A fork tells us which end to hold and when to use it.

Technology has intentionality that goes beyond the goals and expectations of its designers, too.

I am uncomfortable leaving my apartment without first checking the weather on my smartphone. Spell check highlights words that it can’t find, thereby moving me to rewrite sentences. Waiting rooms for online sales events require that I find things I can do (that I might not otherwise do) while I leave the connection open.

AI agents “better understanding the context we’re in” will make it harder for us to make a growing list of decisions on our own, even as we may be happy about the convenience of it. It may well encourage us to doubt our own conclusions until we’ve checked with it.

In conversations with other people, it’ll challenge us to separate what ideas or prompts come from us versus our tech tools, or blur the line so we won’t even know the difference. Will people who have better, more expensive AI agents experience better outcomes?

The commercial part makes these questions a lot scarier.

For instance, how will we know that our very personal AI best friends aren’t in some way being influenced by corporate sponsors? Think of all the crap that shows up in Internet search, and now imagine it seamlessly integrated into recommendations delivered in a soothingly friendly voice.

Now think of even your slightest, most seemingly inconsequential decisions getting monetized.

AI agents could become the ultimate influencers.

What about having confidence that your AI best friend will keep your secrets? There may well be settings that let users say “no” to sharing various aspects of their data but the functional promise of the tech requires the opposite. Already, today’s digital assistants train on some level of data collected from all other agents, however “anonymized,” and deliver recommendations based on what they learn.

Further, what happens when they don’t just recommend something but demand it?

It’s not hard to imagine a time when an AI agent could serve as a representative of a business that has an interest in your life. An insurance company that wants you to lose weight could require you to allow your tech to not just make “helpful” suggestions but monitor and report your progress (or lack of it?).

“The low-cal pretzels are a buck cheaper right now,” it could tell you as you were shopping for a snack. Or refuse to add soda pop to your order.

What if your AI agent tells you that your insurance premium has just gone up because you decided to speed? Think about that sort of information being shared with would-be employers or the government.

And then think about your personal interactions with other people, and how being part of an intelligent system could shape them…assessing the veracity of statements or playing out various questions and responses to maximize some desired outcome.

You would never be alone, and nothing you did would be wholly yours.

Like I said, this is the Holy Grail of AI development because embedding us in intelligent systems will integrate our lives into their operation and make our experiences subject to their intentions.

It’ll be sold to us as a harmless convenience. Something that happily sits in our pockets, on our wrists, or in our glasses frames (or wherever).

Your new best friend.

There Aren’t Two Sides To The AI Debate

We have been led into a dead-end debate between two opinions of AI.

On one side, there are the techno-optimists who believe that advancements in AI will usher in a new era of convenience, safety, and prosperity. On the other are Neo-Luddites who think that machines will destroy our livelihoods and then our very existence.

AI will either create a New Eden or usher in the Apocalypse. Magical promise or existential threat.

Only there is no debate between these two extremes. In fact, they don’t exist much beyond the machinations of people who have something to gain from casting the debate in such terms.

AI will create loads of benefits and huge risks, and it will do so by drastically remaking our world.

There is no debate about this fact. But there should be a debate about what we’re going to do about it.

AI is already changing how decisions get made and work gets done. Intelligent machines have been replacing human workers at desks and assembly lines for years. The capacity of LLMs to mimic human awareness and reasoning already challenges our conceptions of our own uniqueness.

And yet our public debate rarely gets past declarations about why these changes are good or bad.

Our public policies focus on doomed attempts to ensure that AIs operate “fairly” in specific instances while the unfairness of their widespread use goes untouched.

Management consultants issue reports on the massive economic potential for using AI that send the stock market into rapturous swoons.

Governments establish committees and task bureaucrats with applying a light touch to AI development so as to not stifle innovation.

Academics blather mostly nonsense about AI, their jobs dependent on the very companies that stand to make money from its use.

AI experts pop up periodically to warn of impending doom as they shrug away their responsibility for it (and raise more cash to ensure it).

Where’s the international debate about where we’re going to get all of that electricity that AIs will consume? Where are the national Manhattan Projects devising ways to preclude or ameliorate the massive job displacements that are central to AI’s success? Why isn’t every organized religion encouraging their leaders and flocks to actively reexamine their spirituality?

The debate about AI should include all of these voices and focus on what we will get, and what we will have to pay, as use of the technology gets more common and consistent.

Those impacts are certain. So, there’s no need to love AI or hate it because there’s no debate.

Every day that passes in which we don’t acknowledge this truth surrenders the conversation to parties who see personal gain in casting it as a debate between two irreconcilable extremes.

And that’s the true existential threat.