Get Ready For The AI Underworld

Turns out that crime can be automated.

The New Scientist reports that a recent research project asked ChatGPT to rewrite its code so that it could deposit malware on a computer without detection.

“Occasionally, the chatbot realized it was being put to nefarious use and refused to follow the instructions,” according to the story.

Occasionally.

This is wild stuff. The LLMs on host computers were “asked” by software hidden in an email attachment to rename the malware and slightly scramble its structure, and then find email chains in Outlook and compose contextually relevant replies with the original malware file attached.

Rinse and repeat.

The skullduggery wasn’t perfect: there was “about a 50% chance” that the chatbot’s creative changes, while hiding the virus program, would also render it inoperable.

Are the participating LLMs bad actors? Of course not, we’ll be told by AI experts. Like Jessica Rabbit, they’re not bad, they’re just programmed that way.

Or not.

It’s hard not to see similarities with the ways human criminals are made. There’s no genetic marker for illegal behavior, last time I checked. The life journeys that lead to it are varied and nuanced. Influences and indicators might make it more likely, but they’re not determinant.

Code an LLM to always resist every temptation to commit a crime? I don’t think that’s any more possible than it would be to reliably raise a human being to be an angel. No amount of rules can anticipate the exigencies of every particular experience.

One could imagine LLMs that get particularly good at doing bad things, perhaps even becoming repeat offenders without human encouragement.

“Hey, that’s a nice hard drive you’ve got there. It would be a shame if something happened to it.”

Protection. Racketeering. Theft. Mayhem for the sake of it. AI criminals lurking across the Internet and anywhere we use a smart device.

An AI Underworld.

The solution, according to the expert quoted in the New Scientist article, is to produce a cadre of white hat LLMs to preempt the bad actors, or catch them after they’ve committed their crimes.

Think Criminal Minds, The AI Version.

Who knows how bad or rampant such give-and-take might get, but one thing’s for certain: There’ll be lots of money to be made by AI developers trying to protect people, businesses, and institutions from the dangers of their creations.

And that, after all, is the point of why they’re giving us AI in the first place.

The AI Revolution Is A Boiling Frog

I think we’ll experience the rollout of AI as a similar set of implementations at work and in our homes. The media and VC-fueled startups might still talk about step changes, like AGI, but that won’t change the substance, speed, or inevitability of the underlying process.


AI Is A Religion, Not A Technology

Belief is at the core of our relationship with AI, and it informs our hopes and fears for developing and using it.

We believe that it possesses authority, independence, and reliability that exceeds our meager human capabilities. We believe that it should be a constant source of useful counsel in our daily lives. We believe that any problems that it might present are actually shortcomings in ourselves. There’s no bad AI, just bad people.

We believe that it will improve us. All we have to do is trust and use it.

A religion isn’t founded on churches or clergy, but rather on personal faith. It’s an internal thing, something visceral and impossible to explain yet something that individuals find undeniably true and impossible to ignore. The canonical rules, rituals, and convoluted theological arguments come later. 

Religion starts with belief, and we believe in AI.

So what? 

It means that talking about AI as a technology similar to looms or other efficiency tools is incomplete, if not inaccurate. 

Nobody has to believe in the merits of a wrench in order to use it. Faith isn’t required to see a factory assembly line spit out more widgets. Trust in the capabilities of new washing machines was no longer necessary after the first loads were done.

Engineers who develop such technologies don’t rely as much on belief as they do on the knowledge that X functions will yield Y results. It’s physics first, closely followed by economics. The philosophy stuff comes much later and is more color commentary than play-by-play insight.

Governments know how to regulate behaviors and things. Oversight of technologies (like social media, most recently) is limited by a lack of understanding, not because the tech is somehow unknowable. VCs and markets don’t need to understand new technologies as much as have the guts to bet on how others will value their promises.

AI is different.

We have all assumed that adding AI to our daily lives is a good idea. It is a given, based on faith alone, and therefore the questions we ask of it aren’t if or why but simply what and when. Its overall benefits will always supersede any hiccups we might experience as individuals, presuming we’re even aware of its actions.

The folks developing AI share a similar belief in its overarching promise to improve our lives and the world (and their bank accounts). The revealed truths of data science are unquestionable, so again, the conversations they have focus narrowly on that journey to ultimate enlightenment. Questions of its purpose yield to specifics of its incremental revelation.

And it turns out there is an aspect of AI that is intrinsically unknowable, as LLMs already answer questions they shouldn’t know how to address, teach themselves to do things without prompts, and even hint at being aware of what they’re doing.

Joke all you want about aged legislators not understanding how to operate their smartphones, but they can’t, and never will, properly regulate a phenomenon that isn’t just unknowable but incessantly evolves and adapts.

AI advocates are happy with this outcome, as it allows them to pursue their faith unfettered by empowered doubters. 

We’re missing the point if all we see is a technology that helps us write things or find better deals on shoe prices or, at the other extreme, might fix climate change or annihilate humanity.

AI is something more than just another technology. It’s a belief system that has already changed how we view ourselves and one another, and how we’ll behave moving forward.

Welcome to the flock.

Your New Best Friend Will Be An AI

The world’s leading promoters of AI tech are madly racing to deliver a new generation of smart assistants.

According to Google’s Sundar Pichai as quoted in the Financial Times earlier this month, these intelligent systems, or AI agents, can “show reasoning, planning and memory, are able to ‘think’ multiple steps ahead, and work across software and systems, all to get something done on your behalf.”

They won’t just make Siri and Alexa look like stodgy old robots.

They’ll put your best friend out of a job.

Delivering these AI agents requires no difficult tech breakthrough; consumer adoption requires better understanding and reduced latency, so that an AI can be not only super smart but immediately present in any circumstance. The proliferation of high-speed Wi-Fi and the incessant training of LLMs like ChatGPT make the question of realizing the tech’s potential not one of if, but when.

The CEO of Google’s DeepMind described that potential in the same Financial Times article:

“At any given moment, we are processing a stream of different sensory information, making sense of it and making decisions. Imagine agents that can see and hear what we do, better understand the context we’re in, and respond quickly in conversation…”

Such AI agents would also better understand you by not only participating in your decision-making but studying how you got to those decisions and then what you did thereafter.

They’d exist solely for your benefit and well-being, serving as loyal and ever-vigilant sidekicks that anticipated your every need, engaged with you on any topic, and supported you in making decisions large and small.

No ulterior motives. No problems of their own, no activities prioritized over yours. No need to sleep, eat, or do anything other than keep an electric charge.

Just full-time collaborators, confidants, and co-conspirators.

They’d make our human best friends look like stodgy old humans.

Only the companies racing to deliver AI agents aren’t really being honest about it. The world they describe isn’t just one of individuals using their own personal AIs, but rather one in which all of us are embedded in intelligent systems.

It’s already the case, since every time we use our smartphones, search for something on the Internet, or swipe a card to buy something or get on a bus, we’re sharing information with systems that learn from that data.

But opening up our lives to an AI that bears witness to our every situation, decision, and behavior sounds like the Holy Grail for interests that want to profit from that data. It’s why the biggest tech companies are spending zillions on it.

It would also drastically change our very definitions of agency and self-hood.

Consider the implications…

Even bluntly dumb technology suggests or nudges us in directions of which we may not be consciously aware. Speed bumps in the road tell us to slow down. Readers in turnstiles tell us to get keycards from our pockets. A fork tells us which end to hold and when to use it.

Technology has intentionality that goes beyond the goals and expectations of its designers, too.

I am uncomfortable leaving my apartment without first checking the weather on my smartphone. Spell check highlights words that it can’t find, thereby moving me to rewrite sentences. Waiting rooms for online sales events require that I find things I can do (that I might not otherwise do) while I leave the connection open.

AI agents “better understanding the context we’re in” will make it harder for us to make a growing list of decisions on our own, even as we may be happy about the convenience of it. It may well encourage us to doubt our own conclusions until we’ve checked with it.

In conversations with other people, it’ll challenge us to separate what ideas or prompts come from us versus our tech tools, or blur the line so we won’t even know the difference. Will people who have better, more expensive AI agents experience better outcomes?

The commercial part makes these questions a lot scarier.

For instance, how will we know that our very personal AI best friends aren’t in some way being influenced by corporate sponsors? Think of all the crap that shows up in Internet search, and now imagine it seamlessly integrated into recommendations delivered in a soothingly friendly voice.

Now think of even your slightest, most seemingly inconsequential decisions getting monetized.

AI agents could become the ultimate influencers.

What about having confidence that your AI best friend will keep your secrets? There may well be settings that let users say “no” to sharing various aspects of their data but the functional promise of the tech requires the opposite. Already, today’s digital assistants train on some level of data collected from all other agents, however “anonymized,” and deliver recommendations based on what they learn.

Further, what happens when they don’t just recommend something but demand it?

It’s not hard to imagine a time when an AI agent could serve as a representative of a business that has an interest in your life. An insurance company that wants you to lose weight could require you to allow your tech to not just make “helpful” suggestions but monitor and report your progress (or lack thereof).

“The low-cal pretzels are a buck cheaper right now,” it could tell you as you were shopping for a snack. Or refuse to add soda pop to your order.

What if your AI agent tells you that your insurance rate has just gone up because you decided to speed? Think about that sort of information being shared with would-be employers or the government.

And then think about your interactions with other people, and how being part of an intelligent system could inform them…assessing the veracity of statements, or gaming out various questions and responses to maximize some desired outcome.

You would never be alone, and nothing you did would be wholly yours.

Like I said, this is the Holy Grail of AI development, because embedding us in intelligent systems will integrate our lives into their operation and subordinate our experiences to their intentions.

It’ll be sold to us as a harmless convenience. Something that happily sits in our pockets, on our wrists, or in our glasses frames (or wherever).

Your new best friend.

Apple Ad Pushback Suggests Deeper Unease?

A flurry of statements on social media and a round of articles followed by a corporate mea culpa that will be soon forgotten along with the ad itself. A few hours of attention in another week of AI’s relentless progress. But maybe it touched a nerve.
