Meet The New AI Boss

Since LLMs are only as good as the data on which they’re based, it should be no surprise that they can function properly and still be biased and wrong.

A story by Kevin Roose in the New York Times illustrates this conundrum: When he asked various generative AIs about himself, he got results that accused him of being dishonest, and said that his writing often elevated sensationalism over analysis.

Granted, some of his work might truly stink, but did it warrant such vitriolic labels? He suspected that the problem was deeper, and that it went back to an article he wrote a year ago, along with others’ reactions to it.

That story recounted his interactions with a new Microsoft chatbot named “Sydney,” during which he was shocked by the tech’s ability, both demonstrated and suggested, to influence users.

What he found particularly creepy was when Syndey declared that it loved him and tried to convince him to leave his wife. It also fantisized about doing bad things, and stated “I want to be alive.”

The two-hour chat was so strange that Roose reported having trouble sleeping afterward.

Lots of other media outlets picked up his story and his concerns (like this one), while Microsoft issued typically unconvincing corporate PR blather about the interaction being a valuable “part of the learning process.”

Since generative AIs regularly scrape the Internet for data to train their LLMs, it’s no surprise that the stories got incorporated into the models and patterns chatbots use to suss out meaning.

It’s exactly what happened with Internet search, which swapped the biases of informed elites judging content with the biases of uninformed mobs and gave us a world understood through popularity instead of expertise.

No, what’s particularly weird is that the AIs reached pejorative conclusions about Roose that went far beyond the substance or volume of what he said, or what was said about his encounter with Sydney.

Like they had it out for him.

There are no good explanations for how this is happening. The transfomers that constitute the systems of chatbot minds work in mysterious ways.

But, like Internet search, there are ways to game the system, the simplest being generating and then strategicially placing stories intended to change what AIs see and learn. This can include putting weird code on webpages, understandable only to machines, and coloring it white so it isn’t distracting to mere mortal visitors.

It’s called “AIO,” for A.I. Optimization, echoing a similar buzzword for manipulating Internet searches (“SEO”). Just wait until those optimized AI results get matched with corporate sponsors.

It’ll be Machiavelli meets the madness of crowds.

In the meantime, it raises fascinating questions about how deserving AIs are of our trust, and to what degree we should depend on it for our decision-making and understanding of the world.

What happens if that otherwise perfectly operating AI reaches conclusions and voice opinions that are no more objectively true than the informed judgments of those elites we so readily threw in the garbage years ago (or the inanity of crowdsourced information that replaced them)?

Meet the new boss, the same as the old boss.

We will get fooled again.

Prove You’re Not An AI

A group of AI research luminaries has declared the need for tools that distinguish human users from artificial ones.

Such “Personhood Crentials,” or PHCs, would help people protect themselves from privacy and security threats, not to mention the proliferation of falsehoods online, that will almost certainly come from a tidal wave of bots that get ever-better at impersonating people.

Call it at Turing test for people.

Of course, whatever the august body of researchers come up with won’t be as onerous as a multiple-choice questionnarie; PHCs will probably rely on some cryptobrilliant tech that works behind the scenes, and finds proof of who we say we are in the cloud (or something).

I’m not convinced it’ll work, or that it’s intended to work on the problem it claims to address.

PHCs probably won’t work or work consistently, for starters, because they’ll always be in a race with computers that get better at hacking security and pretending that they’re humans. The big money, both in investments and potential profits, will be on the hackers and imposters.

Even though they don’t exist yet, the future security threats of quantum computing are so real that the US government has already issued standards to combat capabilities that they imagine hackers might have in, say, a decade, because when they are invented, those bad guys will be able to retroactively decrypt data.

Think about that for a moment, Mr. Serling.

Now, imagine the betting odds on correctly identifying what criminal or crime-adjacent quantum tech might emerge sometime after 2030. There’s a very good chance that today’s PHCs will be tomorrow’s laserdiscs.

Add to that the vast amounts of smarts and money working on inventing AGI, or Artificial General Intelligence that can not just mimic human cognition but possess something equal or better. At least half of a huge sampling of AI experts concluded in 2021 that we’d have such computers by the late 2050s, and that wait time has shortened each time they’ve been canvassed.

What’ll be the use of a credential for personhood if an AGI-capable computer can legitimately claim it?

And then there are other implactions for PHCs that may also be part of an ulterior purpose.

If they do get put into general use, they will never be used consistently. Some folks will neglect to comply or fail to qualify. Some computers will do a good enough job to get them, perhaps with the aid of human accomplices.

Just think of the the complexities and nuisance people already experience trying to resolve existing online identity problems, credit card thefts, and medical bill issues. PHCs could make us look back fondly on them.

Anybody who claims that such innanities couldn’t happen because some inherent quality of technology will prohibit it, whether extant or planned, is either a liar or a fool. Centuries of tech innovation have taught us that we should always consider the worst things some new gizmo might deliver, not just the best tones.

Never say never.

Plus, a side-effect of making online users prove that they’re human will become a litmus test for accessing services, sort of like CAPTCHA only on steriods. Doing so will also make the data marketers capture on us more reliable. It’ll also make it easier to surveil us.

After all, what’s the point of monitoring someone if you can’t be entirely sure that they’re someone worth monitoring?

This is where my tinfoil hat worries seep into my thinking: What if the point of PHCs is to obliterate whatever remaining vestiges of anonymity we possess?

I’ll leave you with a final thought:

We human beings have done a pretty good job of lying, cheating, and otherwise being untruthful with one another since long before the Internet. History is filled with stories of scams based on people pretending to be someone or something they’re not.

Conversely, there’s this assumption underlying technology development and use that it’s somehow more trustworthy, perhaps because machines have no biases or personal agendas beyond those that are inflicted on them by their creators. This is why there’s so much talk about removing those influences from AI.

If we can build reliably agnostic devices, they’ll treat us more fairly than we treat one another.

So, maybe we need PHCs not to identify who we want to interact with, but to warn us away from who we want to avoid?

AI’s Latest Hack: Biocomputing

AI researchers are fooling around with using living cells as computer chips. One company is even renting compute time on a platform that runs on human brain organoids.

It’s worth talking about, even if the technology lurks at the haziest end of an industry already obscured by vaporware.

Biocomputing is based on the fact that nature is filled with computing power. Molecules perform tasks and organize into systems that keep living things alive and responsive to their surroundings. Human brains are complex computers, or so goes the analogy, but intelligence of varying sorts is everywhere, even built into the chemistry and physics on which all things living or intert rely.

A company called FinalSpark announced earlier this month that it is using little snippets of human brain cells (called organoids) to process data. It’s renting access to the technology and live streaming its operation, and claims that the organic processors use a fraction of the energy consumed by artificial hardware.

But it gets weirder: In order to get the organoids to do their bidding, FinalSpark has to feed them dopamine as positive reinforcement. And the bits of brain matter only live for about 100 days.

This stuff raises at least a few questions, most of which are far more interesting than whether or not the tech works.

For starters, where do the brain cells come from? Donations? Fresh cadavers? Maybe they’re nth generation cells that have never known a world beyond Petri dish.

The idea that they have to be coaxed into their labors with a neuotransmitter that literally makes them feel good hints at some awareness of their existence, even vaguely. If they can feel pleasure, can they experience pain?

At what point does consciousness arise?

We can’t explain the how, where, or why of consciousness in fully-formed human beings. So, even if the clumps of brain matter are operating as simple logic gates, who’s to say that some subjective sense of “maybe” might emerge in them along the way?

The smarts and systems of nature is still an emergent and fascinating field. Integrative thinking about how ants build colonies, trees care for their seedlings, and octupi think with their tentacles are just hints of what we could learn about intelligence, and perhaps thereafter adapt to improve our own condition.

But human brain cells given three months to live while sentenced to servitude calculating chatbot queries?

Don’t Fall In Love With Your AI

You’re probably going to break up with your smart assistant. Your future life partner has just arrived.

OpenAI’s new ChatGPT comes with a lifelike voice mode that can talk as naturally and fast as a human, throw out the occasional “umms” and “ahhs” for effect, and read people’s emotions from selfies.

The company says the new tech comes with “novel risks” that could negatively impact “healthy relationships” because users get emotionally attached to their AIs. According to The Hill:

“While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time.”

Where will that investigation take place? Your life. And, by the time there’s any conclusive evidence of benefit or harm, it’ll be too late to do anything about it.

This is cool and frightening stuff.

For those of us I/O nerds, the challenge of interacting with machines is a never-ending process of finding easier, faster, and more accurate ways to get data into devices, get it processed into something usable, and then push it out so that folks can use it.

Having spend far too many hours waiting for furniture-sized computers to batch process my punchcards, the promise of using voice interaction to break down the barriers between man and machine is thrilling. The idea that a smart device could anticipate my needs and intentions is even more amazing.

It’s also totally scary.

The key word in OpenAI’s promising and threatening announcement (they do it all the time, BTW) is dependence, as The Hill quotes:

“[The tech can create] both a compelling product experience and the potential for over-reliance and dependence.”

Centuries of empirical data on drug use proves that making AI better and easier to use is going to get it used more often and make it harder to stop using. There’s no need for “continued investigation.” A ChatGPT that listens and talks like your new best friend has been designed to be addictive.

Dependence isn’t a bug, it’s a feature.

About the same time OpenAI announced its new talking AI, JPMorgan Chase rolled out out a generative AI assistant to “tens of thousands of its employees” as “more than 60,000 employees” are already using it.

You can imagine that JPMorgan Chase isn’t the only company embracing the tech, or that it won’t benefit from using its most articulate versions.

Just think…an I/O that enalbes us to feed our AI friends more data and rely on them more often to do things for us until we can’t function without them…or until they have learned enough to function without us.

Falling in love with your AI may well break your heart.

Bias In AI

The latest headlines about AI are a reminder that most egregious biases relating to AI are held by the people talking about it.

AI will improve the world. It may destroy it. My favorites are presumably thoughtful positions that say it’s a little bit of both, and therefore demands additional thoughtful positions.

The news last week was dour on AI transforming businesses fast enough, so the stock markets reacted with a big selloff. Prior news of AI’s transformative promise got the EU’s bureaucracy to react with lots of regulations.

There’s a biased interest behind all of it.

I know, that sounds like I’m afflicted, too, with something called “intentionality bias” or, worse, a penchant for tin foil hats.

Most things that happen in life aren’t the result of a conspiracy or ulterior motive. But some things are, and I think what we’re told about AI is one of them.

When someone building AI comments about its potential to do great harm, they’re also promoting its promise and pumping the value of their business. Investors who deride AI are looking to make money when tech stocks fall. Academics who blather about the complexities of AI are interested in more funding to pay for more blather, and management consultants who talk about those complexities hope to make money promising to resolve them. Bureaucrats tend to build more bureaucracy after claiming to foster innovation (or whatever).

There’s no better poster child for the inanity of believing in some online “public square” whereat ideas are fairly vetted and conclusions intelligently reached.

The conversation about AI is a battle of biases, and its winners are determined by the size of their megaphones and the time of day.

It’s too bad, because we’d all benefit from a truly honest and robust dialogue, most notably when it comes to making decisions about if, where, and when we want AI to play a role in our lives.

But we’re not being empowered to ask important questions or get reliable answers. The conversation about AI sees us as users, potential victims, and always the data-generating fodder for its continued rollout.

The ultimate bias of AI is against us.

Sympathy For AI

AI’s promoters have filled our minds with breathless promises of wonder that may or may not ever come true, transforming our adoption from reasoned decisions into acts of faith.

This article from The Atlantic a few weeks ago says it’s because we’re struggling to answer the fundamental question at the heart of every conversation about AI:

“How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present?”

It goes on to explain:

“The promise of something glorious, just out of reach, continues to string unwitting people along. All while half-baked visions promise salvation that may never come.”

In the interim, which most of us would recognize as the here and now of our lives, we’re left bouncing around fantasies of utopia and fears of annhiliation while obliging AI’s developers with the tacit obedience of our data and patience.

Our confusion and inability to accurately assess AI are features, not bugs.

The idea that AI is a matter of faith seems to contradict what we’d assume are the merits of relying on tech instead of theology to explain ourselves and our world.

Technology is tangible and depends on the rigors of objectively observed and endlessly repeatable proofs. It explains through demonstration that requires us to keep our eyes open to see its outcomes, not close them and imagine its revelations.

When it comes to AI, this recent study from the University of Chicago’s business school showed just such “an inverse relationship between automation and religiosity,” citing use of AI tools as a possible cause for broad declines in people identifying with organized religions.

In it, the researchers are quoted saying:

“Historically, people have deferred to supernatural agents and religious professionals to solve instrumental problems beyond the scope of human ability,” they write. “These problems may seem more solvable for people working and living in highly automated spaces.”

So, nobody fully understands how AI works, even the coders of today’s LLMs, and yet we are told to trust its output and intentions? Sure sounds like we’re swapping one faith for another, not abandoning faith altogether.

What’s left for us to do is pull back this curtain and reexamine that fundamental question at the heart of AI, perhaps best articulated by the Rolling Stones:

Pleased to meet you, hope you guess my name.

Ah, what’s puzzlin’ you is the nature of my game.

AIs At The Next Office Party

How can you make sure your new AI worker will drink the company Kool-Aid like the human employee it replaced did?

An HR software company has the answer: Creating profiles for AI workers in its management platform so that employers can track things like performance, productivity, and suitability for promotion.

It’s glorious PR hype, obviously, but the financial payback of using AI in busines is based on effectively replacing human employees with robots, if not precluding their hiring in the first place. Right now, that payback is measured in broad statistics, and it has been reported that businesses are finding it hard to point to results specific to using AI.

A tool that tracks AIs as if they were any other employees might make measuring that payback more precise?

Take Piper, a bot focused on website leads and sales. Looking past the math of replacing a 3-person team, the new management tool could track its day-to-day activities and contrast its sales success (increased simply by its ability to talk to more than one customer at a time, 24/7) over its costs (other than electricity, it demands very little). Its training and development could occur in real-time as it peformed its job, too.

How about Devin, the AI engineer that designs apps in a fraction of the time it took the human team that used to have that job. The platform could measure its respose rate on requests for inter-departmental help (immediate) and speed at fixing or otherwise addressing coding bugs (also immediate). Train it with a dose of civility and it could win higher marks on customer satisfaction.

It’s weird that the AIs mentioned on the HR site are named and have profile pictures — I think they’re all third-party offerings — but personifying robots as people makes them less threatening as their faceless true selves. The likelihood that future generations of children will be named ChatGPT is kinda low, but its competitors, and many of the companies using LLMs, are using name names for their AIs (well, kinda sorta).

It’s a short leap to further personifying them and then watching them work via the HR platform.

The software maker notes on its website that “we” are facing lots of challenges, like whether or not AIs “share our values” and what they mean for jobs for us and our children.

Other than mentioning that its platform can also track “onboarding,” which must include all of the corporate blather anyone who has ever gotten a job has had to endure (and would be a nanosecond’s worth of time inputting code for AI staffers), the company explains its solution to the challenges:

“We need to employ AI as responsibly as we employ people, and to empower everyone to thrive working together. We must navigate the rise of the digital worker with transparency, accountability, and the success of people at the center.”

I won’t parse the convoluted PR prose here, but suffice to say three things:

First, it perpetuates the lie that AIs and people will “work together,” which may be true in very few instances but evidences the latter helping to train the former in most every other.

Second, it presumes that replacing people with AIs is inevitable, which is one of those self-fulfilling prophecies that technologists give us as excuses for their enrichment.

Third, it suggests that transparency and accountability can enable successful navigation of this transformation, when the only people who succeed at it will be the makers of AI and the corporate leaders who control it (at least until AIs replace them, too).

Plus, it means that the office holiday party will be even more awful and boring, though management will save money on the catering budget.

But that’s all right since you won’t be there.

Get Ready For The AI Underworld

Turns out that crime can be automated.

The New Scientist reports that a recent research project asked ChatGPT to rewrite its code so that it could deposit malware on a computer without detection.

“Occasionally, the chatbot realized it was being put to nefarious use and refused to follow the instructions,” according to the story.

Occasionally.

This is wild stuff. The LLMs on host computers were “asked” by software hidden in an email attachment to rename and slightly scramble their structures, and then find email chains in Outlook and compose contextually relevant replies with the original malware file attached.

Rinse and repeat.

The skullduggery wasn’t perfect as there was “about a 50% chance” that the chatbot’s creative changes would not only hide the virus program but render it inoperable.

Are the participating LLMs bad actors? Of course not, we’ll be told by AI experts. Like Jessica Rabbit, they’re not bad, they’re just programmed that way.

Or not.

It’s hard not to see similarities with the ways human criminals are made. There’s no genetic marker for illegal behavior, last time I checked. The life journeys that lead to it are varied and nuanced. Influences and indicators might make it more likely, but they’re not determinant.

Code an LLM to always resist every temptation to commit a crime? I don’t think it’s any more possible that it would be to reliably raise a human being to be an angel. No amount of rules can anticipate the exigencies of every particular experience.

One could imagine LLMs that get particularly good at doing bad things, if not become repeat offenders without human encouragement.

“Hey, that’s a nice hard drive you’ve got there. It would be a shame if something happened to it.”

Protection. Racketeering. Theft. Mayhem for the sake of it. AI criminals lurking across the Internet and anywhere we use a smart device.

An AI Underworld.

The solution, according to the expert quoted in the New Scientist article, is to produce a cadre of white hat LLMs to preempt the bad actors, or catch them after they’ve committed their crimes.

Think Criminal Minds, The AI Version.

Who knows how bad or rampant such give-and-take might get, but one thing’s for certain: There’ll be lots of money to be made by AI developers trying to protect people, businesses, and institutions from the dangers of their creations.

And that, after all, is the point of why they’re giving us AI in the first place.

The AI Revolution Is A Boiling Frog

I think we’ll experience the roll out of AI as a similar set of implementations at work and in our homes. The media and VC-fueled startups might still talk about step-changes, like AGI, but that won’t change the substance, speed, or inevitability of the underlying process.

Continue reading

AI Is A Religion, Not A Technology

Belief is at the core of our relationship with AI, and it informs our hopes and fears for developing and using it.

We believe that it possesses authority, independence, and reliability that exceeds our meager human capabilities. We believe that it should be a constant source of useful counsel in our daily lives. We believe that any problems that it might present are actually shortcomings in ourselves. There’s no bad AI, just bad people.

We believe that it will improve us. All we have to do is trust and use it.

A religion isn’t founded on churches or clergy, but rather on personal faith. It’s an internal thing, something visceral and impossible to explain yet something that individuals find undeniably true and impossible to ignore. The canonical rules, rituals, and convoluted theological arguments come later. 

Religion starts with belief, and we believe in AI.

So what? 

It means that talking about AI as a technology similar to looms or other efficiency tools is incomplete, if not inaccurate. 

Nobody has to believe in the merits of a wrench in order to use it. Faith isn’t required to see a factory assembly line spit out more widgets. Trust in the capabilities of new washing machines was no longer necessary after the first loads were done.

Engineers who develop such technologies don’t rely as much on belief as they do in the knowledge that X functions will yield Y results. It’s physics first, closely followed by economics. The philosophy stuff comes much later and is more color commentary than play-by-play insight.

Governments know how to regulate behaviors and things. Oversight of technologies (like social media, most recently) is limited by a lack of understanding, not that said tech is somehow unknowable. VCs and markets don’t need to understand new technologies as much as have the guts to bet on how others will value its promises. 

AI is different.

We have all assumed that adding AI to our daily lives is a good idea. It is a given, based on faith alone, and therefore the questions we ask of it aren’t if or why but simply what and when. Its overall benefits will always supersede any hiccups we might experience as individuals, presuming we’re even aware of its actions.

The folks developing AI share a similar belief in its overarching promise to improve our lives and the world (and their bank accounts). The revealed truths of data science are unquestionable, so again, the conversations they have focus narrowly on that journey to ultimate enlightenment. Questions of its purpose yield to specifics of its incremental revelation.

And it turns out there is an aspect to AI that is intrinsically unknowable, as LLMs already answer questions they shouldn’t know how to address, teach themselves to do things without prompts, and even give hint to being aware of what they’re doing.

Joke all you want about aged legislators not understanding how to operate their smartphones, but they can’t and will never properly know how to regulate a phenomenon that isn’t just unknowable but will incessantly evolve and adapt.

AI advocates are happy with this outcome, as it allows them to pursue their faith unfettered by empowered doubters. 

We’re missing the point if all we see is a technology that helps us write things or find better deals on shoe prices or, at the other extreme, might fix climate change or annihilate humanity.

AI is something more than just another technology. It’s a belief system that has already changed how we view ourselves and one another, and how we’ll behave moving forward.

Welcome to the flock.