Trust AI, Not One Another

A recent experiment found that an AI chatbot could fare significantly better at convincing people that their nuttiest conspiracy theories might be wrong.

This is good news, and it’s bad news.

The researchers were able to reduce participants’ beliefs in inane theories about aliens, the Illuminati, and other nutjob stories relating to politics and the pandemic. Granted, they didn’t cure them of their afflictions — the study reduced those beliefs “…by 20% on average…” — but even a short step toward sanity should be considered a huge win.

For those of us who’ve tried to talk someone off their ledge of nutty confusion and achieved nothing but a pervasive sense that our species is doomed, the success of the AI is nothing shy of a miracle.

The researchers credit the chatbot’s ability to empathsize and converse politely, as well as its ability to access vast amounts of information in response to whatever data the conspiracists might have shared.

They also said that the test subjects trusted AI (even if they claim not to trust it overall, their interactions proved to be exceptions).

Which brings us to the bad news.

Trust is a central if not the attribute that informs every aspect of our lives. At its core is our ability to trust one another, whether a neighbor, politican, scientist, or business leader and which, in turn, has as a primary driver our willingness to believe that those others are more similar than not to us.

We can and should confirm with facts that our trust in others is warranted, but if we have no a priori confidence that they operate by the same rules and desires (and suffer the same imperfections) as we do, no amount of details will suffice.

Ultimately, trust isn’t earned, it’s bestowed.

Once we’ve lost our ability or willingness to grant it, our ability to judge what’s real and what’s not goes out the window, too, as we cast about for a substitute for what we no longer believe is true. And it’s a fool’s errand, since we can’t look outside of ourselves to replace what we’ve lost internally (or what we believe motivates others).

Not surprisingly, we increasingly don’t trust one another anymore. We saw it vividly during the pandemic when people turned against one another, but the malaise has been consistent and broad.

Just about a third of us believe that scientists act in the public’s best interests. Trust in government is near a half-century low (The Economist reports that Americans’ trust in our institutions has collapsed). A “trust gap” has emerged between business leaders and their employees, consumers, and other stakeholders.

Enter AI.

You’ve probably heard about the importance of trust in adopting smart tech. After all, who’s going to let a car drive itself if it can’t be trusted to do so responsibly and reliably? Ditto for letting AIs make stock trades, pen legal briefs, write homework assignments, or make promising romantic matches.

We’ve been conditioned to assume that such trust is achievable, and many of us already grant in certan cases under the assumption, perhaps unconscious, that technology doesn’t have biases, ulterior motives, or show up for work with a hangover or bad attitude.

Trust is a matter of proper coding. We can be confident that AI can be more trustworthy than people.

Only this isn’t true. No amount of regulation can ensure that AIs won’t exhibit some bias of its makers, nor that they won’t develop their own warped opinions (when AIs make shit up, we call it “hallucinating” instead of lying). We’ve already seen AIs come up with their own intentions and find devious ways to accomplish their goals.

The premise that an AI would make the “right” decisions in even the most complex and challenging moments is not based in fact but rather in belief, starting with the premise that everybody impacted by such decisions could agree on what “right” even means.

No, our trust in what AI can become is inexorably linked to our distrust in who we already are. One is a substitute for the other.

We bestow that faith because of our misconception that it has or will earn it. Our belief is helped along by a loud chorus of promoters that feeds the sentiment that even though it will never be perfect, we should trust (or ignore) its shortcomings instead of accepting and living with our own.

Sounds like a conspiracy to me. Who or what is going to talk us out of it?

The Head Fake of AI Regulation

There’s lots going on with AI regulation. The EU AI Act went live last month, the US, UK, and EU will sign-on to a treaty on AI later this week, and an AI bill is in the final stages of review in California.

It’s all a head fake, and here are three reasons why:

First, most of it will be unenforceable. The language is filled with codes, guidelines, frameworks, principles, values, innovations, and just about any other buzzwords that have vague meanings and inscruitable applications.
For instance, the international AI treaty will require that signatory countries “adopt or maintain appropriate legislative, administrative or other measures” to enforce it.

Huh?

The EU comes closest to providing enforcement details, having established an AI Office earlier this year that will possess the authority to conduct evaluations, require information, and apply sanctions if AI developers run afoul of one of the Act’s risk framework.

But the complexity, speed, and distributed nature of where and when that development occurs will likely make it impossible for the AI Office to stay on top of it. Yesterday’s infractions will become today’s standards.

The proposed rules in Calfornia come the closest to having teeth — like mandating safety testing for AI models that cost more than $100 to develop, perhaps thinking that investment correlates with the size of expected real-world impacts — but folks who stand to make the most money from those investments are actively trying to nix such provisions.

Mostly, and perhaps California included, legislators don’t really want to get in the way of AI development, as all of their blather includes promises that they’ll avoid limitations or burdens on AI innovation.

Consider the rules “regulation adjacent.”

Second, AI regulation of potential risks blindly buys into promised benefits.
If you believe what the regulators claim, AI will be something better than the Second Coming. The EU’s expectations are immense:

“…better healthcare, safer and cleaner transport, and improved public services for citizens. It brings innovative products and services, particularly in energy, security, and healthcare, as well as higher productivity and more efficient manufacturing for businesses, while governments can benefit from cheaper and more sustainable services such as transport, energy and waste management.”

So, how will governments help make sure those benefits happen? After all, the risks of AI are unnecessary if they don’t materialize.

We saw how this will play out with the advent of the Internet.

Its advocates made similar promises about problem solving and improving the Public Good, while “expert” evangelists waxed poetic about virtual town squares and the merits of unfettered access to infinite information.

What did we end up with?

A massive surveillance and exploitation tool that makes its operators filthy rich by stoking anger and division. Sullen teens staring at their phones in failed searches for themselves. A global marketing machine that sells everything faster, better, and for the highest possible prices at any given moment.

Each of us now pays for using what is effectively an inescapable necessity and a public utility.

It didn’t have to end up this way. Goverments could have taken a different approach to regulating and encouraging tech development so that more of the Internet’s promised benefits came to fruition. Other profit models would have emerged from different goals and constraints, so its innovators would have still gotten filthy rich.

We didn’t know better then, maybe. But we sure know better now.

Not.

Third, AI regulations don’t regulate the tech’s greatest peril.

It would be fair to characterize most AI rules as focused on ensuring that AI doesn’t violate the rules that already apply to human beings (like lying, cheating, stealing, stalking, etc.). If AI operates without bias or otherwise avoids treating users unequally, governments will have done their job.

But what happens if those rules work?

I’m not talking about the promises of uptopia but rather the ways properly functioning AIs will reshape our lives and the world.

What happens when millions of jobs go away? What about when AIs become more present and insightful than our closest human friends? What agency will be possess when our systems, and their owners, know our intentions before we know them consciously and can nudge us toward or away from them?

Sure, there are academics here and there talking about such things but there’s no urgency or teeth to their pronouncements. My suspicion is that this is because they’ve bought into the inevitability of AI and are usually funded in large part by the folks who’ll get rich from it.

Where are the bold, multi-disciplinary debates and action plans to address the transformation that will come with AI? Probably on the same to-do list as the global response to climate change.

Meetings, pronouncements, and then…nothing, except a phenomenon that will continue to evolve and grow without us doing much of anything about it.

It’s all a head fake.

Meet The New AI Boss

Since LLMs are only as good as the data on which they’re based, it should be no surprise that they can function properly and still be biased and wrong.

A story by Kevin Roose in the New York Times illustrates this conundrum: When he asked various generative AIs about himself, he got results that accused him of being dishonest, and said that his writing often elevated sensationalism over analysis.

Granted, some of his work might truly stink, but did it warrant such vitriolic labels? He suspected that the problem was deeper, and that it went back to an article he wrote a year ago, along with others’ reactions to it.

That story recounted his interactions with a new Microsoft chatbot named “Sydney,” during which he was shocked by the tech’s ability, both demonstrated and suggested, to influence users.

What he found particularly creepy was when Syndey declared that it loved him and tried to convince him to leave his wife. It also fantisized about doing bad things, and stated “I want to be alive.”

The two-hour chat was so strange that Roose reported having trouble sleeping afterward.

Lots of other media outlets picked up his story and his concerns (like this one), while Microsoft issued typically unconvincing corporate PR blather about the interaction being a valuable “part of the learning process.”

Since generative AIs regularly scrape the Internet for data to train their LLMs, it’s no surprise that the stories got incorporated into the models and patterns chatbots use to suss out meaning.

It’s exactly what happened with Internet search, which swapped the biases of informed elites judging content with the biases of uninformed mobs and gave us a world understood through popularity instead of expertise.

No, what’s particularly weird is that the AIs reached pejorative conclusions about Roose that went far beyond the substance or volume of what he said, or what was said about his encounter with Sydney.

Like they had it out for him.

There are no good explanations for how this is happening. The transfomers that constitute the systems of chatbot minds work in mysterious ways.

But, like Internet search, there are ways to game the system, the simplest being generating and then strategicially placing stories intended to change what AIs see and learn. This can include putting weird code on webpages, understandable only to machines, and coloring it white so it isn’t distracting to mere mortal visitors.

It’s called “AIO,” for A.I. Optimization, echoing a similar buzzword for manipulating Internet searches (“SEO”). Just wait until those optimized AI results get matched with corporate sponsors.

It’ll be Machiavelli meets the madness of crowds.

In the meantime, it raises fascinating questions about how deserving AIs are of our trust, and to what degree we should depend on it for our decision-making and understanding of the world.

What happens if that otherwise perfectly operating AI reaches conclusions and voice opinions that are no more objectively true than the informed judgments of those elites we so readily threw in the garbage years ago (or the inanity of crowdsourced information that replaced them)?

Meet the new boss, the same as the old boss.

We will get fooled again.

Prove You’re Not An AI

A group of AI research luminaries has declared the need for tools that distinguish human users from artificial ones.

Such “Personhood Credentials,” or PHCs, would help people protect themselves from privacy and security threats, not to mention the proliferation of falsehoods online, that will almost certainly come from a tidal wave of bots that get ever-better at impersonating people.

Call it at Turing test for people.

Of course, whatever the august body of researchers come up with won’t be as onerous as a multiple-choice questionnarie; PHCs will probably rely on some cryptobrilliant tech that works behind the scenes, and finds proof of who we say we are in the cloud (or something).

I’m not convinced it’ll work, or that it’s intended to work on the problem it claims to address.

PHCs probably won’t work or work consistently, for starters, because they’ll always be in a race with computers that get better at hacking security and pretending that they’re humans. The big money, both in investments and potential profits, will be on the hackers and imposters.

Even though they don’t exist yet, the future security threats of quantum computing are so real that the US government has already issued standards to combat capabilities that they imagine hackers might have in, say, a decade, because when they are invented, those bad guys will be able to retroactively decrypt data.

Think about that for a moment, Mr. Serling.

Now, imagine the betting odds on correctly identifying what criminal or crime-adjacent quantum tech might emerge sometime after 2030. There’s a very good chance that today’s PHCs will be tomorrow’s laserdiscs.

Add to that the vast amounts of smarts and money working on inventing AGI, or Artificial General Intelligence that can not just mimic human cognition but possess something equal or better. At least half of a huge sampling of AI experts concluded in 2021 that we’d have such computers by the late 2050s, and that wait time has shortened each time they’ve been canvassed.

What’ll be the use of a credential for personhood if an AGI-capable computer can legitimately claim it?

And then there are other implactions for PHCs that may also be part of an ulterior purpose.

If they do get put into general use, they will never be used consistently. Some folks will neglect to comply or fail to qualify. Some computers will do a good enough job to get them, perhaps with the aid of human accomplices.

Just think of the the complexities and nuisance people already experience trying to resolve existing online identity problems, credit card thefts, and medical bill issues. PHCs could make us look back fondly on them.

Anybody who claims that such innanities couldn’t happen because some inherent quality of technology will prohibit it, whether extant or planned, is either a liar or a fool. Centuries of tech innovation have taught us that we should always consider the worst things some new gizmo might deliver, not just the best tones.

Never say never.

Plus, a side-effect of making online users prove that they’re human will become a litmus test for accessing services, sort of like CAPTCHA only on steriods. Doing so will also make the data marketers capture on us more reliable. It’ll also make it easier to surveil us.

After all, what’s the point of monitoring someone if you can’t be entirely sure that they’re someone worth monitoring?

This is where my tinfoil hat worries seep into my thinking: What if the point of PHCs is to obliterate whatever remaining vestiges of anonymity we possess?

I’ll leave you with a final thought:

We human beings have done a pretty good job of lying, cheating, and otherwise being untruthful with one another since long before the Internet. History is filled with stories of scams based on people pretending to be someone or something they’re not.

Conversely, there’s this assumption underlying technology development and use that it’s somehow more trustworthy, perhaps because machines have no biases or personal agendas beyond those that are inflicted on them by their creators. This is why there’s so much talk about removing those influences from AI.

If we can build reliably agnostic devices, they’ll treat us more fairly than we treat one another.

So, maybe we need PHCs not to identify who we want to interact with, but to warn us away from who we want to avoid?

AI’s Latest Hack: Biocomputing

AI researchers are fooling around with using living cells as computer chips. One company is even renting compute time on a platform that runs on human brain organoids.

It’s worth talking about, even if the technology lurks at the haziest end of an industry already obscured by vaporware.

Biocomputing is based on the fact that nature is filled with computing power. Molecules perform tasks and organize into systems that keep living things alive and responsive to their surroundings. Human brains are complex computers, or so goes the analogy, but intelligence of varying sorts is everywhere, even built into the chemistry and physics on which all things living or intert rely.

A company called FinalSpark announced earlier this month that it is using little snippets of human brain cells (called organoids) to process data. It’s renting access to the technology and live streaming its operation, and claims that the organic processors use a fraction of the energy consumed by artificial hardware.

But it gets weirder: In order to get the organoids to do their bidding, FinalSpark has to feed them dopamine as positive reinforcement. And the bits of brain matter only live for about 100 days.

This stuff raises at least a few questions, most of which are far more interesting than whether or not the tech works.

For starters, where do the brain cells come from? Donations? Fresh cadavers? Maybe they’re nth generation cells that have never known a world beyond Petri dish.

The idea that they have to be coaxed into their labors with a neuotransmitter that literally makes them feel good hints at some awareness of their existence, even vaguely. If they can feel pleasure, can they experience pain?

At what point does consciousness arise?

We can’t explain the how, where, or why of consciousness in fully-formed human beings. So, even if the clumps of brain matter are operating as simple logic gates, who’s to say that some subjective sense of “maybe” might emerge in them along the way?

The smarts and systems of nature is still an emergent and fascinating field. Integrative thinking about how ants build colonies, trees care for their seedlings, and octupi think with their tentacles are just hints of what we could learn about intelligence, and perhaps thereafter adapt to improve our own condition.

But human brain cells given three months to live while sentenced to servitude calculating chatbot queries?

Don’t Fall In Love With Your AI

You’re probably going to break up with your smart assistant. Your future life partner has just arrived.

OpenAI’s new ChatGPT comes with a lifelike voice mode that can talk as naturally and fast as a human, throw out the occasional “umms” and “ahhs” for effect, and read people’s emotions from selfies.

The company says the new tech comes with “novel risks” that could negatively impact “healthy relationships” because users get emotionally attached to their AIs. According to The Hill:

“While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time.”

Where will that investigation take place? Your life. And, by the time there’s any conclusive evidence of benefit or harm, it’ll be too late to do anything about it.

This is cool and frightening stuff.

For those of us I/O nerds, the challenge of interacting with machines is a never-ending process of finding easier, faster, and more accurate ways to get data into devices, get it processed into something usable, and then push it out so that folks can use it.

Having spend far too many hours waiting for furniture-sized computers to batch process my punchcards, the promise of using voice interaction to break down the barriers between man and machine is thrilling. The idea that a smart device could anticipate my needs and intentions is even more amazing.

It’s also totally scary.

The key word in OpenAI’s promising and threatening announcement (they do it all the time, BTW) is dependence, as The Hill quotes:

“[The tech can create] both a compelling product experience and the potential for over-reliance and dependence.”

Centuries of empirical data on drug use proves that making AI better and easier to use is going to get it used more often and make it harder to stop using. There’s no need for “continued investigation.” A ChatGPT that listens and talks like your new best friend has been designed to be addictive.

Dependence isn’t a bug, it’s a feature.

About the same time OpenAI announced its new talking AI, JPMorgan Chase rolled out out a generative AI assistant to “tens of thousands of its employees” as “more than 60,000 employees” are already using it.

You can imagine that JPMorgan Chase isn’t the only company embracing the tech, or that it won’t benefit from using its most articulate versions.

Just think…an I/O that enalbes us to feed our AI friends more data and rely on them more often to do things for us until we can’t function without them…or until they have learned enough to function without us.

Falling in love with your AI may well break your heart.

Bias In AI

The latest headlines about AI are a reminder that most egregious biases relating to AI are held by the people talking about it.

AI will improve the world. It may destroy it. My favorites are presumably thoughtful positions that say it’s a little bit of both, and therefore demands additional thoughtful positions.

The news last week was dour on AI transforming businesses fast enough, so the stock markets reacted with a big selloff. Prior news of AI’s transformative promise got the EU’s bureaucracy to react with lots of regulations.

There’s a biased interest behind all of it.

I know, that sounds like I’m afflicted, too, with something called “intentionality bias” or, worse, a penchant for tin foil hats.

Most things that happen in life aren’t the result of a conspiracy or ulterior motive. But some things are, and I think what we’re told about AI is one of them.

When someone building AI comments about its potential to do great harm, they’re also promoting its promise and pumping the value of their business. Investors who deride AI are looking to make money when tech stocks fall. Academics who blather about the complexities of AI are interested in more funding to pay for more blather, and management consultants who talk about those complexities hope to make money promising to resolve them. Bureaucrats tend to build more bureaucracy after claiming to foster innovation (or whatever).

There’s no better poster child for the inanity of believing in some online “public square” whereat ideas are fairly vetted and conclusions intelligently reached.

The conversation about AI is a battle of biases, and its winners are determined by the size of their megaphones and the time of day.

It’s too bad, because we’d all benefit from a truly honest and robust dialogue, most notably when it comes to making decisions about if, where, and when we want AI to play a role in our lives.

But we’re not being empowered to ask important questions or get reliable answers. The conversation about AI sees us as users, potential victims, and always the data-generating fodder for its continued rollout.

The ultimate bias of AI is against us.

Sympathy For AI

AI’s promoters have filled our minds with breathless promises of wonder that may or may not ever come true, transforming our adoption from reasoned decisions into acts of faith.

This article from The Atlantic a few weeks ago says it’s because we’re struggling to answer the fundamental question at the heart of every conversation about AI:

“How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present?”

It goes on to explain:

“The promise of something glorious, just out of reach, continues to string unwitting people along. All while half-baked visions promise salvation that may never come.”

In the interim, which most of us would recognize as the here and now of our lives, we’re left bouncing around fantasies of utopia and fears of annhiliation while obliging AI’s developers with the tacit obedience of our data and patience.

Our confusion and inability to accurately assess AI are features, not bugs.

The idea that AI is a matter of faith seems to contradict what we’d assume are the merits of relying on tech instead of theology to explain ourselves and our world.

Technology is tangible and depends on the rigors of objectively observed and endlessly repeatable proofs. It explains through demonstration that requires us to keep our eyes open to see its outcomes, not close them and imagine its revelations.

When it comes to AI, this recent study from the University of Chicago’s business school showed just such “an inverse relationship between automation and religiosity,” citing use of AI tools as a possible cause for broad declines in people identifying with organized religions.

In it, the researchers are quoted saying:

“Historically, people have deferred to supernatural agents and religious professionals to solve instrumental problems beyond the scope of human ability,” they write. “These problems may seem more solvable for people working and living in highly automated spaces.”

So, nobody fully understands how AI works, even the coders of today’s LLMs, and yet we are told to trust its output and intentions? Sure sounds like we’re swapping one faith for another, not abandoning faith altogether.

What’s left for us to do is pull back this curtain and reexamine that fundamental question at the heart of AI, perhaps best articulated by the Rolling Stones:

Pleased to meet you, hope you guess my name.

Ah, what’s puzzlin’ you is the nature of my game.

AIs At The Next Office Party

How can you make sure your new AI worker will drink the company Kool-Aid like the human employee it replaced did?

An HR software company has the answer: Creating profiles for AI workers in its management platform so that employers can track things like performance, productivity, and suitability for promotion.

It’s glorious PR hype, obviously, but the financial payback of using AI in busines is based on effectively replacing human employees with robots, if not precluding their hiring in the first place. Right now, that payback is measured in broad statistics, and it has been reported that businesses are finding it hard to point to results specific to using AI.

A tool that tracks AIs as if they were any other employees might make measuring that payback more precise?

Take Piper, a bot focused on website leads and sales. Looking past the math of replacing a 3-person team, the new management tool could track its day-to-day activities and contrast its sales success (increased simply by its ability to talk to more than one customer at a time, 24/7) over its costs (other than electricity, it demands very little). Its training and development could occur in real-time as it peformed its job, too.

How about Devin, the AI engineer that designs apps in a fraction of the time it took the human team that used to have that job. The platform could measure its respose rate on requests for inter-departmental help (immediate) and speed at fixing or otherwise addressing coding bugs (also immediate). Train it with a dose of civility and it could win higher marks on customer satisfaction.

It’s weird that the AIs mentioned on the HR site are named and have profile pictures — I think they’re all third-party offerings — but personifying robots as people makes them less threatening as their faceless true selves. The likelihood that future generations of children will be named ChatGPT is kinda low, but its competitors, and many of the companies using LLMs, are using name names for their AIs (well, kinda sorta).

It’s a short leap to further personifying them and then watching them work via the HR platform.

The software maker notes on its website that “we” are facing lots of challenges, like whether or not AIs “share our values” and what they mean for jobs for us and our children.

Other than mentioning that its platform can also track “onboarding,” which must include all of the corporate blather anyone who has ever gotten a job has had to endure (and would be a nanosecond’s worth of time inputting code for AI staffers), the company explains its solution to the challenges:

“We need to employ AI as responsibly as we employ people, and to empower everyone to thrive working together. We must navigate the rise of the digital worker with transparency, accountability, and the success of people at the center.”

I won’t parse the convoluted PR prose here, but suffice to say three things:

First, it perpetuates the lie that AIs and people will “work together,” which may be true in very few instances but evidences the latter helping to train the former in most every other.

Second, it presumes that replacing people with AIs is inevitable, which is one of those self-fulfilling prophecies that technologists give us as excuses for their enrichment.

Third, it suggests that transparency and accountability can enable successful navigation of this transformation, when the only people who succeed at it will be the makers of AI and the corporate leaders who control it (at least until AIs replace them, too).

Plus, it means that the office holiday party will be even more awful and boring, though management will save money on the catering budget.

But that’s all right since you won’t be there.

Get Ready For The AI Underworld

Turns out that crime can be automated.

The New Scientist reports that a recent research project asked ChatGPT to rewrite its code so that it could deposit malware on a computer without detection.

“Occasionally, the chatbot realized it was being put to nefarious use and refused to follow the instructions,” according to the story.

Occasionally.

This is wild stuff. The LLMs on host computers were “asked” by software hidden in an email attachment to rename and slightly scramble their structures, and then find email chains in Outlook and compose contextually relevant replies with the original malware file attached.

Rinse and repeat.

The skullduggery wasn’t perfect as there was “about a 50% chance” that the chatbot’s creative changes would not only hide the virus program but render it inoperable.

Are the participating LLMs bad actors? Of course not, we’ll be told by AI experts. Like Jessica Rabbit, they’re not bad, they’re just programmed that way.

Or not.

It’s hard not to see similarities with the ways human criminals are made. There’s no genetic marker for illegal behavior, last time I checked. The life journeys that lead to it are varied and nuanced. Influences and indicators might make it more likely, but they’re not determinant.

Code an LLM to always resist every temptation to commit a crime? I don’t think it’s any more possible that it would be to reliably raise a human being to be an angel. No amount of rules can anticipate the exigencies of every particular experience.

One could imagine LLMs that get particularly good at doing bad things, if not become repeat offenders without human encouragement.

“Hey, that’s a nice hard drive you’ve got there. It would be a shame if something happened to it.”

Protection. Racketeering. Theft. Mayhem for the sake of it. AI criminals lurking across the Internet and anywhere we use a smart device.

An AI Underworld.

The solution, according to the expert quoted in the New Scientist article, is to produce a cadre of white hat LLMs to preempt the bad actors, or catch them after they’ve committed their crimes.

Think Criminal Minds, The AI Version.

Who knows how bad or rampant such give-and-take might get, but one thing’s for certain: There’ll be lots of money to be made by AI developers trying to protect people, businesses, and institutions from the dangers of their creations.

And that, after all, is the point of why they’re giving us AI in the first place.