AI Deepfakes Are A Distinction Without A Difference

The European Union has demanded that online content faked by AI be labeled as part of its fight against “AI generated disinformation.”

Hell yeah. I want my lies and half-truths produced by real people. Hand-crafted disinformation. With hands.

Technology has been misleading us for a long time.

Story length and placement in newspapers was dictated in large part by the technologies of printing and distribution. Headlines were conceived as analog clickbait, often with a tenuous connection to the truth.

Radio allowed broadcasters to fake baseball sounds as they narrated games they were reading about on a telegraph tape. It let Orson Welles recreate all of the sounds of an on-the-spot radio newscast describing a Martian invasion. The uncontrolled laughter on comedy shows wasn’t real.

Television (and video more broadly) has always been a “funnel” technology, showing us what’s within the frame its cameras can capture, and nothing else. We might see a crowd gathered right in front of a stage in an otherwise empty venue. A snippet of an angry encounter between two people will lack the context of what immediately preceded it or followed.

Visuals are immediate but they are incomplete, which often leads to misunderstanding. It turns out a picture requires a thousand words rather than replacing them.

Computers have been delivering fantasies to us for decades.

Spoiler Alert: Leo wasn’t really standing on the deck of the Titanic and the planet Pandora doesn’t exist. Most pop stars can’t sing with perfect pitch, and Internet influencers don’t have sparkly eyes and perfect skin.

It’s all fake, at least somewhat, thanks to the creative and corrective tools of technology. 

The EU isn’t talking about labelling all of these computer-generated falsehoods, of course, so its actions raise the question: what is it really trying to fix?

Because when it comes to producing misinformation, we human beings put technology in general, and AI in particular, to shame.

Very little of what we hear from politicians is wholly true. There’s always a spin or selective omission. Businesses report what they’re required to disclose to regulators while conveniently staying mum on what’s not, or claim to be saving the planet from [insert global catastrophe here]. 

Brand marketers make promises that beer or makeup will make us attractive and slow the passage of time.

Where’s the labelling for all of this misinformation?

Sadly, it’s not necessary, because any reasonable consumer of media already assumes that nothing we hear or see is entirely true.

Faked content is just another form of false content, and we already bathe in the latter.

Deepfakes of President Biden joining the Shining Path or Paul and Ringo reuniting with dead bandmates for a new album would be no worse or more convincing than what is already searchable online.

Are we more uncomfortable with AI producing it than humans?

There’s another insight into the EU’s thinking: a government task force has also decided that customer service chatbots must be clearly identified as AI.

Why? How many of us have interacted with human beings who acted just like robots?

Will the EU require them to behave differently? What if the chatbot can act with more empathy and operational latitude than some minimum-wage human gig worker?

Labelling something as originating from “AI” kinda insists on making a distinction without a difference. And it’s not even entirely true, since there’s likely a human being (with an agenda) behind the content.

We should be thankful that the EU even cares about trying to regulate AI, especially since its developers have outsourced responsibility for keeping us safe from the existential risk of their creation to, well, anybody other than themselves.

The US Congress is grossly uninformed on the topic, though it’s rumored that it will soon introduce its initial thinking on regulating laserdiscs.

But the real threat of AI is how it is changing business, culture, and us.

How about a label that reveals how many human beings were put out of work by AI? Here’s product X that comes with so-and-so number of people rendered obsolete. Where’s the financial reporting disclosure on that impact?

Why not insist on disclosure of energy use of AI, so we could decide if we wanted to buy something that required an extra ton of carbon to get spewed into the atmosphere instead of simply buying a human coder a lunch now and then?

Where’s the testing affirmation of mission-critical AI? Robots already do most of the work flying airplanes, but only after intensive training and testing (and they still fail, as evidenced by the MCAS automation that downed two Boeing 737 MAXs).

How are governments certifying the safety of AI inserted into cars or our homes? What about ensuring that they’re not biased when assessing health care claims, deciding college admissions, or auditing tax returns?

Again, we’re used to people doing this work imperfectly, but maybe the point should be to figure out how to make AI an improvement on the status quo?

And where’s the massive, world-wide, shockingly deep and responsible research on how AI will impact our sense of personal identity and well-being? We’re only now seeing the effects of what social media has done (and is doing) to us, and most of it isn’t good (or reversible, but technologists have outsourced responsibility for dealing with that consequence to parents).

There’s so much that governments could do to better frame and oversee what’s happening, but trying to catch deep fakes or forcing AI to reveal itself should be pretty low on the list.

The Internet is already filled with crap and lies. 

Marketer, Meet Your AI Replacement

A recent survey of marketers found that over half of them are currently using generative AI, yet few of them realize that it’s going to put them out of work.

The research, sponsored by Salesforce, revealed that marketers estimated AI would save them over five hours per work week, which adds up to at least a month every year.

Since there were 1,029 respondents to the survey, the savings they cited add up to the equivalent of 85 full-time employees, or nearly one in ten of them.
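The survey’s arithmetic is easy to sanity-check. A minimal back-of-envelope sketch (the 40-hour workweek here is my assumption, not something the survey states):

```python
# Back-of-envelope check of the survey figures (a sketch; the 40-hour
# workweek is an assumption, not something the survey states).
HOURS_SAVED_PER_WEEK = 5      # reported savings per marketer
RESPONDENTS = 1029            # survey sample size
WEEKS_PER_YEAR = 52
WORKWEEK_HOURS = 40           # assumed standard workweek

# Per-marketer annual savings: 5 * 52 = 260 hours, about 6.5 standard
# workweeks -- consistent with "at least a month every year".
annual_hours = HOURS_SAVED_PER_WEEK * WEEKS_PER_YEAR
annual_weeks = annual_hours / WORKWEEK_HOURS

# Aggregate weekly savings across all respondents, expressed as
# full-time equivalents under the 40-hour assumption.
total_weekly_hours = HOURS_SAVED_PER_WEEK * RESPONDENTS   # 5,145 hours
fte = total_weekly_hours / WORKWEEK_HOURS                 # ~128.6

print(f"{annual_hours} hours/year ~ {annual_weeks} workweeks")
print(f"{total_weekly_hours} hours/week ~ {fte:.0f} FTEs")
```

Note that a 40-hour week actually yields roughly 129 FTEs, not the 85 cited; the reported figure implies a longer assumed workweek (5,145 / 85 ≈ 60 hours). Treat the press-release math with the same skepticism the rest of the announcement deserves.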

Their primary concern? “Accuracy & Quality.” Two-thirds of them say that their employers aren’t providing enough quality data or AI training for them to fully exploit the technology that will render them unemployed.

Of course, this research is part of a propaganda campaign from Salesforce, which has been selling a sales and marketing automation platform for many years (and quite successfully). Its announcement of the survey findings is filled with blather from its executives touting the transformative potential of AI and, without naming it directly, the importance of its offering.

As for the details of the marketers’ use of AI, they’re pretty much applying it to the content they’re paid to produce. They call it “busy work,” oddly, and then go on to say that they think AI will someday soon “transform the way they analyze data, personalize messaging content, and build marketing campaigns,” among other benefits.

This will allow them to “focus on more strategic work,” whatever that means. There’s a reference that many of them think AI lacks “human creativity and contextual knowledge” and will require human oversight, so maybe they think that they’ll get new managerial jobs to watch robots do their old ones.

But that’s just wishful thinking doing the talking.

We don’t know who these marketers are, and any survey is limited to getting answers to the questions it asks. It’s intriguing that most of them are eager to give AI a greater share of their workloads.

But this isn’t research, it’s sales promotion. And if the marketers who responded to the survey are as clueless as they appear, they probably deserve to get replaced by robots.

AI As Frankenstein’s Monster

Google’s former CEO Eric Schmidt has joined Sam Altman, Elon Musk, Geoff Hinton, and a host of other lesser-known experts in sounding the alarm on ‘existential risk’ from AI.

They follow in the footsteps of Victor Frankenstein, the artificial life pioneer who was shocked when he saw the ugliness of his creation over 200 years ago: 

“Now that I had finished, the beauty of the dream vanished, and breathless horror and disgust filled my heart.”

Like his counterparts today, Frankenstein had set out to do something cool, something intellectually challenging. He was in love with the potential for discovery and his own capacity for doing it:

“None but those who have experienced them can conceive of the enticements of science…but in a scientific pursuit there is continual food for discovery and wonder.”

Once he realized that he could create AI, he had a very brief crisis of confidence: 

“I doubted at first whether I should attempt the creation of a being like myself, or one of simpler organization; but my imagination was too much exalted by my first success to permit me to doubt of my ability to give life to an animal as complex and wonderful as man…I doubted not that I should ultimately succeed.”

Convinced he was a genius, he decided to unleash his creation on the world in a Victorian-era open source experiment:

“I prepared myself for a multitude of reverses; my operations might be incessantly baffled, and at last my work be imperfect, yet when I considered the improvement which every day takes place in science and mechanics…” 

The parallels between Mary Shelley’s fictional AI inventor and the real ones today are shocking and illustrative.

All of them suffer from hubris and each believes that they are somehow smarter or luckier than everyone else. Or maybe just special, generally speaking.

They try and fail to fix the problems they create with more tech. Frankenstein tries to mollify his creation by building a second creature as its bride but then backs off because he doesn’t want to create a race of super AI. Relying on today’s AI to somehow police itself is equally doomed.

As that approach fails, all of them default to regulation, whether via angry villagers armed with torches or Congressional hearings.

And, throughout it all, they somehow believe that they’re blameless, and that any negative or catastrophic effects of their creations are not their responsibility.

This is because they mistakenly believe that intentions can be ethical even if the outcomes aren’t. While there’s serious philosophical debate over this question, most AI innovators equate ignorance with innocence. 

“Not caring” or “not understanding” is not the same as being “not responsible.”

As I’ve said before, if they’d created COVID and unleashed it on the world, they’d all be in jail.

But since AI can be used for entertainment and companies can fire human workers and use it to answer customer queries, among other profit-making endeavors, any bad outcomes are bugs, not features.

At the end of Shelley’s novel, it’s Frankenstein’s creation that’s overcome with sorrow, not its creator. The human being gets rescued and his AI banishes itself from human society and is never seen or heard from again.

I don’t think today’s AI will be so magnanimous.