Ad giant Ogilvy wants its industry to label “all AI-generated influencer content” to preserve the channel’s “trust and authenticity.”
A year ago it announced it would no longer work with influencers who digitally enhance their images or content.
Ogilvy wants to make sure all the lying it promotes comes from real human beings.
I just don’t get it.
First off, advertisers have relied on images that have been retouched or otherwise manipulated for over a century (the airbrush was invented in 1893). There was nothing trustworthy or authentic about early ads; in fact, they were often overtly stylized and unreal.
Idealized images of women dominated ads in the 1950s, whether their pictures were mechanically enhanced or the models simply starved themselves.
The manipulation extends to faking objects shown in ads, using glue and motor oil as stand-ins for cheese and syrup. Vance Packard alleged that advertisers added subliminal images, like inserting erotic shots into the shadows of ice cubes (ice being another thing that had to be faked since it melted under photo studio lights).
None of the ads disclosed the charade.
Second, what about the false promises that ads make?
The lies are often dressed up as slogans we’re supposed to see through: Esso doesn’t really put a tiger in your tank, and Red Bull doesn’t give its drinkers “wiiings.” Other lies are implicit, like when good-looking models tout the benefits of anti-aging cosmetics or skinny millennials gather to chug beer.
An ad can be technically accurate and still imply inaccurate conclusions, like the endorsement of cigarettes by doctors and gum by dentists.
No watermark noting the chicanery required.
And when it comes to using well-known people to influence us, celebrities have always shilled for products, most often alcohol or cigarettes. “Don’t hate me because I’m beautiful” was one of my favorites.
Why aren’t social media influencers already required to wear a big badge saying they’ve been hired by marketers and get paid when people follow their recommendations?
Oh yeah, because they’re human, and that’s just what we do.
How is it that lies generated by AI are worse?