Generative AI has the power to cause great harm to musicians’ revenues, says the musician whose computer-generated double performs every night in London and then sends him revenue checks.

The musician is Björn Ulvaeus, a founding member of the pop quartet Abba. He is 79 years old and relaxes comfortably on his private island outside Stockholm while his 30-ish avatar performs in a purpose-built concert venue, thanks to a specially created technology called ABBAtar.

He was quoted in the Financial Times reacting to a study that projected musicians might lose a fifth of their revenue to AI, primarily because the tech will get ever better at mimicking their work. Abba participated in a lawsuit last year against two AI startups that produced songs that sounded eerily like the originals (“Prancing Queen” was cited as one example that probably didn’t prompt a revenue check for Ulvaeus).

The guy is otherwise very bullish on AI, saying it represents the “biggest revolution” ever seen in music, and that it could take artists in “unexpected directions.”

There’s a ton to unpack here, but I’ll just focus on two issues:

First, he’s all for AI if it is obedient and its operators are faithful to the letter and spirit of the law.

Good luck with that.

Regulations and enforcement actions have been announced or are in development in hopes of policing AI use, focused primarily on protecting privacy and prohibiting bias. This blather is too little, too late: No amount of bureaucratic oversight can ensure that even the most rudimentary LLM is coded, used, or trained according to any set of rules.

It’s like teaching a roomful of young kids the difference between right and wrong and then expecting them to follow those rules as adults, an expectation that amounts to little more than make-work for prisons and police departments.

Similarly, AI makers will just get richer trying and failing to create tools to fulfill government’s misplaced hopes of policing their creations.

When it comes to musicians and the copyright on their songs, the very premise of copyrighting a pattern of musical notes is unsettled law. In a sense, every song incorporates chords and/or snippets of melodies that have been used before, and we haven’t needed AI to push the hazy limits of this issue up to now.

Consider this: You find a present-day LLM on the Internet that has been trained on popular music and the tenets of music theory. The model runs on some server hidden behind layers of geographic and virtual screens, so the cops can’t shut it down. And then you ask it to produce a playlist of “new” Beatles songs, not to sell but simply for your personal enjoyment. Daily, your playlists are filled with songs from your favorite artists that you’ve never heard before.

It won’t just cut into musicians’ revenue. It’ll replace it.

Second, the idea that AI and human musicians can somehow forge partnerships that take music in “unexpected directions” ignores the fundamental premise of AI:

It only moves in directions that have already been taken and therefore is wholly expected.

Current LLMs don’t invent new ideas; rather, they aggregate existing ones and synthesize them into the most likely answers to questions posed to them (within the guidelines set by their programmers).

An AI in the recording studio isn’t an equal collaborating partner but rather an advanced filing system.

So, maybe it could cite how many times a particular chord or transition had been used before, or suggest lyrics that might work with a melody, but it wouldn’t be a composer.

A human composer might use the “wrong” chord for all of the “wrong” but otherwise “right” reasons.  

This raises loads of intriguing questions about the role of technology in the arts more generally, like whether it simply exerts a normalizing influence on the extremes of artistic endeavor.

Art created with the assistance of tech tends to look and sound like other art created with the assistance of tech. Throw in the insights tech can provide on potential audience reaction and you get less of an artistic process than a production system.

Will the advent of AI in music unleash human expression or squash it?

We’ll never know, because that genie is already out of the bottle: its less intelligent cousins are already mapping song structures, generating sounds, playing percussion, and correcting pitch whenever human singers dare to add something.

As AI gets smarter – and that’s inevitable, even if we can quibble about timing – I fear that music will get dumber. Even more fantastically, what happens when performing avatars decide to generate their own versions of popular songs?

Next thing you know, those AIs will want the revenue checks.
