AI music generation tools face major lawsuits from the recording industry.


The Recording Industry Association of America (RIAA) has just dropped a legal bomb on the world of AI music generation.

The organization has filed a major lawsuit against two prominent AI music generation companies: Suno, Inc. and Uncharted Labs, Inc. (the developer of Udio). These tools use AI to create highly realistic music from simple text prompts, often mimicking the style of specific artists.

The lawsuits, filed on behalf of Universal Music Group, Warner Music Group, and Sony Music Entertainment, allege that the two companies are illegally training their AI models on copyrighted sound recordings at a massive scale.

The lawsuits claim that AI-generated music from Suno and Udio is remarkably similar to copyrighted tracks. In some cases, they allege, these tools reproduce authentic producer tags and generate vocals that are indistinguishable from famous recording artists.

The potential damages? Possibly billions. The RIAA is seeking up to $150,000 per infringing work.

So, are Suno and Udio in the wrong? And what does this mean for the future of AI-generated music?

I got the scoop from Paul Roetzer, founder/CEO of the Marketing AI Institute, on episode 103 of The Artificial Intelligence Show.

The companies aren't hiding what they're doing.

“They're admitting to doing what they're accused of,” Roetzer says. “They're not hiding from the fact that they're doing the things they're being accused of in these cases.”

Udio, for example, released a statement defending its approach:

“Generative AI models, including our music model, learn from examples. Just as students listen to music and study scores, our model has 'heard' and learned from a large collection of recorded music.”

The question is whether what they are doing is actually illegal, or if they believe they can bend existing laws to fit their vision of the future.

How Suno and Udio defend themselves

The companies argue that their goal is to develop an understanding of musical ideas—the basic building blocks of musical expression that no one owns.

“We are completely uninterested in reproducing the material in our training set, and in fact have implemented and continue to improve sophisticated filters to ensure that our model does not reproduce copyrighted works or artists' voices,” Udio claimed in a post on X.

Suno has also issued statements echoing this sentiment.

Ask forgiveness, not permission.

Part of the disconnect here is the Silicon Valley mindset of disruptive innovation.

Roetzer highlighted a post on X by Bilawal Sidhu, a former product manager at Google and host of The TED AI Show. In it, Sidhu broke down the situation, saying:

Otherwise the Silicon Valley rule applies: don't bother asking permission, apologize later. Keep making the case that what you're doing is transformative, and then, as your startup raises more money, strike retroactive licensing deals with the parties whose content you've trained on. Just like OpenAI.

Roetzer notes that this approach will continue until the courts decide definitively whether training generative AI models on copyrighted material is considered fair use.

What happens next?

As we await legal clarification, these lawsuits are likely to continue. Roetzer predicts that until a final decision comes — possibly from the U.S. Supreme Court — AI companies will continue to push the boundaries of copyright law.

“It will continue to happen,” says Roetzer. “These AI startups will continue to take everything available on the internet. They will claim fair use and transformative purposes. The people who own this content will sue them for doing so.”

