What RIAA's lawsuit against Udio and Suno means for AI and copyright


Udio and Suno, despite their names, aren't the hottest new restaurants on the Lower East Side. They're AI startups that let people create impressively real-sounding songs, complete with instrumentation and vocal performances. And on Monday, a group of major record labels filed lawsuits alleging copyright infringement on an "almost unimaginable scale," claiming the companies could only pull this off because they illegally used massive amounts of copyrighted music to train their AI models.

Both lawsuits add to a growing pile of legal headaches for the AI industry. Some of the most successful firms in the space have trained their models with data obtained by scraping large amounts of information from the internet without authorization. ChatGPT, for example, was initially trained on millions of documents collected from links posted on Reddit.

The lawsuits, led by the Recording Industry Association of America (RIAA), deal with music rather than the written word. But like The New York Times' lawsuit against OpenAI, they raise a question that could reshape the tech landscape as we know it: Can AI firms take whatever they want, turn it into a multibillion-dollar product, and claim it's fair use?

"This is a key issue to address, because it cuts across all kinds of different industries," said Paul Fackler, a partner at law firm Mayer Brown who specializes in intellectual property matters.

What are Udio and Suno?

Both Udio and Suno are fairly new, but they've already made a splash. Suno launched in December, built by a Cambridge-based team that previously worked for another AI company, Kensho. It quickly partnered with Microsoft to integrate Suno with Copilot, Microsoft's AI chatbot.

Udio launched just this year, having raised millions of dollars from heavyweights in the tech investment world (Andreessen Horowitz) and the music world (Will.i.am and Common, for example). Udio's platform was used by comedian King Willonius to create "BBL Drizzy," a Drake diss track that went viral after producer Metro Boomin remixed it and released it for anyone to rap over.

Why is the music industry suing Udio and Suno?

The RIAA lawsuits use lofty language, saying the litigation is "about ensuring that copyright continues to inspire human invention and imagination, as it has for centuries." That sounds good, but ultimately, the incentive it's talking about is money.

The RIAA claims that generative AI threatens the business model of record labels. "Instead of licensing copyrighted sound recordings, potential licensees interested in licensing such recordings for their own purposes could produce an AI-soundalike at virtually no cost," the lawsuit states. It says such services "[flood] the market with 'copycats' and 'soundalikes,'" thus undermining an established licensing business.

The RIAA is also seeking $150,000 in damages per infringed work, a potentially astronomical number given the vast troves of data typically used to train AI systems.
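To see why that figure could balloon, here's a rough back-of-envelope calculation. The catalog size below is a hypothetical illustration, not a number from the lawsuits:

```python
# Statutory damages can reach $150,000 per willfully infringed work.
# If a training set contained, say, one million copyrighted recordings
# (an illustrative figure), the maximum exposure multiplies out fast.

PER_WORK_MAX = 150_000          # statutory maximum per infringed work, in USD
hypothetical_works = 1_000_000  # made-up training-set size, not from the suits

max_exposure = PER_WORK_MAX * hypothetical_works
print(f"${max_exposure:,}")  # → $150,000,000,000
```

Even at a fraction of the statutory maximum, damages scaled per work could reach into the billions, which is why the size of the training set matters so much here.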

Does it matter if AI-generated songs sound similar to real songs?

The RIAA's lawsuits include examples of music created with Suno and Udio, comparing their musical notation to existing copyrighted works. In some cases, the generated songs contained similar short phrases. One track began with the sung line "Jason Derulo" in exactly the way the real-life Jason Derulo opens many of his songs. Others extended the similarities further, as with a track inspired by Green Day's "American Idiot."


This might seem pretty damning, but the RIAA isn't claiming that these particular tracks infringe copyright. Rather, it's claiming that the AI companies used copyrighted music as part of their training data.

Neither Suno nor Udio has made its training datasets public. And both firms are vague about the sources of their training data, though that's par for the course in the AI industry. (OpenAI, for example, has deflected questions about whether YouTube videos were used to train its Sora video model.)

The RIAA lawsuits note that Udio CEO David Ding has said the company trains on the "best quality" music that is "publicly available," and that one of Suno's co-founders noted on the company's official Discord that it trains on a mix of "proprietary and public data."

Fackler said including the examples and notation comparisons in the lawsuits went "beyond" what would be necessary to assert a legal basis for litigation. For one thing, the labels may not own the composition rights to the songs allegedly consumed by Udio and Suno for training. Rather, they own the copyrights to the sound recordings, so showing similarity in musical notation doesn't necessarily help their copyright claim. "I think it's really designed for optics, for PR purposes," Fackler said.

On top of that, Fackler noted, it's legal to make soundalike recordings as long as you license the rights to the underlying song.

When reached for comment, a spokesperson for Suno shared a statement from CEO Mikey Shulman saying its technology is "transformative" and that the company doesn't allow prompts that reference existing artists. Udio did not respond to a request for comment.

Is this fair use?

Even if Udio and Suno did use the record labels' copyrighted works to train their models, there's one big question that could override everything else: Is it fair use?

Fair use is a legal defense that allows the use of copyrighted material in the creation of a meaningfully new or transformative work. The RIAA argues that the startups cannot claim fair use, saying that Udio and Suno's outputs are intended to replace real recordings, that they are generated for a commercial purpose, that the copying was not selective but wholesale, and that the outputs pose a direct threat to the labels' business.

In Fackler's opinion, the startups have a solid fair use argument as long as the copyrighted works were only temporarily copied, with their defining characteristics extracted and abstracted into the weights of an AI model.
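The intuition behind that argument can be sketched with a toy example. This is not any real training pipeline, and the "features" here (average pitch, note count) are invented for illustration; the point is only that the raw recordings are read transiently while the surviving artifact holds aggregate weights rather than copies:

```python
# Toy illustration of "copy temporarily, keep only abstractions":
# raw "recordings" are read once to extract abstract statistics,
# then discarded. The remaining "model" holds only averaged weights,
# not the original works.

def train(recordings):
    # Extract a couple of abstract characteristics per recording,
    # e.g. average pitch and number of notes.
    features = [(sum(notes) / len(notes), len(notes)) for notes in recordings]
    n = len(features)
    # The model keeps only aggregates over the whole corpus.
    return {
        "avg_pitch": sum(f[0] for f in features) / n,
        "avg_length": sum(f[1] for f in features) / n,
    }

corpus = [[60, 62, 64], [65, 67], [57, 59, 61, 62]]  # made-up note lists
model = train(corpus)
del corpus  # the copies were transient; only the weights survive
print(model)
```

Whether a real generative model's weights are genuinely this abstracted, or whether they effectively memorize training examples, is exactly the kind of question discovery could probe.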


"That's how computers work: it has to make these copies, and the computer is then analyzing all that data to extract copyright-free material," he said. "How do we make songs that are perceived as music by the listener, with the different characteristics that we normally find in popular music? It's pulling all of those things out, just like a musician learns these things by playing music."

"In my mind, that's a very strong fair use argument," Fackler said.

Of course, a judge or jury may not agree. And what turns up in the discovery process, should the litigation get that far, could have a big impact on the case. Which music tracks were ingested and how they ended up in the training set matter, and details about the training process could undermine a fair use defense.

We're all in for a long ride as the RIAA lawsuits and similar cases wind their way through the courts. From text and images to, now, sound recordings, the question of fair use hangs over all of these cases and the AI industry as a whole.

