Runway trains on YouTube videos without permission

AI video generation company Runway found itself in hot water after a leak of internal spreadsheets revealed the company was using YouTube videos for AI training.

The document, accessed by 404 Media, shows that Runway trained its new Gen-3 model by scraping thousands of videos without permission from a variety of sources, including popular YouTube creators, brands, and even pirated movies.

The revelation has sparked a heated debate about the ethics and legality of collecting AI training data.

Runway, a major player in the AI video space with a $1.5 billion valuation and backing from tech giants like Google and NVIDIA, is not alone in this practice. Recent reports have implicated other tech leaders, including Apple, Salesforce, and Anthropic, in similar unauthorized use of YouTube videos for AI training.

What does this mean for the responsible use and adoption of AI?

I got answers from Paul Roetzer, founder and CEO of the Marketing AI Institute, on Episode 107 of The Artificial Intelligence Show.

The uncomfortable truth about AI training

Roetzer says this situation is an "uncomfortable part" of AI development. While users love tools like Runway, the methods used to create them raise serious ethical concerns.

“It's not even an ugly thing. It's table stakes. Everyone's doing it.”

The process, as detailed in the leaked document, involves searching for specific types of content on platforms like YouTube, then using those copyrighted videos to train AI models.

This allows AI to mimic very specific styles or techniques, sometimes even imitating individual content creators.

"This is how it works. These models have to learn something, so they learn from human creativity," says Roetzer.

Legal gray areas

The legal landscape around these methods is murky at best.

Sharon Toerek, an IP attorney, weighed in on one of Roetzer's LinkedIn posts, writing that big AI companies are betting they'll be too big to fail by the time a high court rules on the legality of their training methods:

"'Big AI' is betting that they'll be too big to fail by the time a high court rules that using copyrighted works to train their models is illegal, if one ever does.

To me, the lack of transparency about their training methods is telling. Of course, they are privately owned entities protecting their assets and shareholders. And slowing down is not a practical option for them.

My view is that they are betting on IP owners eventually having to enter into licensing agreements with them to protect their works and make money."

As the AI industry grapples with these ethical and legal challenges, a few possible outcomes emerge:

  1. Legal showdown: A well-resourced organization may decide to take the AI industry to court, potentially reshaping the landscape.
  2. Licensing deals: Big content creators and AI companies could strike licensing deals, similar to how the music industry adapted to streaming.
  3. Status quo: Current practices may continue unchallenged, with smaller creators left without any potential compensation.
  4. Regulatory intervention: Governments could step in to create new frameworks for the use of AI training data.

This messy state of affairs may lead to some uncomfortable solutions.

"We all know it's unethical," says Roetzer. "We all know it's probably illegal. But at the end of the day, it may be a better business decision for these big companies to just accept that it's done, strike a deal, and try to make money in the process, rather than trying to sue them and prove damages."
