YouTube has added an AI-generated content labeling tool.


Today, YouTube announced a way for creators to disclose when their videos contain AI-generated or synthetic content.

The checkbox appears during the upload and posting process and requires creators to disclose “altered or artificial” content that appears realistic. This includes things like making a real person appear to say or do something they didn’t do, altering footage of real events and places, or showing a “realistic-looking scene” that never actually happened. Examples YouTube offers include depicting a fake storm moving toward a real town or using a deepfake voice to make it sound like a real person narrated a video.

On the other hand, disclosures won’t be required for things like beauty filters, special effects like background blur, and “obviously unrealistic content” like animation.

YouTube’s AI-generated and synthetic content label requires creators to be honest.
Image: YouTube

In November, YouTube detailed its AI-generated content policy, essentially creating two tiers of rules: stricter rules that protect music labels and artists, and looser guidelines for everyone else. Deepfaked music, like a track that makes Drake appear to sing Ice Spice’s lyrics or rap a song written by someone else, can be taken down at the request of an artist’s label if they object. As part of those rules, YouTube said creators would be required to disclose AI-generated content, but it hadn’t outlined how they would do that until now. And if you’re an average person who’s been deepfaked on YouTube, getting the video removed can be much harder: you’ll have to fill out a privacy request form that the company will review. YouTube didn’t offer much detail about that process in today’s update, saying only that it is “continuing to work on updated privacy practices.”

Like other platforms that have introduced AI content labels, YouTube’s feature relies on an honor system: creators must be honest about what appears in their videos. YouTube spokesperson Jack Malone previously said the company is investing in tools to detect AI-generated content, although AI detection software has historically been highly inaccurate.

In its blog post today, YouTube said it may add an AI disclosure label to videos even if the uploader hasn’t done so themselves, “especially if the altered or artificial content is likely to confuse or mislead people.” Videos about sensitive topics like health, elections, and finance will also get more prominent labels displayed on the video itself.

