Amid the rise of misinformation on social media sites, YouTube, the world's largest video-sharing platform, said it is stepping up its efforts to tackle the problem using artificial intelligence.
The company is targeting misinformation that spreads through various channels, including deepfake videos that use AI techniques such as face manipulation and photorealistic image generation to create misleading content.
This problem is exacerbated not only by the rapid development of technology, but also by recent regional events, including Israel's continued attack on Gaza.
YouTube said it is also addressing other types of misinformation, such as misleading thumbnails, misleading titles, selective editing, false claims and content reproduced from unrelated events.
Tariq Amin, YouTube's regional director for the Middle East, Africa and Turkey, said cases of misinformation on the platform are on the rise: the company removed 117,065 videos globally for violating its misinformation policies in the first quarter of 2024, 67 percent more than in the same quarter last year.
Mr. Amin explained that the company defines misinformation as “deceptive content with a serious risk of serious harm.”
“During breaking news events and crises, what happens in the world also happens on YouTube … That's why stopping the spread of misinformation is one of our deepest commitments in the region,” he told The National.
Israel's war on Gaza, for example, has been the subject of numerous instances of misinformation spread on platforms such as TikTok, Elon Musk-owned X, Instagram, WhatsApp and YouTube.
In a deepfake video that circulated on X on October 28, supermodel Bella Hadid appeared to apologize for her past remarks supporting Palestinian rights and to express support for Israel.
But the original footage was from a 2016 speech Ms Hadid gave about her battle with Lyme disease. According to AFP Fact Check, the deepfake altered the visuals and audio to make it appear she was criticizing Palestine.
In the same month, AI-altered footage from the video game Arma 3 was uploaded to various platforms and falsely labeled as real footage of the conflict, misleading viewers and adding to tension and unrest.
Last month, a video circulating on social media, shared more than 4,000 times on X, falsely claimed that scenes of injured people in Rafah, Gaza, were being staged by actors.
The disinformation campaign included AI-generated and repurposed content taken from behind-the-scenes footage of a Palestinian drama series filmed in the occupied West Bank. Although the platforms removed the manipulated content, it was widely shared and viewed millions of times, likely adding to the anger and tension surrounding the war.
Earlier, during the Covid-19 pandemic, videos surfaced on various platforms falsely claiming that drinking bleach could cure the virus. They were quickly removed by YouTube to prevent misinformation and protect users.
Mr. Amin said misinformation can cause real-world harm, such as promoting harmful treatments or therapies, certain types of technologically manipulated content, or content that interferes with the democratic process.
Old war, new weapons
While the spread of misinformation via social media platforms has been around for decades, it has been amplified exponentially by automated systems and generative AI technologies.
Industry experts and officials say the platforms have failed to adequately address misinformation, leading to rumours, mistrust and real-world harm.
In October, the EU's European Commissioner for the Internal Market, Thierry Breton, sent a letter to Alphabet chief executive Sundar Pichai urging him to stop the spread of misinformation about Israel and Gaza on YouTube.
This came after X owner Elon Musk, TikTok chief executive Shou Zi Chew and Meta chief executive Mark Zuckerberg were sent letters with a 24-hour deadline to stop the spread of misinformation.
In its response to the European Commission, X chief executive Linda Yaccarino said the platform had removed “tens of thousands of pieces of content” to reduce misinformation about the war.
The platforms have also been condemned by social media users and human rights activists for suppressing regional voices.
In 2021, Instagram and Facebook faced backlash for suppressing Palestinian content during protests in Sheikh Jarrah, East Jerusalem, when posts and accounts highlighting the situation were inexplicably removed or banned, New York-based non-profit organization Human Rights Watch revealed in a report.
During the same period, TikTok users reported that their videos and pro-Palestine hashtags were being removed for no apparent reason, raising concerns about biased content moderation, another New York-based civil society organization, Access Now, found.
YouTube told The National it is impartially enforcing its guidelines to prevent the spread of disinformation about the Israel-Gaza conflict and to ensure that legitimate opinions and viewpoints are not suppressed.
Mr Amin said the company does not remove content simply for discussing a particular topic or sharing a point of view, a distinction that is particularly sensitive in matters such as the Israel-Gaza conflict.
“We take seriously our responsibility to surface authoritative news sources, especially during times of war and conflict … The nature of crises means that there is content that is violent or graphic, which is against our policies. However, we allow content that has educational, documentary and scientific value … such as news content,” he explained.
But there are also guidelines for news content, he added, such as blurring graphic injuries when necessary and age-restricting content that is not suitable for all viewers.
AI Defense
Mr Amin said YouTube is using a number of AI tools, including generative AI, to moderate content and minimize the spread of misinformation. This is done through a combination of machine learning and human judgment, with more than 20,000 reviewers employed worldwide.
In its back-end systems, AI classifiers – digital tools trained to sort multimedia data into predefined classes or labels – help detect potentially violative content, with human reviewers then verifying whether it crosses policy lines, such as promoting violence, hate speech or medical misinformation.
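As an illustration of how such a flag-then-verify pipeline can be structured, the Python sketch below routes content by classifier confidence. The thresholds, labels and function names are hypothetical, invented for the example; YouTube's actual system is not public.

# Hypothetical flag-then-verify triage: a classifier scores content and
# only confident or borderline cases are routed for action or human review.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    transcript: str

def classifier_score(video: Video) -> float:
    """Stand-in for a trained model: returns the probability of a policy violation."""
    flagged_terms = {"miracle cure", "staged attack"}
    return 0.9 if any(term in video.transcript.lower() for term in flagged_terms) else 0.1

def triage(video: Video, remove_threshold: float = 0.95, review_threshold: float = 0.5) -> str:
    score = classifier_score(video)
    if score >= remove_threshold:
        return "auto-remove"   # high-confidence policy violation
    if score >= review_threshold:
        return "human-review"  # borderline case: queue for a human reviewer
    return "allow"

print(triage(Video("abc123", "This miracle cure works in days")))  # human-review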
Mr. Amin said a key area of impact is identifying new forms of abuse and misinformation.
“When new threats emerge, the system initially lacks the context to recognize them at scale. However, generative AI allows rapid expansion of the data sets used to train our AI classifiers, allowing such material to be identified quickly,” he explained.
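That dataset-expansion step might look something like the toy sketch below, where a handful of confirmed examples of a new misinformation narrative is inflated with generated variants before retraining a classifier. The template-based generate_variants function is a trivial stand-in for a real generative model.

# Sketch of expanding a small seed set of a newly identified misinformation
# narrative into a larger labelled training set via generated variants.
import random

SEED_EXAMPLES = [
    "actors are staging injuries in this footage",
    "this video game clip shows real combat",
]

PREFIXES = ["breaking:", "confirmed:", "they don't want you to see this:"]
SUFFIXES = ["share before it's deleted", "the media is hiding this"]

def generate_variants(example: str, n: int = 3) -> list[str]:
    """Toy stand-in for generative paraphrasing or augmentation."""
    return [f"{random.choice(PREFIXES)} {example} {random.choice(SUFFIXES)}"
            for _ in range(n)]

training_set = [(text, "violative") for seed in SEED_EXAMPLES
                for text in [seed, *generate_variants(seed)]]
print(len(training_set), "labelled examples from", len(SEED_EXAMPLES), "seeds")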
In the quarter ending March 31, more than 96 percent of the 8.2 million videos YouTube removed were first flagged by its automated AI-powered systems, Mr. Amin said.
Some of the AI tools used by companies such as YouTube are built with machine learning frameworks like TensorFlow and PyTorch to create deep learning models capable of analyzing a wide range of video content. They also use natural language processing tools to analyze and understand the context of video transcripts and comments.
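As a minimal illustration of the kind of model such frameworks express, the PyTorch toy below scores a video transcript as benign or policy-violating. The vocabulary, labels and architecture are invented for the example and bear no relation to any production system.

# Toy transcript classifier in PyTorch: averaged word embeddings + linear head.
import torch
import torch.nn as nn

VOCAB = {"<unk>": 0, "miracle": 1, "cure": 2, "bleach": 3, "news": 4, "report": 5}

def encode(text: str) -> torch.Tensor:
    # Map each word to a vocabulary index, defaulting to <unk>.
    return torch.tensor([VOCAB.get(w, 0) for w in text.lower().split()], dtype=torch.long)

class TranscriptClassifier(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 16, num_classes: int = 2):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)  # mean-pools word embeddings
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens: torch.Tensor, offsets: torch.Tensor) -> torch.Tensor:
        return self.fc(self.embedding(tokens, offsets))

model = TranscriptClassifier(len(VOCAB))
tokens = encode("bleach miracle cure")
offsets = torch.tensor([0])  # a single transcript in the batch
probs = torch.softmax(model(tokens, offsets), dim=-1)
print(probs)  # untrained scores for [benign, policy-violating]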
“The focus of the conversation is what we need to do about generative AI rather than what we can do with it. We need defensive AI to catch the malicious ones,” Sam Blatis, chief executive of The Mena Catalysts, told The National.
Analysts suggest that YouTube ensure that its distribution algorithms do not promote misinformation.
“Historically, distribution algorithms have been driven by what drives response and interest, which is essentially the attention economy … [that] poses a significant risk,” Tim Gordon, co-founder and partner of UK-based Best Practices AI, told The National.
“However, there are opportunities to improve this [the algorithms] with AI … we can use AI at scale to analyze YouTube videos, understand their content, and identify people spreading misinformation.”
In September, YouTube announced a series of AI resources to help video content creators globally, followed in November by guidelines requiring users to flag fake or altered content.
The company said creators must disclose when they have produced “altered or artificial” content using AI, via an on-screen notification. Failure to disclose may result in content being flagged or removed, and habitual offenders may face account suspension.
YouTube has also set AI rules specifically designed to protect music artists and the integrity of their work, Mr. Amin said.
“It's worth noting that YouTube uses a combination of algorithmic and human judgment to determine whether content fits the [misinformation] spectrum,” Dev Nag, chief executive of San Francisco-based AI firm QueryPal, told The National.
“But YouTube is much more hands-off when it comes to bias, which involves more subjective framing … It will also require a hybrid approach of machines [to detect harmful content] and humans [to validate AI's findings].”
Updated: June 23, 2024, 4:00 am