Meta’s supervisory board investigates apparent AI-generated images posted on Instagram and Facebook.


Image credit: Bryce Durbin/TechCrunch

The Oversight Board, Meta’s semi-independent policy council, is turning its attention to how the company’s social platforms are handling explicit, AI-generated images. On Tuesday, it announced investigations into two separate cases — one on Instagram in India and one on Facebook in the US — in which Meta’s systems failed to detect and respond to explicit AI-generated images of public figures.

In both cases, the platforms have since taken down the media. The board is not naming the individuals targeted by the AI images “to avoid gender-based harassment,” according to an email Meta sent to TechCrunch.

The board takes up cases concerning Meta’s moderation decisions. Users must first appeal a moderation action to Meta before approaching the Oversight Board. The board plans to publish its full findings in the future.


Describing the first case, the board said a user reported an AI-generated nude of a public figure in India on Instagram as pornography. The image was posted by an account that exclusively shares AI-generated images of Indian women, and the majority of users who react to these images are based in India.

Meta failed to take down the image after the first report, and the report ticket was closed automatically after 48 hours when the company did not review it further. When the original complainant appealed the decision, the report was again closed automatically without any oversight from Meta. In other words, after two reports, the explicit AI-generated image remained on Instagram.

The user eventually appealed to the board. Only at that point did the company act, removing the objectionable content for violating its community standards on bullying and harassment.

The second case involves Facebook, where a user posted an explicit, AI-generated image resembling a US public figure in a group focused on AI creations. In this instance, the social network took the image down, as it had previously been posted by another user and Meta had added it to a Media Matching Service bank under the category “derogatory sexualized photoshop or drawings.”

When TechCrunch asked why the board chose a case in which the company successfully took down an explicit AI-generated image, the board said it selects cases “that are emblematic of broader issues across Meta’s platforms.” It added that these cases help it assess the global effectiveness of Meta’s policies and processes on various topics.

“We know that Meta is quicker and more effective at moderating content in some markets and languages than others. By taking one case from the US and one from India, we want to see whether Meta is protecting all women globally in a fair way,” Oversight Board co-chair Helle Thorning-Schmidt said in a statement.

“The board believes it is important to know whether Meta’s policies and enforcement practices are effective in addressing this issue.”

Deepfake porn and the problem of online gender-based violence

In recent years some — but not all — generative AI tools have expanded to allow users to create porn. As TechCrunch previously reported, groups like Unstable Diffusion are trying to monetize AI porn with murky ethical lines and biases in the data.

Deepfakes have also become a concern in regions such as India. Last year, a BBC report noted that the number of deepfake videos of Indian actresses has surged in recent times. Data suggests that women are most often the subjects of deepfake videos.

Earlier this year, Deputy IT Minister Rajeev Chandrasekhar expressed dissatisfaction with tech companies’ approach to countering deepfakes.

“If a platform thinks that it can get away without taking down deepfake videos, or merely maintain a casual approach to it, we have the power to protect our citizens by blocking such platforms,” Chandrasekhar said at a press conference at the time.

Although India has weighed bringing specific deepfake-related rules into law, nothing has been enacted so far.

While the country has legal provisions for reporting online gender-based violence, experts note that the process can be tedious, and victims often have little support. In a study published last year, the Indian advocacy group IT for Change noted that courts in India need more robust processes to address online gender-based violence and should not trivialize these cases.

Aparajita Bharti, co-founder of The Quantum Hub, an India-based public policy consulting firm, said there should be limits on AI models to stop them from creating explicitly harmful content.

“The main risk of generative AI is that the volume of such content will increase because it is easy to generate such content, and with a high degree of sophistication. Therefore, we need to first prevent the creation of such content by training AI models to limit output where the intention to harm someone is already clear. We should also introduce default labeling for easy detection,” Bharti told TechCrunch in an email.

There are currently only a few laws globally that address the production and distribution of pornographic content created with AI tools. A handful of US states have laws against deepfakes. The UK introduced a law this week to criminalize the creation of sexually explicit AI-generated imagery.

Meta’s response and next steps

In response to the Oversight Board’s concerns, Meta said it removed both pieces of content. However, the social media company did not address the fact that it failed to remove the content on Instagram after users’ initial reports, or say how long the content remained on the platform.

Meta said it uses a combination of artificial intelligence and human review to detect sexually suggestive content. The social media giant added that it doesn’t recommend this kind of content in places like Instagram Explore or Reels recommendations.

The Oversight Board has invited public comments, with a deadline of April 30, on the harms caused by deepfake porn, contextual information about the proliferation of such content in regions like the US and India, and the possible pitfalls of Meta’s approach to detecting explicit AI-generated imagery.

The board will review the cases and the public comments, and post its decision on its site in a few weeks.

These cases show that major platforms are still grappling with older moderation processes, while AI-powered tools have enabled users to create and distribute different kinds of content quickly and easily. Companies like Meta are experimenting with tools that use AI to generate content, and with some that attempt to detect such imagery. In April, the company announced it would apply “Made with AI” badges to deepfakes when it could detect the content using “industry standard AI image indicators” or user disclosures.

However, bad actors are constantly finding ways to evade these detection systems and post problematic content on social platforms.

