Meta turns to AI to protect minors from ‘sextortion’ on Instagram

Paris: Meta said Thursday it is developing new tools to protect teen users on its Instagram platform from “sextortion” scams, after US politicians accused the platform of harming young people’s mental health.

Criminal gangs run sextortion scams by luring people into sending intimate photos of themselves and then threatening to release the images publicly unless they are paid.

Meta said it is testing an AI-powered “nudity protection” tool that will detect and blur images containing nudity sent to minors through the app’s messaging system.

“This way, the recipient is not exposed to unwanted intimate content and has the choice to view the image or not,” Capucine Tuffier, in charge of child protection at Meta France, told AFP.

The US company said it would also offer advice and safety tips to anyone who sends or receives such messages.

According to US authorities, nearly 3,000 young people in the United States fell victim to sextortion scams in 2022.

In addition, more than 40 US states filed lawsuits against Meta in October, accusing the company of “profiteering from children’s pain.”

The legal filing alleges that Meta exploited young users by creating a business model designed to keep them on the platform for as long as possible, despite the harm to their health.

– ‘On-Device Machine Learning’ –

Meta announced in January that it would take steps to protect under-18s, including tightening content restrictions and expanding parental control tools.

The firm said Thursday that the latest tools build on “our long-standing work to protect young people from unwanted or potentially harmful contact.”

“We are testing new features to help protect young people from sextortion and intimate image abuse, and to make it more difficult for potential scammers and criminals to find and interact with teens,” the company said.

It added that the “nudity protection” tool uses “on-device machine learning”, a type of artificial intelligence, to analyze images.
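The flow described here — classify an incoming image on the device, blur it if flagged, and leave the recipient the choice to view it — can be sketched roughly as follows. This is an illustrative sketch only, not Meta’s implementation: `looks_like_nudity` is a hypothetical stand-in for whatever on-device classifier is actually used, and Pillow’s Gaussian blur stands in for the real obscuring step.

```python
from PIL import Image, ImageFilter


def looks_like_nudity(image: Image.Image) -> bool:
    """Hypothetical placeholder for an on-device classifier.

    A real system would run a small ML model locally, so the
    image never needs to leave the device for analysis.
    """
    raise NotImplementedError


def prepare_incoming_image(image: Image.Image,
                           classifier=looks_like_nudity) -> Image.Image:
    """Return a blurred copy of flagged images; pass others through.

    The original image object is left untouched, so the recipient
    can still choose to reveal it.
    """
    if classifier(image):
        # A heavy Gaussian blur hides the content while keeping a
        # recognizable "tap to view" preview.
        return image.filter(ImageFilter.GaussianBlur(radius=30))
    return image
```

Keeping both the classifier and the blur on the device is what allows the company to claim it never sees the image unless the user reports it.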

The firm, which has also been accused of repeatedly violating the privacy of its users’ data, asserted that it would not have access to the images unless users reported them.

Meta said it will also use AI tools to identify accounts sending offending content and restrict their ability to interact with young users on the platform.

In 2021, whistleblower Frances Haugen, a former engineer at Facebook, publicly disclosed internal research by Meta — then known as Facebook — showing the company had long been aware of the dangers its platforms posed to young people’s mental health.
