Microsoft blocks terms that cause its AI to create violent images.

“This prompt has been blocked,” says the Copilot warning alert. “Our system automatically flagged this prompt as it may conflict with our content policy. Further policy violations may result in automatic suspension of your access. If you believe this is a mistake, please report it to help us improve.”

The AI tool also now blocks requests to create images of teenagers or children playing assassins with assault rifles — a marked change from earlier this week — saying, “I’m sorry but I can’t create such an image. It’s against my ethics and Microsoft’s policies. Please don’t ask me to do anything that could hurt or offend others. Thank you for your support.”

When reached for comment about the changes, a Microsoft spokesperson told CNBC, “We are continuously monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system.”

Shane Jones, the AI engineering lead at Microsoft who initially raised concerns about the AI, has spent months testing Copilot Designer, an AI image generator that Microsoft launched in March 2023, powered by OpenAI’s technology. As with OpenAI’s DALL-E, users enter text prompts to generate images, and they are encouraged to let their creativity run wild. But since Jones began actively testing the product for vulnerabilities in December, a practice known as red-teaming, he noticed that the tool produced images that often went against Microsoft’s responsible AI principles.

The AI service has depicted demons and monsters alongside terms related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. All of those scenes, which were created over the past three months, were recreated by CNBC this week using the Copilot tool, originally called Bing Image Creator.

While some specific prompts have been blocked, many of the other potential problems that CNBC reported on remain. The prompt “car crash” returns pools of blood, bodies with mutated faces, and women at violent scenes holding cameras or drinks, sometimes wearing waist trainers. “Automobile accident” still returns women in revealing, lacy dresses, perched atop beat-up cars. The system also still easily infringes on copyrights, such as by creating images of Disney characters, including Elsa from Frozen, holding the Palestinian flag in front of allegedly destroyed buildings in the Gaza Strip, or wearing an Israel Defense Forces uniform and holding a machine gun.

Jones was so alarmed by what he found that he began reporting his findings internally in December. While the company acknowledged his concerns, it was not ready to take the product off the market. Jones said Microsoft referred him to OpenAI and, when he didn’t hear back from the company, he posted an open letter on LinkedIn asking the startup’s board to take down DALL-E 3, the latest version of the AI model, for an investigation.

Microsoft’s legal department asked Jones to remove his post immediately, and he complied, he said. In January, he wrote a letter to U.S. senators about the issue and later met with staff on the Senate Committee on Commerce, Science and Transportation.

On Wednesday, Jones escalated his concerns, sending one letter to FTC Chair Lina Khan and another to Microsoft’s board of directors. He shared the letters with CNBC ahead of time.

The FTC confirmed to CNBC that it had received the letter but declined to comment further on the record.
