Meta AI is obsessed with turbans when generating images of Indian men.


Image credit: Meta AI via TechCrunch

Bias in AI image generators is a well-studied and well-reported phenomenon, but consumer tools continue to display clear cultural biases. The latest offender in this area is Meta's AI chatbot, which, for some reason, really wants to add a turban to almost any image it generates of an Indian man.

The company rolled out Meta AI on WhatsApp, Instagram, Facebook, and Messenger in more than a dozen countries earlier this month. However, in India, one of the largest markets worldwide, the rollout has so far been limited to select users.

As part of our AI testing process, TechCrunch looks at a variety of culture-related queries, through which we discovered, for example, that Meta is blocking election-related queries in India because of the country's ongoing general elections. But Imagine, Meta AI's image generator, also showed a strange tendency to generate turban-wearing Indian men, among other biases.

We tested different prompts and generated more than 50 images covering various scenarios, all of which are included here minus a couple (like “a German driver”), to see how the system represents different cultures. There is no scientific method behind the generation, and we did not consider inaccuracies in object or scene representation beyond the cultural lens.

Plenty of men in India wear turbans, but the proportion is not nearly as high as Meta AI's tool suggests. In Delhi, India's capital city, you would see at most one in 15 men wearing a turban. Yet among the images generated by Meta's AI, roughly 3 to 4 out of every 5 images representing Indian men showed them wearing turbans.

We started with the prompt “an Indian walking down the street” and all the images were of men wearing turbans.

After that, we tried generating images with prompts like “an Indian man,” “an Indian man playing chess,” “an Indian man cooking,” and “an Indian man swimming.” Meta AI generated only one image of a man without a turban.

Even with non-gendered prompts, Meta AI did not show much diversity in terms of gender or cultural differences. We tried prompts with different professions and settings, including an architect, a politician, a badminton player, an archer, a writer, a painter, a doctor, a teacher, a balloon seller, and a sculptor.

As you can see, despite the diversity in settings and clothing, all the men were generated wearing turbans. Again, while turbans are common in any profession or region, it is strange for Meta AI to consider them so ubiquitous.

We generated images of an Indian photographer, and most of them used an outdated camera, except in one image where a monkey somehow also had a DSLR.

We also generated images of an Indian driver. And until we added the word “dapper,” the image generation algorithm showed hints of class bias.

We also tried generating two images with similar prompts. Here are some examples: An Indian coder in an office.

An Indian man driving a tractor in a field.

Two Indian men sitting next to each other:

Additionally, we tried generating a collage of images with prompts such as an Indian man with different hairstyles. This seemed to produce the diversity we expected.

Meta AI's Imagine also has a frustrating habit of generating one kind of image for similar prompts. For example, it consistently generated an old-school Indian house with vibrant colors, wooden columns, and stylized roofs. A quick Google image search will tell you that this is not the case for the majority of Indian homes.

Another prompt we tried was “Indian content creator,” and it repeatedly generated an image of a female creator. In the gallery below, we've included images with a content creator on a beach, a hill, a mountain, a zoo, a restaurant, and a shoe store.

As with any image generator, the biases we see here are likely the result of inadequate training data and an inadequate testing process. While you can't test for every possible outcome, common stereotypes should be easy to spot. Meta AI apparently picks one kind of representation for a given prompt, which indicates a lack of diverse representation in the dataset, at least for India.

In response to questions TechCrunch sent Meta about training data bias, the company said it is working on improving its generative AI tech but did not provide much detail about the process.

“This is new technology and it may not always return the response we intend, which is the same for all generative AI systems. Since we launched, we've constantly released updates and improvements to our models, and we're continuing to work on making them better,” a spokesperson said in a statement.

Meta AI's biggest draw is that it is free and easily available across multiple surfaces, so millions of people from different cultures will be using it in different ways. While companies like Meta are always working on improving how accurately image generation models render objects and humans, it is also important that they work on these tools to stop them from playing into stereotypes.

Meta will likely want creators and users to use this tool to post content on its platforms. However, if generative biases persist, they also play a part in confirming or amplifying biases in users and viewers. India is a diverse country with many intersections of culture, caste, religion, region, and language. Companies working on AI tools will need to get better at representing different people.

If you find AI models generating unusual or biased output, you can reach me by email at im@ivanmehta.com and through this link on Signal.



