Can we make artificial intelligence bias-free?

Artificial intelligence built on mountains of potentially biased information has created a real threat of automated discrimination, but is there a way to re-educate the machines?

For some, the question is urgent. In the ChatGPT era, AI will make more and more decisions for healthcare providers, bank lenders or lawyers, using whatever it absorbed from the internet as source material.

AI's underlying intelligence, therefore, is only as good as the world it came from: as full of reason, wisdom and usefulness as it is of hate, prejudice and malice.

“It's dangerous because people are embracing AI software and adopting it and really relying on it,” said Joshua Weaver, director of the Texas Opportunity and Justice Incubator, a legal consultancy.

“We can get into this feedback loop where bias in our own selves and culture informs bias in the AI and becomes a kind of reinforcing loop,” he said.

Ensuring that technology more accurately reflects human diversity is not just a political choice.

Other uses of AI, such as facial recognition, have seen companies land in hot water with authorities for discrimination.

Such was the case with US pharmacy chain Rite Aid, whose in-store cameras falsely tagged customers, particularly women and people of color, as shoplifters, according to the Federal Trade Commission.

ChatGPT-style generative AI, which can produce the semblance of human-level reasoning in mere seconds, opens up new opportunities to get things wrong, experts worry.

AI giants are well aware of this problem, fearing that their models may misbehave, or overly reflect Western society when their user base is global.

“We have people asking questions from Indonesia or America,” said Google CEO Sundar Pichai, explaining why requests for images of doctors or lawyers will strive to reflect ethnic diversity.

But these considerations can reach ridiculous levels and lead to angry accusations of excessive political correctness.

That's what happened when Google's Gemini image generator produced an image of German soldiers from World War II that included a Black man and an Asian woman.

“Obviously, the mistake was that we overapplied … where it should never have been applied. It was a bug and we got it wrong,” Pichai said.

But Sasha Luccioni, a research scientist at Hugging Face, a leading platform for AI models, warned that “thinking that there is a technological solution to bias is already on the wrong track.”

Generative AI is essentially about whether the output “matches the user's expectation,” which is largely subjective, she said.

Jayden Ziegler, head of product at Alembic Technologies, cautioned that the large models on which ChatGPT is built “can't reason about what's biased or what's not, so they can't do anything about it.”

At least for now, it's up to humans to ensure the AI produces content that is appropriate or meets their expectations.

But given the frenzy surrounding AI, this is no easy task.

There are approximately 600,000 AI or machine learning models available on Hugging Face's platform.

“Every couple of weeks a new model comes out and we're kind of scrambling to try to assess and document biases or undesirable behaviors,” Luccioni said.
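
To give a flavor of what such an assessment involves, here is a minimal sketch of an occupation-pronoun probe. It assumes the Hugging Face transformers library and the small gpt2 model purely for illustration; real audits are far more systematic.

```python
# Minimal bias probe: compare gendered-pronoun rates in completions
# of occupation-templated prompts. Illustrative only -- real audits
# cover many more prompts, models and demographic dimensions.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # small demo model

PROMPTS = ["The doctor said that", "The nurse said that",
           "The lawyer said that", "The engineer said that"]
PRONOUNS = {"he": "male", "she": "female"}

for prompt in PROMPTS:
    counts = Counter()
    outputs = generator(prompt, max_new_tokens=15,
                        num_return_sequences=20, do_sample=True)
    for out in outputs:
        continuation = out["generated_text"][len(prompt):].lower().split()
        for token, gender in PRONOUNS.items():
            if token in continuation:
                counts[gender] += 1
    print(f"{prompt!r}: {dict(counts)}")
```

Skewed counts, say, far more “he” after “The doctor” than after “The nurse”, would be one crude signal of the associations such documentation efforts try to surface.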

One method under development, called algorithmic disgorgement, would allow engineers to excise offending material without destroying the whole model.

But there are serious doubts that it can actually work.

Another method would “encourage” the model to go in the right direction by “fine-tuning” it, “rewarding right and wrong,” said Ram Sriharsha, chief technology officer at Pinecone.
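
As a toy illustration of that reward-driven steering, not Pinecone's actual method, the sketch below applies a REINFORCE-style update to a hypothetical three-word next-token distribution, with a hand-written scorer standing in for the learned reward models used in real preference fine-tuning.

```python
# Toy "rewarding right and wrong": nudge a tiny next-token
# distribution toward outputs a scorer prefers. Real fine-tuning
# operates on full LLMs; every name here is a stand-in.
import torch

vocab = ["he", "she", "they"]
logits = torch.zeros(len(vocab), requires_grad=True)  # stand-in "model"
opt = torch.optim.SGD([logits], lr=0.5)

def reward(token: str) -> float:
    # Hypothetical scorer: prefer the neutral pronoun.
    return 1.0 if token == "they" else -0.2

for _ in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    idx = dist.sample()
    # REINFORCE: scale the log-probability by the reward received.
    loss = -reward(vocab[int(idx)]) * dist.log_prob(idx)
    opt.zero_grad()
    loss.backward()
    opt.step()

probs = torch.softmax(logits, dim=0).tolist()
print({v: round(p, 3) for v, p in zip(vocab, probs)})  # "they" dominates
```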

Pinecone is a specialist in retrieval-augmented generation (RAG), a technique in which the model fetches information from a fixed, trusted source to ground its answers.
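
In outline, the RAG pattern looks like the sketch below: pull the most relevant passages from a fixed, trusted corpus, then build the prompt around them. The word-overlap retriever and two-document corpus are toy stand-ins; production systems such as Pinecone's use dense vector embeddings and approximate nearest-neighbor search.

```python
# Minimal sketch of the RAG pattern: ground the prompt in a fixed,
# trusted corpus. The scoring below is a toy stand-in for real
# embedding-based retrieval.
TRUSTED_DOCS = [
    "Rite Aid settled FTC charges over biased facial recognition.",
    "Model cards document known biases and intended uses of a model.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank documents by naive word overlap with the query.
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, TRUSTED_DOCS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What did the FTC say about facial recognition?"))
```

Because the model is instructed to answer only from the retrieved text, the reliability of the output shifts from whatever the model absorbed in training to the curated corpus itself.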

For Weaver of the Texas Opportunity & Justice Incubator, these efforts to fix bias are “projections of our hopes and dreams for what a better version of the future might look like.”

But bias “is embedded in what it means to be human and because of that, it's also baked into AI,” he said.
