Salesforce exec warns the next AI winter could be caused by consumer trust issues


Lightning-fast advances in AI have demanded guardrails and evolving philosophies about how to ethically incorporate the technology into the workplace. Paula Goldman, chief ethical and humane use officer at Salesforce, said AI must play the role of copilot alongside humans — not run on autopilot — speaking at Fortune's Brainstorm AI conference in London on Monday.

“We need next-level controls. We need people to be able to understand what’s going on in the AI system,” she told Fortune executive news editor Nick Lichtenberg. “And most importantly, we need to design AI products that take into account what AI is good and bad at — but also what people are good and bad at in their decision-making.”

One of Goldman’s biggest concerns amid growing consumer unease is AI’s ability to produce accurate content, including content free of racial or gender bias, as well as the misuse of user-generated content, such as deepfakes. She warns that unethical applications of AI could curtail funding and development of the technology.

“It’s possible that the next AI winter comes from trust issues or people’s adoption issues with AI,” Goldman said.

The future of AI productivity in the workplace will depend on training and people’s willingness to adopt new technologies, she said. To foster trust in AI products — especially among employees using the applications — Goldman recommends implementing “mindful friction,” essentially a series of checks and balances to ensure AI tools in the workplace do more good than harm.

What Salesforce has done to implement ‘mindful friction’

Salesforce has begun monitoring for potential biases in its use of AI. The software giant has developed a marketing segmentation product that generates suggested demographics for email campaigns. While an AI program produces a list of potential demographics for a campaign, it is a human’s job to select the appropriate ones so that relevant recipients are not excluded. Similarly, the company has added a warning pop-up to generative models on its Einstein platform when prompts include ZIP or postal codes, which are often correlated with specific races or socioeconomic statuses.

“Increasingly, we’re moving toward systems that can detect such anomalies and encourage humans to revisit them,” Goldman said.

In the past, biases and copyright infringements have shaken confidence in AI. A study by the MIT Media Lab found that AI software programmed to identify the race and gender of different people had an error rate of less than 1% when identifying light-skinned men, but an error rate of up to 35% when identifying dark-skinned women — including celebrities like Oprah Winfrey and Michelle Obama. Study author Joy Buolamwini warned that errors in the technology could compromise applications that use facial recognition for high-stakes tasks, such as drones or body cameras equipped with facial recognition software to carry out lethal attacks. Similarly, algorithmic biases in healthcare databases can lead AI software to recommend inappropriate treatment plans for certain patients, the Yale School of Medicine found.

Even in industries where lives aren’t on the line, AI applications have raised ethical concerns, including OpenAI reportedly scraping hours of user-generated YouTube content to train its models, potentially infringing content creators’ copyrights without consent. Between the spread of misinformation and the failure to complete basic tasks, AI has a long way to go before it fulfills its potential as a helpful tool for humans, Goldman said.

But designing better AI features and human-led failsafes to reinforce trust is what Goldman is most excited about for the future of the industry.

“How do you design products where you know when to trust them — and where you have to take a second look and apply human judgment?”
