Hands-off AI? We are not quite ready yet.

Reliance on AI output, in both its operational and generative forms, is gradually leading to widespread lights-out, hands-off processes. But how much authority will humans have, and how much should they have, to step in and override AI's decisions?

For example, a potentially valuable bank customer could be denied a loan by an AI system, or an AI-based recruiting system could produce biased or sexist results.

However, waiting to worry about human intervention until an AI-driven process needs to be stopped may mean it's already too late. James Hendler, a professor at Rensselaer Polytechnic Institute and director of RPI's Future of Computing Institute, said things should never get to that stage.

“If the systems that use AI are properly designed to be used interactively, that becomes the wrong way to look at things,” Hendler said. “Systems must solve problems together, with human and technical interactions, especially in cases where human expertise is required.”

This is where thoughtful “responsible-by-design” principles can help ensure balanced human interaction with AI systems, said Sunil Senan, global head of data, analytics and AI at Infosys. “The ease of overriding AI should be a carefully considered design decision based on the specific application and its level of risk, transparency, user expertise, and the changing landscape of AI development.”
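
One way to make “ease of overriding” an explicit design decision, rather than an afterthought, is to encode it per risk tier. The sketch below is a minimal illustration in Python; the tier names, policy fields, and their assignments are assumptions made for the example, not anything Infosys has published.

```python
# A minimal sketch of a "responsible-by-design" override policy.
# Tier names and policy assignments are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"        # e.g., content recommendations
    MEDIUM = "medium"  # e.g., marketing offers
    HIGH = "high"      # e.g., loan decisions, fraud holds

@dataclass(frozen=True)
class OverridePolicy:
    human_review_required: bool  # must a person sign off before acting?
    one_click_override: bool     # can operators reverse the AI instantly?
    explanation_required: bool   # must the system surface its reasoning?

# Ease of override scales with the risk of the application.
POLICIES = {
    RiskTier.LOW:    OverridePolicy(False, False, False),
    RiskTier.MEDIUM: OverridePolicy(False, True,  True),
    RiskTier.HIGH:   OverridePolicy(True,  True,  True),
}

def policy_for(tier: RiskTier) -> OverridePolicy:
    """Look up the override affordances designed in for a given risk tier."""
    return POLICIES[tier]

print(policy_for(RiskTier.HIGH))
```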

Senan pointed to examples of situations where humans have had to intervene to prevent AI-driven decisions. “From filtering flagged transactions to identify false positives in fraud detection tools, to providing safety overrides in self-driving cars, and making critical decisions on sensitive social media content, human ethical considerations, critical thinking, and training to deal with such situations are beyond the realm of AI.”
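
The fraud-detection example maps naturally to a confidence-banded routing rule: automate the clear-cut cases in either direction, and send the uncertain middle band to a human analyst. A minimal sketch, assuming a hypothetical `fraud_score` produced upstream by a model, with purely illustrative thresholds:

```python
# A minimal sketch of human-in-the-loop fraud review.
# The score source and thresholds are illustrative assumptions.
def route_transaction(txn_id: str, fraud_score: float,
                      block_threshold: float = 0.95,
                      review_threshold: float = 0.60) -> str:
    """Return the action for a flagged transaction.

    Very confident predictions are automated in either direction;
    the uncertain middle band goes to a human analyst, who filters
    out false positives before any customer is affected.
    """
    if fraud_score >= block_threshold:
        return "block"          # automation: high confidence of fraud
    if fraud_score >= review_threshold:
        return "human_review"   # uncertain: a person decides
    return "approve"            # automation: likely legitimate

assert route_transaction("t1", 0.97) == "block"
assert route_transaction("t2", 0.72) == "human_review"
assert route_transaction("t3", 0.10) == "approve"
```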

This depends a lot, of course, on the criticality of the use case. Timothy Harfield, head of product marketing at ORO Labs, said the degree of human oversight depends on “the specific type of AI and our tolerance for risk in specific domains.” In marketing, for example, machine learning has proven very effective at choosing the right offer to make to a particular person, because it outperforms simple rule-based methods. If it fails, the associated business risk is low.

High-risk areas can include financial transactions, transportation, and even the creation of certain types of content, Harfield continued. “Here, it is important that humans provide oversight and have the ability to make practical decisions in unusual situations.”

Low-risk operational AI tasks could range “from personalized video recommendations and chat interactions to beauty apps,” where mistakes do little harm, Senan said. “Similarly, mobile brands leverage AI and machine learning to predict returns and improve warranty delivery. However, for high-risk tasks, or under emerging regulations, such as creative content creation with generative AI, human supervision is essential.”

In high-risk AI systems, “businesses need to insist on ease of review and override,” Senan continued. “In fact, in many AI systems the output is defined as a first draft, so in creating a financial report or an ad with text and images, expect only a basic version, which humans then complete.”
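
That “first draft” framing can be enforced in code by making human approval a required step before anything ships. A minimal sketch, where `generate_draft` and `human_finalize` are hypothetical stand-ins for a real generation and review pipeline:

```python
# A minimal sketch of the "AI output as first draft" pattern.
# The generator and reviewer interfaces are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Draft:
    body: str
    approved: bool = False
    revisions: list[str] = field(default_factory=list)

def generate_draft(prompt: str) -> Draft:
    # Stand-in for a generative model call (e.g., report or ad copy).
    return Draft(body=f"[AI draft for: {prompt}]")

def human_finalize(draft: Draft, edited_body: str) -> Draft:
    # Nothing ships until a person edits and explicitly approves.
    draft.revisions.append(draft.body)
    draft.body = edited_body
    draft.approved = True
    return draft

draft = generate_draft("Q3 financial summary")
final = human_finalize(draft, draft.body + " Reviewed by an analyst.")
assert final.approved
```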

In low-risk AI systems, by contrast, “such as chatbots or summarizing product reviews on websites, continuous human review is not only less necessary but impractical,” he added.

With humans and AI working together, “humans provide domain expertise and critical thinking, while AI handles data analysis and pattern recognition,” said Senan. “In all human-in-the-loop systems, humans often overrule AI decisions in the early stages. Over time, two things happen: the AI learns and improves, and humans become more familiar with the system. The AI's growing ability to explain itself also helps humans decide when its decisions should be overruled.”
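
One common way to realize this feedback loop is to log every human override as a labeled example and track how the override rate changes over time. A minimal sketch, with a hypothetical in-memory log standing in for real storage and retraining hooks:

```python
# A minimal sketch of learning from human overrides in a
# human-in-the-loop system; storage and retraining are out of scope.
from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    ai_label: str
    human_label: str  # what the reviewer decided instead

override_log: list[Decision] = []

def record_review(case_id: str, ai_label: str, human_label: str) -> None:
    """Capture every human correction as a labeled training example."""
    if human_label != ai_label:
        override_log.append(Decision(case_id, ai_label, human_label))

def override_rate(total_reviews: int) -> float:
    # A falling override rate over time is one signal that the AI is
    # improving and that more automation may be warranted.
    return len(override_log) / max(total_reviews, 1)

record_review("c1", ai_label="deny", human_label="approve")
record_review("c2", ai_label="approve", human_label="approve")
print(f"override rate: {override_rate(total_reviews=2):.0%}")  # 50%
```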

Ultimately, explainability is key, and “introducing AI into a process should take the form of recommendations and a level of transparency in how decisions are made,” Harfield said. “In this way, decision-making is accelerated, and trust is built in the system even before full automation, should that turn out to be the right path.”
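
In practice, “AI as recommendation” often means the system surfaces a suggested action, a confidence level, and the reasons behind it, while a person makes the final call. A minimal sketch with illustrative fields; none of this reflects a specific product's API:

```python
# A minimal sketch of AI-as-recommendation with transparency.
# The fields and reason strings are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    action: str               # what the AI suggests; never auto-executed
    confidence: float         # how sure the model is
    reasons: tuple[str, ...]  # human-readable factors behind the suggestion

def present(rec: Recommendation) -> str:
    """Render the suggestion for a human decision-maker to accept or reject."""
    why = "; ".join(rec.reasons)
    return f"Suggested: {rec.action} ({rec.confidence:.0%} confident) because {why}"

rec = Recommendation("approve loan", 0.87,
                     ("stable income", "low debt-to-income ratio"))
print(present(rec))
```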

The bottom line is that relying on hands-off AI for high-level decisions applies mainly to narrow scenarios right now, and will remain so for some time to come. “There may be some places where AI systems can handle tasks autonomously, but these are usually small niches, like flying a drone between two predetermined points, and not the tasks most people think of when they talk about advanced AI,” Rensselaer's Hendler said, adding, “I don't believe AI has enough capability, and therefore trust, for hands-off processes, and I don't see that changing anytime soon.”
