In the rush to implement AI, many companies are leaving ethics behind.

AI can be pretty scary if it's not regulated.
South_Agency/Getty Images

  • Companies are increasingly integrating generative AI technology to increase productivity.
  • But experts worry that efforts to tackle AI threats are lagging behind.
  • A senior BCG partner said responsible AI efforts are moving “nowhere near as fast as they should be.”

Companies have been racing to deploy generative AI technology in their operations since the launch of ChatGPT in 2022.

Executives say they are excited about how AI increases productivity, analyzes data, and reduces busy work.

According to Microsoft and LinkedIn's 2024 Work Trend Index report, which surveyed 31,000 full-time workers between February and March, four out of five business leaders believe their company needs to embrace the technology to stay competitive.

But adopting AI in the workplace also carries risks, including reputational, financial, and legal damage. The challenge of dealing with these risks is that they are often ill-defined, and many companies are still figuring out how to identify and measure them.

A responsible AI program should include strategies for governance, data privacy, ethics, and trust and safety, but experts who study risk say such programs have not kept pace with AI innovation.

Tad Roselund, a managing director and senior partner at Boston Consulting Group, told Business Insider that efforts to use AI responsibly in the workplace are moving "nowhere near as fast as they should be." According to BCG, these programs often require substantial investment and at least two years to implement.

That is a significant commitment of money and time, and company leaders appear more focused on allocating resources toward rapidly developing AI that boosts productivity.

“Establishing good risk management capabilities requires significant resources and expertise, which not all companies can afford or have available to them today,” researcher and policy analyst Nanjira Sam told MIT Sloan Management Review. “Demand for AI governance and risk experts is outpacing supply,” the researcher added.

Investors need to play a bigger role in funding the tools and resources for these programs, according to Navrina Singh, founder of Credo AI, a governance platform that helps companies comply with AI regulations. Funding for generative AI startups reached $25.2 billion in 2023, according to a report from Stanford's Institute for Human-Centered Artificial Intelligence, but it is not clear how much of that went to companies focused on responsible AI.

“The venture capital environment also reflects a disproportionate focus on AI innovation over AI governance,” Singh told Business Insider via email. “To adopt AI responsibly at scale and speed, equal emphasis must be placed on ethical frameworks, infrastructure, and tooling to ensure sustainable and responsible AI integration across sectors.”

Legislative efforts are underway to fill the gap. In March, the European Union approved its Artificial Intelligence Act, which sorts AI applications into three risk categories and bans those with unacceptable risks. Meanwhile, the Biden administration signed a sweeping executive order in October demanding more transparency from big tech companies developing artificial intelligence models.

But given the pace of AI innovation, government regulation alone may not be enough to ensure that companies protect themselves.

“We risk a huge accountability deficit that can halt AI initiatives before they even reach production, or worse, lead to failures that result in unintended societal risks, reputational damage, and regulatory complications once they are in production,” Singh said.

