When Science Fiction Becomes Science Fact: The AI Doubt

Leading AI scientists warned at a policy forum of significant risks associated with the rapid development of AI technologies. They recommend that large technology firms and public funders devote at least one-third of their budgets to risk assessment and mitigation. They also advocate for strict global standards to prevent misuse of AI and emphasize the importance of proactive governance to steer AI development towards beneficial outcomes and avoid potential disasters. Credit: SciTechDaily.com

AI experts recommend significant investment in risk reduction and strict global regulations to prevent misuse and keep AI development on a safe path.

Researchers have warned of extreme risks associated with rapidly developing artificial intelligence (AI) technologies, but there is no consensus on how to deal with these risks. In a policy forum, world-leading AI experts Yoshua Bengio and colleagues analyze the risks of advancing AI technologies.

These include social and economic impacts, malicious use, and the potential loss of human control over autonomous AI systems. They recommend proactive and adaptive governance measures to mitigate these risks.

The authors urge large technology companies and public funders to invest more, allocating at least one-third of their budgets to assessing and mitigating these risks. They call on international legal bodies and governments to implement standards that prevent the misuse of AI.

Call for responsible AI development

“To steer AI toward positive outcomes and away from disaster, we need to innovate. There is a responsible path — if we have the common sense to take it,” the authors write.

They highlight the race among technology companies around the world to develop general AI systems that can match or exceed human capabilities in many key domains. However, this rapid development also brings risks at the societal level: it could exacerbate social injustice, undermine social stability, and enable large-scale cybercrime, automated warfare, mass manipulation, and pervasive surveillance.

Among the prominent concerns is the possibility that humans lose control of autonomous AI systems, rendering human intervention ineffective.

Immediate priorities for AI research

AI experts say that humanity is not adequately prepared to deal with these potential AI threats. They note that, compared to efforts to develop AI capabilities, far fewer resources are devoted to ensuring the safe and ethical development and deployment of these technologies. To address this gap, the authors outline immediate priorities for AI research, development, and governance.

For more on this research, see AI scientists warn of risks beyond human control.

Citation: “Managing extreme AI risks amid rapid progress” by Yoshua Bengio, Geoffrey Hinton, Andrew Yao, Dawn Song, Pieter Abbeel, Trevor Darrell, Yuval Noah Harari, Ya-Qin Zhang, Lan Xue, Shai Shalev-Shwartz, Gillian Hadfield, Jeff Clune, Tegan Maharaj, Frank Hutter, Atılım Güneş Baydin, Sheila McIlraith, Qiqi Gao, Ashwin Acharya, David Krueger, Anca Dragan, Philip Torr, Stuart Russell, Daniel Kahneman, Jan Brauner and Sören Mindermann, 20 May 2024, Science.
DOI: 10.1126/science.adn0117
