AI accidents are on the rise – and now they're being tracked like software bugs

The interview Fake photos of Donald Trump posing with Black voters, middle school students creating lewd deepfakes of their female classmates, and Google's Gemini chatbot failing to generate accurate images of White people.

These are some of the latest disasters listed in the AI Incident Database – a website that keeps track of the different ways the technology goes wrong.

Initially launched as a project under the auspices of the Partnership on AI, a group that tries to ensure AI benefits society, the AI Incident Database is now a non-profit organization funded by Underwriters Laboratories – the largest and oldest (est. 1894) independent testing laboratory in the United States, which tests everything from furniture to computer mice. Its website has so far catalogued over 600 unique automation- and AI-related incidents.

"There's a huge information asymmetry between the developers of AI systems and public consumers – and that's not fair," argued Patrick Hall, an assistant professor at the George Washington University School of Business who currently serves on the AI Incident Database's board of directors. He told The Register: "We need more transparency, and we feel it's our job just to share that information."

The AI Incident Database is modeled on the CVE Program set up by the nonprofit MITRE and the National Highway Traffic Safety Administration's website, which catalog publicly disclosed cybersecurity vulnerabilities and vehicle crashes, respectively. "Any time there's a plane crash, a train crash, or a big cybersecurity incident, it's been common practice for decades to record what happened so we can try to understand what went wrong and then not repeat it," Hall explained.

The website is currently maintained by around ten people, plus a handful of volunteers and contractors who review and post AI-related incidents online. Heather Frase, a senior fellow at Georgetown's Center for Security and Emerging Technology focused on AI assessment and a director of the AI Incident Database, claimed the website is unique in that it focuses on real-world impacts from the risks and harms of AI – not just vulnerabilities and bugs in software.

The organization currently collects incidents from media coverage and reviews issues reported on Twitter. The AI Incident Database listed 250 unique incidents before the release of ChatGPT in November 2022, and now lists over 600.

Monitoring problems with AI over time reveals interesting trends and allows people to understand the technology's real, current harms.

George Washington University's Hall revealed that roughly half of the reports in the database are related to generative AI. Some of them are "funny, stupid things" like defective products sold on Amazon titled "I can't fulfill this request" – a clear sign that the seller used a large language model to write the description – or other examples of AI-generated spam. But some are "really kind of depressing and serious" – like a Cruise robotaxi that ran over and dragged a woman under its wheels in an accident in San Francisco.

"AI is mostly a wild west right now, and the attitude is to move fast and break things," he lamented. It's unclear how the technology is shaping society, and the team hopes the AI Incident Database can provide insights into the ways it's being misused and highlight unintended consequences – in the hope of better informing developers and policymakers so they can improve their models or regulate the most pressing risks.

"There's a lot of hype around it. People are talking about existential risk. I believe AI could pose really serious risks to human civilization, but it's clear to me that some of these more real-world risks – like lots of injuries from self-driving cars, or, you know, perpetuating bias through algorithms used in consumer finance or employment – that's what we're seeing," he said.

"I know we're missing a lot, right? Not everything is being reported or captured by the media. A lot of times people may not even realize that the harm they're experiencing is coming from an AI," Frase observed. "I expect physical harm to go way up. Right now we're seeing [mostly] psychological harms and other intangible harms from large language models – but once we have generative robotics, I think physical harm will go way up."

Frase is most concerned about the ways AI could erode human rights and civil liberties. She believes that collecting AI incidents will show whether policies have made the technology safer over time.

“You have to measure things to fix things,” Hall added.

The organization is always looking for volunteers and is currently focused on capturing more incidents and raising awareness. Frase stressed that the group's members are not AI luddites: "We're probably coming off as quite anti-AI, but we're not. We actually want to use it. We just want the good stuff."

Hall agreed. "In order to advance the technology, somebody has to do the work to make it safer," he said. ®
