OpenAI partners with Los Alamos Lab to save us from AI.


OpenAI is partnering with Los Alamos National Laboratory to study how artificial intelligence can be used to combat biological threats that could be created by non-experts using AI tools, according to announcements from both organizations on Wednesday. The Los Alamos lab, which was founded in New Mexico during World War II to develop the atomic bomb, called the effort a “first-of-its-kind” study of AI biosecurity and the ways AI can be used in a lab setting.

The difference between the two statements released Wednesday by OpenAI and the Los Alamos lab is quite striking. OpenAI's statement tries to paint the partnership as simply a study of how AI “can be safely used by scientists in laboratory settings to advance bioscientific research.” And yet the Los Alamos lab puts a lot of emphasis on the fact that previous research “found that ChatGPT-4 provided a slight uplift in providing information that could lead to the creation of biological threats.”

Much of the public debate about the dangers posed by AI has focused on the creation of a self-aware entity that could potentially develop a mind of its own and somehow harm humanity. Some fear that achieving AGI—artificial general intelligence, where AI can perform advanced reasoning and logic rather than acting as a fancy automated word generator—could lead to a Skynet-style situation. And while many AI boosters like Elon Musk and OpenAI CEO Sam Altman have leaned into this characterization, it seems the more immediate threat to address is making sure people don't use tools like ChatGPT to create bioweapons.

“AI-driven biohazards may pose a significant threat, but existing work has not assessed how multimodal, frontier models could lower the barrier to entry for non-experts to create a biological threat,” the Los Alamos lab said in a statement published on its website.

The different positioning of the two organizations' messages likely boils down to the fact that OpenAI may be reluctant to acknowledge the national security implications of highlighting that its products could be used by terrorists. To put a finer point on it, the Los Alamos statement uses the word “threat” or “threats” five times, while the OpenAI statement uses it only once.

“The potential benefit of increasing AI capabilities is endless,” Erick LeBrun, a research scientist at Los Alamos, said in a statement Wednesday. “However, measuring and understanding any potential risks related to biohazards or the misuse of advanced AI remain largely unexplored. This work with OpenAI is an important step towards establishing a framework for evaluating current and future models, ensuring the responsible development and deployment of AI technologies.”

Los Alamos sent a statement to Gizmodo that was generally optimistic about the future of the technology despite the potential risks:

AI technology is exciting because it has become a powerful engine of discovery and progress in science and technology. While this will largely lead to positive benefits for society, it is conceivable that the same models in the hands of a bad actor could be used to synthesize information leading to the possibility of a “how-to guide” for biological threats. It is important to consider that AI itself is not the threat; rather, it is how AI can be misused that is the threat.

Previous evaluations have mostly focused on understanding whether such AI technologies can provide accurate “how-to guides.” However, even if a bad actor has access to an accurate guide for doing something nefarious, that doesn't mean they will be able to follow it. For example, you may know that you need to maintain sterility when culturing cells or use a mass spectrometer, but if you have no prior experience doing so, it can be very difficult to accomplish.

Zooming out, we're trying to understand more broadly where and how these AI technologies add value to a workflow. Access to information (for example, generating an accurate protocol) is one area where they can, but it's less clear how well these AI technologies can help you learn to carry out a protocol successfully in a lab (or other real-world activities, such as kicking a soccer ball or painting a picture). Our first pilot technology evaluation will look at how AI enables individuals to learn to perform protocols in the real world, which will let us see not only how this could help enable science, but also whether it would enable a bad actor to carry out a nefarious activity in the lab.

The Los Alamos Lab's efforts are being coordinated by the AI Risks Technical Assessment Group.

Correction: An earlier version of this post incorrectly attributed a statement from Los Alamos to OpenAI. Gizmodo regrets the error.

