The UK expands the AI Safety Institute to San Francisco, the home of OpenAI.


The US iteration of the AI Safety Institute aims to recruit a team of technical staff led by a research director. In London, the institute currently has a team of 30 people. It is headed by Ian Hogarth, a prominent British tech entrepreneur who founded the concert discovery site Songkick.

In a statement, UK technology minister Michelle Donelan said the US rollout of the AI Safety Institute “represents British leadership in AI in action.”

“This is an important moment in the UK's ability to study both the risks and the potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to benefit from our expertise as we continue to lead the world in AI safety.”

The expansion “will allow the UK to tap into the wealth of tech talent available in the Bay Area, engage with the world's largest AI labs in both London and San Francisco, and strengthen relationships with the United States to advance AI safety for the public interest,” the government said.

San Francisco is home to OpenAI, the Microsoft-backed company behind the viral AI chatbot ChatGPT.

The AI Safety Institute was founded in November 2023 during the AI Safety Summit, a global event held at Bletchley Park in England, home of World War II codebreakers, that sought to promote cross-border collaboration on AI safety.

The US expansion of the AI Safety Institute comes on the heels of the AI Seoul Summit in South Korea, which was first proposed at last year's UK summit at Bletchley Park. The Seoul summit takes place on Tuesday and Wednesday.

Since the AI Safety Institute was established in November, it has made progress in evaluating frontier AI models from some of the industry's leading players, the government said.

It said on Monday that several AI models completed cybersecurity challenges but struggled with more advanced ones, while several models demonstrated PhD-level knowledge of chemistry and biology.

Meanwhile, all of the models tested by the institute remained highly vulnerable to “jailbreaking,” in which users trick them into producing outputs they aren't permitted to give under their content guidelines, while some produced harmful outputs even without deliberate attempts to circumvent their safeguards.

According to the government, the tested models were also unable to complete more complex, time-consuming tasks without humans present to supervise them.

The government did not name the AI models that were tested. It previously got OpenAI, DeepMind, and Anthropic to agree to open up their coveted AI models to the government to inform research on the risks associated with their systems.

The development comes as the UK has faced criticism for not introducing formal regulations for AI, while other jurisdictions, such as the European Union, are pushing ahead with AI-tailored laws.

The EU's landmark AI Act, the first major piece of legislation of its kind for AI, is expected to become a blueprint for global AI regulations once it is adopted and enforced by all EU member states.

