The intelligence community gets a chief AI officer.


The top US intelligence agency has tapped a research official to lead the intelligence community’s work on AI.

John Beieler, who serves as Director of National Intelligence Avril Haines’ top science and technology adviser, has been named chief artificial intelligence officer in the Office of the Director of National Intelligence. Beieler confirmed his additional role today during a speech at an event hosted by the Intelligence and National Security Alliance in Arlington, Va.

Beieler now leads a council of chief AI officers from the 18 elements of the intelligence community, including the CIA, the National Security Agency and the Defense Intelligence Agency. He said the council, which reports directly to Haines, has been meeting every two weeks for the past two months.

“What we’re focusing on as a group is AI governance,” Beieler said.

He said the group is writing the first IC-wide directive on AI. It will describe what intelligence agencies need to do to deploy AI and machine learning.

“Things like documentation, standards, [application programming interfaces], what kind of data documentation is needed, how it all fits together, responsible adoption, ongoing monitoring,” Beieler said, explaining what the directive will cover. “Where responsibility lies with an individual developer, where responsibility lies with stewardship, management and leadership. We’re really focused on that responsible, ethical adoption.”

He added that the directive would also spell out civil liberties and privacy protections that need to be included in algorithms developed by the intelligence community.

The new AI Council is also leading the update of ODNI’s AI strategy.

“We want to make sure we have a strong vision, which we think is important for AI in the IC, to drive the conversation around resources,” Beieler said.

Lawmakers in Congress have also urged the intelligence community to prioritize AI adoption with safeguards.

The FY 2024 National Defense Authorization Act directs the DNI to establish new policies “for the acquisition, adoption, development, use, coordination, and maintenance of artificial intelligence capabilities,” including minimum guidelines for the performance of AI models used by intelligence agencies.

Beieler has a background in data science and machine learning. Before joining ODNI in 2019, he led research programs on human language technology, machine learning and vulnerabilities in AI at the Intelligence Advanced Research Projects Agency.

At ODNI, he has also helped lead the intelligence community’s Augmenting Intelligence Using Machines, or “AIM,” strategy. With many intelligence agencies dealing with a deluge of data, the strategy seeks to accelerate the adoption of AI and automation across the IC.

Although intelligence agencies have been using artificial intelligence and forms of machine learning for decades, the emergence of widely available large language models like ChatGPT has added both new concerns and renewed urgency to the AI race.

“A lot of that focus is on making sure that the people who are going to be using these tools understand them,” Beieler said.

ODNI has already funded various training and upskilling programs across intelligence agencies. Beieler also acknowledged the challenges of generative AI and other large language models, such as deceptive errors, copyright issues and privacy concerns.

“Educating a broad base of analysts, collectors and the IC workforce to these things, so that they understand some of these failure modes, but doing so in a way that they can quickly implement the technology,” Beieler said. “That’s the hard part of upskilling across the workforce.”


