All US federal agencies will now need a senior leader overseeing all AI systems they use, as the government seeks to ensure that the use of AI in public service remains secure.
Vice President Kamala Harris announced the Office of Management and Budget's (OMB) new guidance in a briefing with reporters, saying agencies should also establish AI governance boards to oversee how AI is used within each agency. Agencies must also submit an annual report to OMB listing all the AI systems they use, any risks associated with them, and how they plan to mitigate those risks.
“We have directed all federal agencies to designate a Chief AI Officer with the experience, expertise and authority to oversee all AI technologies used by that agency, and to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris told reporters.
The Chief AI Officer does not necessarily have to be a political appointee; that depends on the structure of the federal agency. Governance boards must be established by the summer.
The guidance expands on previously announced policies outlined in the Biden administration’s AI executive order, which required federal offices to develop safety standards and expand the AI talent working in government.
Some agencies have already begun hiring chief AI officers ahead of today’s announcement. The Department of Justice announced Jonathan Mayer as its first CAIO in February. He will lead a team of cybersecurity experts in exploring how to use AI in law enforcement.
According to OMB Director Shalanda Young, the US government plans to hire 100 AI professionals by the summer.
Part of the responsibility of agencies’ AI officers and governance committees is to frequently monitor the AI systems they use. Agencies must submit an inventory of the AI products the agency uses, Young said. If any AI system is deemed “sensitive” enough to be excluded from that list, the agency must publicly provide a reason for its exclusion. Agencies must also independently assess the safety risks of each AI platform they use.
Federal agencies also have to certify that any AI they deploy meets safeguards that “reduce the risks of algorithmic discrimination and provide transparency to the public about how the government uses AI.” OMB’s fact sheet provides several examples, including:
When at the airport, travelers will be able to opt out of TSA facial recognition without any delay or losing their place in line.
When AI is used to support critical diagnostic decisions in the federal health care system, a human oversees the process to verify the tools’ results and to avoid disparities in access to health care.
When AI is used to detect fraud in government services, there is human oversight of impactful decisions, and affected individuals have an opportunity to seek redress for harms caused by AI.
“If an agency cannot implement these safeguards, the agency must stop using the AI system, unless agency leadership can justify why doing so would increase risks to safety or rights overall or would create an unacceptable disruption to critical agency operations,” the fact sheet reads.
Under the new guidance, any AI models, code and data owned by the government must be released to the public as long as they do not pose a threat to government operations.
The United States still does not have laws regulating AI. The AI executive order provides guidelines for government agencies under the executive branch on how to approach the technology. Although several bills have been introduced to regulate some aspects of AI, there has not been much movement on legislation governing AI technologies.