On Tuesday, the Department of Homeland Security (DHS) announced that it is recruiting 10 artificial intelligence experts as the first members of its new "AI Corps," which will eventually grow into a 50-member advisory group. A press release explained that the new hires will "help DHS responsibly take advantage of new technology and reduce risks in the homeland security enterprise."
DHS says AI shows promise for "strategic mission areas," such as "combating the trafficking of fentanyl, combating online sexual exploitation and abuse of children, providing immigration services, strengthening critical infrastructure, and enhancing cybersecurity."
DHS Secretary Alejandro Mayorkas even said that the department "has no domain" that "can't use AI, if we actually learn to use it responsibly" and understand the implications for civil liberties. But Americans shouldn't trust the government to uphold that standard, or assume they'll be immune from AI-related harm just because DHS says it will use the technology in certain ways.
DHS "regularly develops unproven programs that rely on algorithms and put the rights of millions of Americans at risk," wrote Faiza Patel and Spencer Reynolds of the Brennan Center for Justice last month. For one, "screening, vetting, and watchlisting regimes that monitor potential terrorism appear to have never been tested." Patel and Reynolds argued that DHS "also operates a social media monitoring program that collects information about Americans' political views and activities" despite "no demonstrated security value."
Earlier this year, the Government Accountability Office (GAO) pointed out that DHS, although "required to maintain an inventory of AI use cases," did not publish an accurate one. "Although DHS has a process for reviewing use cases before they are included in the AI inventory, the agency acknowledges that it does not verify whether a use case is properly defined as AI," GAO found.
The AI Corps is one of several recent initiatives to incorporate AI into more DHS activities. Earlier this year, the department started a $5 million set of pilot programs that will "use AI models like ChatGPT to help investigate child abuse material, human and drug trafficking," use chatbots to train immigration officials, and help prepare disaster relief plans, The New York Times reported. In April, Reps. Lou Correa (D–Calif.) and Morgan Luttrell (R–Texas) introduced a bill requiring DHS to develop a plan to implement AI and other new technologies at the U.S.–Mexico border. And the sweeping AI executive order released by President Joe Biden in October repeatedly mentions DHS as a key player in researching and applying AI tools.
"DHS's doubling down on machine learning and automated decision-making is a troubling but expected turn considering the federal law enforcement agency's insistence on spending money on the latest shiny toy, regardless of whether its forced use cases prove ineffective or threaten civil liberties," argues Matthew Guariglia, senior policy analyst at the digital rights group Electronic Frontier Foundation.
DHS is already using AI tools in broad, public-facing ways, such as for flight check-in. Contactless check-in at Transportation Security Administration lines requires "just a photo" of a passenger, DHS says. This may sound innocent enough, but it amounts to mass data collection and mass surveillance of travelers across the country (more than usual, that is). And the chance of mission creep is always there, such as the collection of other biometric data.
DHS's increased AI adoption will have profound consequences for certain groups, including immigrants. In the immigration space, Guariglia warned, "more and more decisions, potentially critical or life-and-death decisions like who immigrates to the U.S. and who gets asylum, will be made by computers." (Also, border AI almost certainly won't be limited only to the border area, if other methods of border surveillance are any indication.)
"Automated decision-making also means collecting a lot of information and data about people, which can be collected in invasive ways or through unreliable sources. It also raises transparency concerns," Guariglia continued. If an immigrant is turned away at the border or singled out for questioning because an algorithm has identified them as a threat, how will the public know when officers are being directed by the machines, and on what data those decisions were based?
AI tools show great promise, but it's important to remember that they are a work in progress. Immigration processing, fentanyl interdiction, and disaster relief may all ultimately benefit. But is it wise to leave these high-stakes activities in the hands of a federal government using AI tools whose operation and moral weight it does not fully understand?