US intelligence agencies were using generative AI 3 years before ChatGPT was released.


Long before the generative AI boom, a Silicon Valley firm contracted to collect and analyze non-classified data on illicit Chinese fentanyl trafficking made a compelling case for the technology's adoption by US intelligence agencies.

The operation's results far exceeded what human analysis alone had achieved, identifying twice as many companies and 400% more people engaged in illegal or suspect trade in the deadly opioid.

Excited US intelligence officials publicly touted the findings — the AI made its connections based mostly on internet and dark web data — and shared them with officials in Beijing, urging a crackdown.

A key aspect of the 2019 operation, called Sable Spear, has not been reported before: the firm used generative AI to provide US agencies — three years before the release of OpenAI's groundbreaking ChatGPT product — with summaries of evidence for potential criminal cases, saving countless hours of work.

“You wouldn't be able to do that without artificial intelligence,” said Brian Drake, the Defense Intelligence Agency's then-director of AI and project coordinator.

The contractor, Rhombus Power, would later use generative AI to predict Russia's full-scale invasion of Ukraine with 80% certainty for a different US government client. Rhombus says it also warns government users, whom it declines to name, of impending North Korean missile launches and Chinese space operations.

U.S. intelligence agencies are scrambling to embrace the AI revolution, convinced they will otherwise be buried in data as sensor-based surveillance technology further blankets the planet.

But officials are acutely aware that the technology is young and volatile, and that generative AI — prediction models trained on vast datasets to generate on-demand text, images, video and human-like conversation — is prone to hallucination.

CIA Director William Burns recently wrote in Foreign Affairs that analysts need “advanced artificial intelligence models that can digest open-source and covertly obtained information.” But it won't be easy.

Nand Mulchandani, the CIA's inaugural chief technology officer, believes that because generative AI models "hallucinate," they are best treated as a "crazy, drunk friend." There are also security and privacy issues: adversaries could steal and poison the models, and they may contain sensitive personal data that officers aren't authorized to see.

That's not stopping the experiment, though, which is happening mostly in secret.

One exception: thousands of analysts across 18 US intelligence agencies now use a CIA-developed generative AI called Osiris. It runs on unclassified and publicly or commercially available data — what's known as open source. It writes annotated summaries, and its chatbot function lets analysts go deeper with questions.

Mulchandani said Osiris employs AI models from various commercial providers that he would not name. Nor would he say whether the CIA is using gen AI for any major task on classified networks.

“It's still early days, and our analysts need to know with absolute certainty where the information is coming from,” Mulchandani said. He said the CIA is testing all the major gen AI models — not committing to any one — in part because AIs keep leapfrogging each other in capability.

Mulchandani says gen AI is mostly good as a virtual assistant looking for the "needle in the haystack." What it will never do, officials insist, is replace human analysts.

Linda Weissgold, who retired last year as the CIA's deputy director of analysis, thinks wargaming will be a "killer app."

During her tenure, the agency was already using traditional AI — algorithms and natural language processing — for translation and for tasks that included alerting analysts after hours to potentially important developments. The AI couldn't describe what had happened — that would be classified — but it could say "here's something you need to come in and see."

Generative AI is expected to augment such processes.

Anshu Roy, CEO of Rhombus Power, believes AI's most powerful intelligence application will be predictive analytics. “That's probably going to be one of the biggest changes in the entire national security realm — the ability to predict what your adversaries might do.”

Rhombus' AI machine pulls over 5,000 data streams in 250 languages collected over 10 years, including global news sources, satellite images and data from cyberspace. It's all open source. “We can track people, we can track objects,” Roy said.

Big players in AI for US intelligence work include Microsoft, which announced on May 7 that it is offering OpenAI's GPT-4 for top-secret networks, though the product must still be accredited for work on classified systems.

A competitor, Primer AI, counts two unnamed intelligence agencies among its customers, which include military services, according to documents posted online for a recent military AI workshop. It offers AI-powered search in 100 languages to “detect emerging signals of breaking events” and help identify “important people, organizations, places” from sources including Twitter, Telegram, Reddit and Discord. Primer lists targeting among the advertised uses of its technology. In a demo at an Army conference just days after the October 7 Hamas attack on Israel, company executives described how their tech separates fact from fiction in the flood of online information from the Middle East.

Primer executives declined to be interviewed.

In the near term, how U.S. intelligence officials use gen AI may matter less than countering how adversaries use it: to pierce U.S. defenses, spread disinformation and try to undermine Washington's ability to read their intentions and capabilities.

And because Silicon Valley drives the technology, the White House is also concerned that any gen AI models adopted by US agencies could be infiltrated and poisoned, something research indicates is a real risk.

Another concern: ensuring the privacy of "U.S. persons" whose data may be embedded in a large language model.

“If you talk to a researcher or developer who is training a large language model, and ask them if it's possible to basically delete one individual piece of information from an LLM and have it forget that — with a robust empirical guarantee of that forgetting — that is not something that's possible,” John Beieler, AI lead at the Office of the Director of National Intelligence, said in an interview.

That's one reason the intelligence community is not in "move fast and break things" mode on gen AI adoption.

“We don't want to be in a world where we move quickly and deploy one of these things, and then realize two or three years from now that they have some information or some effect or some emergent behavior that we did not anticipate,” Beieler said.

This is a concern, for example, if government agencies decide to use AIs to explore bio- and cyber-weapons technology.

William Hartung, a senior researcher at the Quincy Institute for Responsible Statecraft, says intelligence agencies should carefully assess AIs for potential abuses such as unlawful surveillance or a rise in civilian casualties in conflicts.

“All of this comes in the context of repeated instances in which the military and intelligence sectors have hyped ‘miracle weapons’ and revolutionary approaches – from the electronic battlefield in Vietnam to the Star Wars program of the 1980s to the ‘revolution in military affairs’ of the 1990s and 2000s – only to see them fall short,” he said.

Government officials insist they are sensitive to such concerns. Besides, they say, AI missions will vary widely depending on the agency involved. One size does not fit all.

Take the National Security Agency: it intercepts communications. Or the National Geospatial-Intelligence Agency (NGA), whose job involves seeing and understanding every inch of the planet. Then there's measurement and signature intelligence, which multiple agencies use to track threats with physical sensors.

Supercharging such missions with AI is an obvious priority.

In December, the NGA issued a request for proposals for an entirely new type of generative AI model. The goal is to use the imagery it collects — from satellites and at ground level — to harvest precise geospatial intelligence with simple voice or text prompts. Gen AI models don't map roads and railways and "don't understand the basics of geography," Mark Munsell, NGA's director of innovation, said in an interview.

Munsell said at an April conference in Arlington, Virginia, that the U.S. government has currently modeled and labeled only 3 percent of the planet.

Gen AI applications also make a lot of sense for cyber conflict, where attackers and defenders are in constant combat and automation is already underway.

But a lot of vital intelligence work has nothing to do with data science, says Zachary Tyson Brown, a former defense intelligence officer. He believes intel agencies will invite disaster if they adopt gen AI too quickly or too completely. The models don't reason. They merely predict. And their designers can't fully explain how they work.

Not the best tool, then, for matching wits with rival masters of deception.

“Intelligence analysis is usually like the old trope about putting together a jigsaw puzzle, except that someone else is constantly trying to steal your pieces while also dumping pieces from entirely different puzzles into your pile,” Brown recently wrote in an internal CIA journal. Analysts work with “incomplete, ambiguous, often contradictory pieces of partial, unreliable information.”

They rely heavily on instinct, peers and institutional memory.

“I don't see AI replacing analysts anytime soon,” Weissgold said.

Quick life-and-death decisions sometimes must be made based on incomplete data, and current gen AI models are still too opaque.

“I don't think it would ever be acceptable for a president,” Weissgold said, “for the intelligence community to come in and say, ‘I don't know, the black box just told me that.’”

