Israel’s Lavender: What can go wrong when AI is used in military operations?

In this episode of GZERO AI, Taylor Owen, a professor at McGill University's Max Bell School of Public Policy and director of its Centre for Media, Technology and Democracy, examines the Israel Defense Forces' use of an AI system called Lavender to target Hamas operatives. While Lavender reportedly suffers from the kinds of hallucination issues familiar from AI systems like ChatGPT, the cost of a mistake on the battlefield is incomparably more severe.


So last week, six Israeli intelligence officials spoke with an investigative reporter for +972 magazine about what may be the most dangerous weapon in the Gaza war right now, an AI system called Lavender.

As I discussed in an earlier video, the Israeli military has been using AI in its operations for some time. This is not the first time the IDF has used AI to identify targets, but historically those targets had to be vetted by human intelligence officers. After the October 7 Hamas attack, according to the sources for this story, those guardrails were removed and the military authorized its officers to bomb targets identified by the AI system.

I should say that the IDF denies this. In a statement to the Guardian, it said Lavender is only a database intended to cross-reference intelligence sources. If the reporting is accurate, however, it means we have crossed a dangerous Rubicon in how these systems are used in warfare. And I want to acknowledge up front that these are ultimately conversations about systems that take people's lives. That raises debates about whether we should use them at all, and if so, how we use, govern, and monitor them, questions that are both extremely difficult and urgent.

In a sense, these systems and the promises behind them are not new. Companies like Palantir have long promised to wring insight out of ever larger amounts of data. At their core, all of these systems work the same way: users feed them raw data, in this case the Israeli military's records of known Hamas operatives, location data, social media profiles, and cell phone information, and the system uses that data to build profiles of potential militants.

But of course, these systems are only as good as the data they are trained on. "Some of the data they used came from the Hamas-run Ministry of Internal Security, whose employees are not considered militants," said a source who worked with the team that trained Lavender. "Even if you believe these people are legitimate targets, using their profiles to train an AI system means the system is more likely to target civilians." And that seems to be what is happening. The source says Lavender is about 90% accurate, which raises a deeper question about how much accuracy we should expect and demand from these systems. Like any other AI system, Lavender is clearly imperfect, but context matters. If ChatGPT gets things wrong 10% of the time, we can probably live with that. But if an AI system wrongly marks civilians for killing 10% of the time, most people would consider that an unacceptable level of harm.

With the rise of AI systems across nearly every industry, it seems likely that militaries around the world will start adopting technologies like Lavender. Countries around the world, including the US, have earmarked billions of dollars for AI-related military spending, which means we need to update our international laws for the AI era as soon as possible. We need to know how accurate these systems are, what data they are trained on, and how their algorithms identify targets, and we need to monitor how these systems are used. It is not hyperbole to say that new laws in this space will literally be the difference between life and death.

I’m Taylor Owen, and thanks for watching.

