The AI revolution is already here.

In just the past few months, the battlefield has changed like never before, with science fiction visions finally coming true. Robotic systems have been set loose, capable of destroying targets on their own. Artificial intelligence systems are determining which individual humans are to be killed in war, and even how many civilians are to die alongside them. And, making it all the more challenging, this line has been crossed by America's allies.

Ukraine's front lines are saturated with thousands of drones, including Kyiv's new Saker Scout quadcopters that can "find, identify and attack 64 types of Russian 'military objects' on their own." They are designed to operate without human supervision, launched to hunt in areas where Russian jamming prevents other drones from working.

Meanwhile, Israel has opened another phase of algorithmic warfare as it seeks retribution for the October 7 Hamas attacks. As revealed by members of the IDF to +972 Magazine, "The Gospel" is an AI system that sifts through millions of items of data, from drone footage to seismic readings, and marks buildings in Gaza for destruction by airstrikes and artillery. Another system, called Lavender, does the same for people, using everything from cellphone usage to WhatsApp group membership to rate the likelihood of Hamas membership on a scale of 1 to 100. Those with high scores can then be tracked by a system called "Where's Daddy?", which sends a signal when they return to their homes, where they can be bombed.
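Public reporting describes Lavender only at this level of abstraction. As a purely illustrative aid, here is a minimal Python sketch of how a feature-weighted scoring system of this general kind could work; every feature name, weight, and rule below is invented for illustration and implies nothing about the actual system's design.

```python
# Hypothetical sketch of a feature-weighted "membership score" of the general
# kind described above. All feature names and weights are invented; nothing
# here reflects Lavender's actual design or inputs.

FEATURE_WEIGHTS = {
    "shared_device_with_flagged_user": 35.0,   # hypothetical weight
    "member_of_flagged_whatsapp_group": 25.0,  # hypothetical weight
    "frequent_phone_number_changes": 20.0,     # hypothetical weight
    "calls_to_flagged_contacts": 20.0,         # hypothetical weight
}

def membership_score(features: dict[str, bool]) -> int:
    """Map observed behavioral features to a 1-100 score."""
    raw = sum(w for name, w in FEATURE_WEIGHTS.items() if features.get(name))
    return max(1, min(100, round(raw)))

profile = {
    "member_of_flagged_whatsapp_group": True,
    "calls_to_flagged_contacts": True,
}
print(membership_score(profile))  # -> 45
```

The point of the sketch is not the arithmetic but the pattern it makes visible: mundane digital traces become a single number, and a human-chosen cutoff on that number becomes a life-or-death decision.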

Such systems are just the beginning. The cottage industry of activists and diplomats who once tried to preemptively ban "killer robots" failed for the same reason that open-ended bans on AI research did: the tech is simply too useful. Every major military, including our own, is at work on its equivalents or better.

There is an ongoing debate in security studies about whether such technologies are evolutionary or revolutionary. In many ways, it has become akin to medieval scholars debating how many angels could dance on the head of a pin while the printing press was changing the world around them. It is really about what one chooses to focus on. Imagine, for example, writing about the Spanish Civil War in the 1930s. You could note the continued use of rifles and trenches by both sides and argue that little was changing. Or you could see how tanks, radios, and airplanes were moving forward in ways that would not only change the face of warfare, but also raise new questions for politics, law, and ethics. (And even art: think of the aerial bombardment of Guernica, made famous by Picasso.)

What is indisputable is that the economy is undergoing a revolution through AI and robotics. And the industrial revolutions of the past dramatically changed not only the workplace, but also war and the politics surrounding it. World War I brought mechanized slaughter, while World War II ushered in the nuclear age. It will be the same with AI.

Yet AI is different from every other new technology in history. Its systems grow ever more intelligent and autonomous, literally by the day. No one ever had to debate what a bow and arrow, a steam engine, or an atomic device should be allowed to do on its own. Nor did they face the "black box" problem, where the scale of data and complexity means that neither the machine nor its human operator can effectively explain why it did what it did.

And we're just at the beginning. AI's battlefield applications are rapidly expanding from drones to information warfare and beyond, and each new type raises new questions. Dilemmas arise even when the AI merely advises a human commander. Such "decision aids" offer dramatic advantages of speed and scale: the IDF systems sift through millions of items of data, generating target lists 50 times faster than a team of human intelligence officers. The scale of the accompanying carnage grows exponentially. Aided by Gospel, Israeli forces struck more than 22,000 targets in the first two months of the Gaza war, nearly five times more than in a similar conflict a decade earlier. And Lavender reportedly "marked some 37,000 Palestinians as suspected 'Hamas militants,' most of them junior, for assassination." It also calculated the potential collateral damage of each strike, with IDF members reporting pre-authorized allowances of between 15 and 100 expected civilian casualties.
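To put those figures in concrete terms, and to show the kind of human-set threshold the reporting describes, here is a short illustrative calculation. The gating function is a hypothetical sketch of a generic decision aid; it does not represent the IDF's actual logic or parameters.

```python
# Back-of-envelope scale from the figures above: 22,000 targets in two months.
targets, days = 22_000, 61
print(f"{targets / days:.0f} targets per day")  # -> 361 targets per day

# Hypothetical sketch of the human-set gate the reporting describes: a strike
# recommendation passes only if estimated civilian casualties fall under a
# pre-authorized allowance (reportedly 15-100, chosen by people, not the model).
def strike_cleared(estimated_civilian_casualties: int, allowance: int) -> bool:
    return estimated_civilian_casualties <= allowance

print(strike_cleared(12, allowance=15))  # True
print(strike_cleared(40, allowance=15))  # False
```

A target list generated at hundreds of entries per day leaves human reviewers seconds per decision, which is exactly why the choice of the allowance number matters more than any property of the algorithm itself.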

The problems with AI in war go beyond the technical. Will AI-powered strategies yield the desired results, or will we live out the moral of every science fiction story, where a mechanical servant ultimately harms its human master? Indeed, Israel seems bent on testing this question in real time. As one IDF officer who used Lavender put it: "In the short term, we are safer, because we hurt Hamas. But I think we're less safe in the long run. I see how all the bereaved families in Gaza – which is almost everyone – will raise the motivation for [people to join] Hamas 10 years down the line. And it will be much easier for [Hamas] to recruit them."

The political, ethical and legal quagmire surrounding AI in warfare demands urgent attention, requiring a rethinking of everything from our training to our acquisitions to our doctrine. But ultimately we must recognize the one aspect that is not changing: human accountability. While it is easy to blame faceless algorithms for what machines do, a human ultimately stands behind every important decision. It is akin to how driverless-car companies try to dodge responsibility when their poorly designed and deceptively marketed machines kill people on our streets. In systems like Gospel and Lavender, for example, it was humans, not machines, who decided to loosen the standards of concern for civilian casualties and to tolerate a 10 percent error rate.

Just as in business, we need frameworks to govern the use of AI in warfare. These must now include not only mitigating risks, but also ensuring that the people behind these systems are forced to take better care in both their design and their use, including the understanding that they are the ones who are politically and legally responsible in the end. The same applies to America's partners in industry and geopolitics, who are pushing these very frontiers now, enabled by our budget dollars.

The future of warfare hangs in the balance, and the choices we make today will determine whether AI triggers a new era of digital destruction.

P.W. Singer is a best-selling author of such books on war and technology as Wired for War, Ghost Fleet, and Burn-In; a senior fellow at New America; and co-founder of Useful Fiction, a strategic narrative company.
