Time for an AI arms control agreement?
AI’s increasing presence on the battlefield is a major concern for strategic stability. In the ongoing conflict in Gaza, the alleged use of AI for targeting should raise alarm bells and motivate greater efforts towards regulation and arms control.
It is only a matter of time before similar concerns become visible in the Indo-Pacific, where many states are aggressively raising their military spending despite economic difficulties. The Gaza example shows that Indo-Pacific states cannot be bystanders in an environment where there are currently no constraints on developing and using military AI.
Targeting humans with AI
In a recent report, +972 Magazine – an online publication run by Israeli and Palestinian journalists – drew on anonymous insider interviews to claim that the Israel Defense Forces (IDF) have been using an AI-based system called “Lavender” to identify human targets for its operations in Gaza. Worryingly, the same report claimed that “human personnel often served only as a ‘rubber stamp’ for the machine’s decisions.”
Responding to these claims, the IDF issued a statement to clarify that it “does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist.”
However, a report by The Guardian has cast doubt on the IDF’s rebuttal, citing video footage from a 2023 conference in which an IDF presenter described a target-identification tool that closely resembles Lavender.
The reality is that we lack the means to independently verify either side’s claims, and this is a significant concern given that militaries continue to explore how to integrate AI to augment existing capabilities and develop