There is no question that Israeli intelligence agencies have rebounded from their catastrophic failure to anticipate the Hamas terrorist attack of 7 October 2023. Much improved intelligence has helped Israel track down and kill the top leaders of Hamas and Hezbollah, along with several other senior commanders and officials of these militant groups.
How Israel managed to upgrade its intelligence capabilities in such a short period will be a topic of discussion among professionals for many years to come. One capability that should figure in that conversation is artificial intelligence (AI) and how Israel has leveraged it across its intelligence cycle to achieve its military goals in Gaza and elsewhere.
I recently presented on the subject of AI and defence at a conference jointly held by Trends Research & Advisory and the Research Center for Advanced Science and Technology at the University of Tokyo. Before my arrival, I thought I had developed a decent understanding of the promise of AI in the field of defence. I had educated myself about its multiple uses through research and personal engagements with staff in US Central Command (CENTCOM), including its chief technology officer Schuyler Moore.
I had seen how the US Fifth Fleet was employing AI in its challenging maritime security mission. But in Tokyo, I quickly learned from my Japanese colleagues (who are real scientists) that we are barely scratching the surface of what AI can do in our everyday lives—particularly in the world of defence. It’s as exciting as it is scary.
Case study
Let’s go back to the Israel and Fifth Fleet examples and then place them in a broader context. Soon after 7 October, Israel began to rely on AI to generate Hamas targets, and this process accelerated as the war dragged on and the political imperative to downgrade the group’s military capabilities gained urgency.
AI has had its fingerprints all over the Israeli military’s target development. Through a programme called Lavender, the Israeli military’s Unit 8200 has been able to generate a large database of individuals who could be members of Hamas and other militias. Its precision is by no means perfect, but according to Lavender’s operators, it has achieved a 90% accuracy rate. In addition to Lavender, the Israelis have used another AI-based decision support system called the Gospel, which provides recommendations on physical infrastructure rather than individuals.
Israel has also used self-piloting drones for close-quarters indoor combat to target individuals in Gaza and Lebanon and, after engagement, to confirm their killing. The latest example of this kind of AI-assisted engagement is Israel’s use of a drone that flew into a building in Rafah where Hamas leader Yahya Sinwar was allegedly sitting in a chair; he was killed shortly afterwards.
That AI has been able to generate thousands of targets for the Israeli military is a game changer. But how Israeli intelligence officers and their superiors have used that information is profoundly disturbing and potentially contravenes international law. It is one thing to sit on a treasure trove of information, which AI has helped deliver, and another altogether to use it properly and in accordance with international norms and regulations. Israel’s compliance with international humanitarian law in Gaza and Lebanon is suspect at best.
That said, it’s silly to blame AI for Israel’s killing of such a large number of civilians in Gaza and Lebanon. It all starts with human decisions, and Israel’s policy, especially in the earlier phases of the war in Gaza, has been to relax its targeting requirements. AI-assisted attacks are less labour-intensive than “more regular” intelligence operations, which require more human and financial resources as well as legal consultation.
Israel’s indiscriminate bombing risked higher numbers of civilian casualties, but it was approved by the Israeli authorities (though that permission has ebbed and flowed throughout the war, depending on international condemnation and US pressure).