The promise and danger of AI in the military

Militaries around the world will face competitive pressures to increase their reliance on AI, but to avoid catastrophe, its use must be guided by laws, rules, and norms

Grace Russell

There is no question that Israeli intelligence agencies have rebounded from their catastrophic failure to anticipate the Hamas terrorist attack of 7 October 2023. Much-improved intelligence has helped Israel track down and kill the top leaders of Hamas and Hezbollah, along with several other senior commanders and officials of these militant groups.

How Israel has been able to upgrade its intelligence abilities in such a short period of time will be a topic of discussion among professionals for many years to come. One factor, or capability, that should be included in this conversation is artificial intelligence (AI) and how Israel has leveraged it in its intelligence cycle to achieve its military goals in Gaza and elsewhere.

I recently presented on the subject of AI and defence at a conference jointly held by Trends Research & Advisory and the Research Center for Advanced Science and Technology at the University of Tokyo. Before my arrival, I thought I had developed a decent understanding of the promise of AI in the field of defence. I had educated myself about its multiple uses through research and personal engagements with staff at US Central Command (CENTCOM), including its chief technology officer, Schuyler Moore.

I had seen how the US Fifth Fleet was employing AI in its challenging maritime security mission. But in Tokyo, I quickly learned from my Japanese colleagues (who are real scientists) that we are barely scratching the surface of what AI can do in our everyday lives—particularly in the world of defence. It’s as exciting as it is scary.

Case study

Let’s go back to the Israel and Fifth Fleet examples and then place them in a broader context. Soon after 7 October, Israel began to rely on AI to generate Hamas targets, and this process accelerated as the war dragged on and the political imperative to downgrade the group’s military capabilities gained urgency.

AI has had its fingerprints all over the Israeli military’s target development. Through a programme called Lavender, the Israeli military’s Unit 8200 has been able to generate a large database of individuals who could be members of Hamas and other militias. Its precision is by no means perfect, but according to Lavender’s operators, it has achieved a 90% accuracy rate. In addition to Lavender, the Israelis have used another AI-based decision support system called the Gospel, which provides recommendations on physical infrastructure rather than individuals.

Israel has also used self-piloting drones for close-quarters indoor combat to target individuals in Gaza and Lebanon and, after engagement, to confirm their killing. The latest example of this type of AI-assisted engagement is Israel's use of a drone that flew into a building in Rafah where Hamas leader Yahya Sinwar was allegedly sitting in a chair; he was killed shortly afterwards.

That AI has been able to generate thousands of targets for the Israeli military is a game changer. But how Israeli intelligence officers and their superiors have used that information is profoundly disturbing and potentially contravenes international law. It is one thing to sit on a treasure trove of information, which AI has helped deliver, and another altogether to use it properly and in accordance with international norms and regulations. Israel’s compliance with international humanitarian law in Gaza and Lebanon is suspect at best.

That said, it’s silly to blame AI for Israel’s killing of such a large number of civilians in Gaza and Lebanon. It all starts with human decisions, and Israel’s policy, especially in the earlier phases of the war in Gaza, has been to relax the targeting requirements. AI-assisted attacks are less labour-intensive, whereas “more regular” intelligence operations require more human and financial resources as well as legal consultation.

Israel’s indiscriminate bombing risked higher numbers of civilian casualties, but it was all approved by the Israeli authorities (although that permission has ebbed and flowed throughout the war depending on international condemnation and US pressure).

That AI has generated thousands of targets for the Israeli military is a game changer. But how Israel has used that information is profoundly disturbing.

But beyond Israel's fight against Hamas and Hezbollah, the automation of aspects of collection and processing is here to stay. Look at US Naval Forces Central Command, the naval branch of CENTCOM, and how it has used AI to secure the regional waters under its responsibility. Task Force 59 has been at the forefront of this effort, enhancing its situational awareness with AI-assisted unmanned systems that create and update, in real time, a common operational picture derived from multiple sensors.

Technical, legal and moral challenges

Militaries around the world will increasingly rely on AI, both in defensive and offensive operations, to meet their goals. But several challenges lie ahead, some technical, others legal and moral.

As impressive as AI's abilities are, they are not without flaws (which developers will continue to address, just like any other technology). For one, AI is not as smart as it seems. Say you give AI the task of processing an image. If the input diverges even slightly from the data the model was trained on, it will struggle or fail to identify the content accurately. All it takes is poor lighting, an oblique angle, or a partially obscured object to confuse the model.

Those weaknesses can be remedied through more comprehensive and specific training for the software, but the point here is that AI will not go beyond what it is specifically asked to do. Human authorship and instructions are still critical.
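
To make this brittleness concrete, here is a minimal sketch, not drawn from any military system, of how a simple change in lighting can shake an off-the-shelf image classifier. It assumes PyTorch and torchvision are installed and that a local file named "example.jpg" exists; the file name and the brightness factor are hypothetical choices made purely for illustration.

```python
# Minimal sketch: a small change in lighting can lower an image classifier's
# confidence or flip its label. Assumes PyTorch/torchvision and a local file
# "example.jpg" (hypothetical path); this is not any real system's code.
import torch
from PIL import Image, ImageEnhance
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top_class(img: Image.Image) -> tuple[int, float]:
    """Return the model's top class index and its confidence."""
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
        probs = torch.softmax(logits, dim=1)[0]
    conf, idx = probs.max(dim=0)
    return idx.item(), conf.item()

original = Image.open("example.jpg").convert("RGB")
darkened = ImageEnhance.Brightness(original).enhance(0.3)  # simulate poor lighting

print("original:", top_class(original))
print("darkened:", top_class(darkened))  # confidence often drops, or the label changes
```

Nothing about the scene has changed except the lighting, yet the model can no longer be trusted to give the same answer; that is the gap between pattern-matching and understanding.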

As hard-working as AI is, it can't multitask. For example, in intelligence operations, a human can identify a target, decide which weapon system is most suitable for engagement, predict its path, and then finally strike the target. One AI system cannot do all of these things on its own simultaneously. A combination of AIs could, with separate models performing distinct tasks, but such simultaneity is technologically challenging and hugely expensive. We're not there yet.

Said Khatib/AFP
An Israeli quadcopter drone flying over Palestinian demonstrations near the border with Israel, east of Khan Yunis in the southern Gaza Strip, in 2018.

AI also can neither put things in context nor distinguish between correlation and causation. Only humans can. Take that image AI is trying to recognise: it is a mystery to the technology. All AI can detect are the textures and gradients of the image's pixels, without knowing what they are or what they mean. Put those same gradients in another setting, and AI will misidentify portions of the picture. AI is good at spotting patterns, but it cannot explain why they occur or what their effects are. What's logical to an AI model could be perfectly illogical or simply irrelevant to a human.

How AI makes decisions, which is linked to the above challenge, is another important weakness and potential problem. Much of what occurs inside an AI system is a black box, and there is very little a human can do to understand how the system reaches its decisions. This is a major issue for high-risk systems, such as those that make engagement decisions or whose output feeds into critical decision-making processes. Being able to audit a system and learn why it made a mistake is legally and morally important.
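
One modest, practical answer to the black-box problem is to record every recommendation a system makes so a human can review it afterwards. The sketch below is an illustrative assumption of what such an audit trail might look like in principle; the field names, confidence threshold, and log file are invented for this example and do not describe any real military or intelligence system.

```python
# Minimal sketch of an audit trail around a model's recommendation, so a human
# reviewer can later reconstruct what the system saw and what it suggested.
# All names, thresholds, and fields here are illustrative assumptions only.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    timestamp: float        # when the recommendation was produced
    input_summary: str      # short description or hash of the input
    recommendation: str     # what the system suggested
    confidence: float       # the model's own confidence score
    reviewed_by_human: bool  # whether an analyst has signed off

def log_recommendation(record: AuditRecord, path: str = "audit_log.jsonl") -> None:
    """Append one recommendation to an append-only audit log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a low-confidence recommendation is flagged for human review.
record = AuditRecord(
    timestamp=time.time(),
    input_summary="sensor-frame-4821 (illustrative)",
    recommendation="possible match",
    confidence=0.61,
    reviewed_by_human=False,
)
log_recommendation(record)
if record.confidence < 0.9:
    print("Below threshold: route to a human analyst before any action is taken.")
```

An append-only log of this kind does not explain what happened inside the model, but it at least preserves the evidence needed to assign responsibility and investigate mistakes after the fact.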

Growing reliance

Yet despite these imperfections, challenges, and uncertainties, militaries around the world will face competitive pressures to increase their reliance on AI. The desire to gain a decision advantage in peacetime, crisis, and war will always be there. Nobody wants to fall behind, especially in this environment of intense great-power competition.

Through better education and training, some of AI's weaknesses and risks could be remedied and mitigated. In the military domain, there has to be greater human supervision and intervention to reduce the likelihood and contain the consequences of AI failures. Humans must define standards and requirements for AI systems used in decision-making contexts. They also shouldn't overburden AI with tasks that are better suited to human judgment, including interpretation and analysis of, for example, an adversary's intentions.

The mission of AI education cannot be limited to individual organisations and countries. It has to be an international process and conversation. Like all technologies before it, AI has to be guided by laws, rules, and norms to avoid catastrophic consequences. A global summit would be a good start.
