Israel’s use of AI in its war on Gaza is part of a global trend

AI technology has had a huge impact on warfare in recent years, yet the systems are fallible, and concerns are growing, not least in terms of ethics and legality

An Israeli soldier uses an unmanned surveillance drone to monitor Palestinians in the occupied West Bank city of Hebron. HAZEM BADER / AFP

The effects of Artificial Intelligence (AI) technologies have been felt in every aspect of life in recent years, and the battlefield is no exception: AI is fast transforming the way wars are fought.

The frontlines of conflicts have become live laboratories for those seeking to use algorithms and datasets to defeat their enemies and protect their troops. In this, Israeli forces are among the most expansive experimenters, using AI in Gaza, Lebanon, and elsewhere.

Recent reports from both Western and Israeli media highlight the country’s rapid and advanced deployment of AI systems, with the Israeli military increasingly leaning on sophisticated tools to support its campaigns, prompting ethical and legal concerns.

According to investigations by The New York Times and other media outlets, Israel employs AI capable of analysing images and texts, intercepting electronic communications, and even making semi-autonomous decisions in combat, with Israel’s secretive Unit 8200 playing a pivotal role.

Processing data

Known as ‘The Studio,’ Unit 8200 is the electronic arm of Israel’s Military Intelligence Directorate (Aman). It employs advanced technology and specialists whose programmes monitor and process vast amounts of data from conflict zones. It is the equivalent of the US National Security Agency or Britain’s GCHQ, and is the largest single military unit in the Israeli army.

Unit 8200 engages in signals intelligence, code-breaking, and cyber warfare, sometimes working in partnership with private sector firms. Processing millions of communications daily, it extracts intelligence from this vast data stream for operations that are closely coordinated with Sayeret Matkal—a special forces unit specialising in reconnaissance, counterterrorism, and special operations, not dissimilar to Britain’s Special Air Service (SAS).

Given that Israel controls the telecommunications infrastructure in the West Bank and Gaza, much of Unit 8200's data comes from Palestinian territories. The phone and mobile networks in these territories are effectively integrated into Israeli systems, making surveillance both routine and highly efficient.

Unit 8200 is also known to be actively engaged in Iran. The notorious Stuxnet virus and variants, which disrupted Tehran's nuclear programme for several years, are believed to have originated in 'The Studio,' whose surveillance and wiretapping are assumed to be active across the entire region, including most Arab nations.

Faces and dialect

In March 2024, The New York Times reported that Unit 8200 had deployed facial recognition technologies to identify and monitor Palestinians in Gaza. These systems can analyse images even when faces are blurred or partially obscured. Yet Palestinians have been wrongfully detained after being misidentified by this emerging technology, raising concerns over its reliability.

Other reports suggest that Israel's temporary checkpoints along the Gaza border are fitted with AI systems that use advanced cameras to identify individuals and detect anomalies. These, too, are fallible: one reportedly misidentified a water pipe as a firearm, showing that oversight by human analysts is still required.

The New York Times also reported that one of the unit's most recent initiatives was the development of a large language model (LLM) designed to comprehend texts written in various Arabic dialects, addressing a longstanding intelligence gap created by the limited availability of dialectal data compared with Modern Standard Arabic.

Constructing this LLM was a huge undertaking, beginning with the collection of vast quantities of fragmented dialectal data (unlike Modern Standard Arabic, Arabic dialects have historically lacked comprehensive academic documentation) from sources such as social media platforms, user comments, TV and film scripts, and snippets of everyday conversation.
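
To give a sense of the data pooling this step involves, here is a minimal Python sketch that merges fragmented text from heterogeneous sources into a single tagged corpus. The directory layout, source names, and output file are hypothetical placeholders; nothing here reflects Unit 8200's actual tooling.

```python
# Hypothetical sketch of pooling fragmented dialect text from heterogeneous
# sources into one tagged corpus. All paths and source names are placeholders.
import json
from pathlib import Path

SOURCES = {
    "social_media": Path("raw/social_media"),
    "user_comments": Path("raw/comments"),
    "scripts": Path("raw/tv_film_scripts"),
    "conversation": Path("raw/speech_transcripts"),
}

def build_corpus(out_path: str = "dialect_corpus.jsonl") -> int:
    """Merge every .txt file under each source directory into one JSONL
    corpus, tagging each snippet with its provenance."""
    count = 0
    with open(out_path, "w", encoding="utf-8") as out:
        for source, root in SOURCES.items():
            for path in root.glob("**/*.txt"):
                for line in path.read_text(encoding="utf-8").splitlines():
                    snippet = line.strip()
                    if snippet:
                        record = {"text": snippet, "source": source}
                        out.write(json.dumps(record, ensure_ascii=False) + "\n")
                        count += 1
    return count

if __name__ == "__main__":
    print(f"Wrote {build_corpus()} snippets")
```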

A picture taken from southern Israel near the border with the Gaza Strip on December 6, 2023, shows an Israeli soldier carrying a drone. JACK GUEZ / AFP

Since the data often included typographical errors, emojis, and other non-standard symbols, it required 'cleaning' to strip out the noise while preserving the colloquial character of the text. Developers then adapted existing models or designed new ones capable of accommodating the diversity of Arabic dialects, treating each dialect as a distinct language or as a branch within a broader linguistic network, with the resulting systems capable of multilingual learning.
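
A minimal sketch of what such a cleaning step might look like, using only Python's standard library. The specific normalisation rules are illustrative assumptions, not the unit's actual pipeline.

```python
# Hypothetical sketch of the 'cleaning' step: normalising noisy social-media
# text while keeping its colloquial character. Rules are illustrative only.
import re
import unicodedata

EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")  # common emoji/symbol blocks
DIACRITICS = re.compile("[\u0617-\u061A\u064B-\u0652]")     # Arabic short vowels and marks

def clean_dialectal_text(text: str) -> str:
    """Normalise a raw snippet without flattening its dialectal features."""
    text = unicodedata.normalize("NFKC", text)       # unify Unicode presentation forms
    text = EMOJI.sub(" ", text)                      # strip emojis and dingbats
    text = DIACRITICS.sub("", text)                  # drop optional diacritics
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)       # collapse elongations ("كتييييير" -> "كتيير")
    return re.sub(r"\s+", " ", text).strip()         # tidy whitespace

print(clean_dialectal_text("الوضع كتيييير صعب 😅😅"))  # -> الوضع كتيير صعب
```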

The subsequent training phase demanded vast computational power and considerable financial investment. The model was then tested in real-world scenarios, with native speakers evaluating its ability to understand humour, emotion, and everyday discourse. An initial version was developed and constantly refined thereafter, which is especially important given the fluid and rapidly evolving nature of dialectal Arabic.
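
To illustrate what 'adapting existing models' can involve, here is a hedged sketch of fine-tuning a multilingual base model on the cleaned corpus, assuming a Hugging Face-style stack. The base-model identifier, file names, and hyperparameters are placeholders, not details from the reporting.

```python
# Hypothetical sketch: continued pretraining of a multilingual model on the
# cleaned dialect corpus. Model name and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "some-multilingual-base-model"  # placeholder identifier

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
if tokenizer.pad_token is None:              # causal LMs often lack a pad token
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# The JSONL corpus from the pooling step: one {"text", "source"} record per line.
dataset = load_dataset("json", data_files={"train": "dialect_corpus.jsonl"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text", "source"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dialect-llm",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

A production system would likely go further, for instance with per-dialect tags or adapters; the point is only that adaptation builds on an existing multilingual base rather than starting from scratch.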

After assassinating Hezbollah Secretary-General Hassan Nasrallah in September 2024, the Israeli military reportedly used AI to monitor Arab public sentiment across social media platforms to assess the likelihood of a military response. Unverified reports even suggest that Israeli AI can now forecast reactions before they materialise.

Raising dilemmas

Flaws notwithstanding, these AI tools have supercharged Israel's intelligence analysis process, offering a crucial time advantage on the battlefield, yet the integration of AI into military operations raises big dilemmas.

While Israel maintains that it operates according to strict legal standards, the opacity of its operations and the absence of any robust civilian oversight have alarmed international observers and human rights organisations.

Many worry that Israel's over-reliance on automated systems could lead to the delegation of life-and-death decisions to machines, bypassing full human judgment and potentially resulting in serious breaches of international humanitarian law.

As AI's militarisation accelerates, some analysts have drawn comparisons to the nuclear arms race of the Cold War, with automated weapon systems as the new object of competition. The pace of change is unprecedented, and the United Nations and international law are struggling to keep up. As such, no comprehensive regulatory framework for the use of such systems currently exists.

Falling on deaf ears

Calls for a comprehensive ban on the development of autonomous weapons have been made for over a decade. For instance, in 2017, the Future of Life Institute published an open letter to the UN. Signed by 116 chief executives and founders of AI and robotics companies, it urged immediate international action to avert an arms race in autonomous weapons. But these appeals have fallen on deaf ears.

Aside from privacy protections such as Article 17 of the International Covenant on Civil and Political Rights, which bears on AI only in the context of surveillance, there is no comprehensive international legal framework governing the use of AI in armed conflict.

Definitions of what constitutes 'autonomous weapons' vary widely. Back in 2011, the UK's Ministry of Defence described them as systems "capable of understanding higher-level intentions and making decisions to achieve objectives with a level of understanding comparable to that of a human".

In 2023, the US Department of Defence described an autonomous weapon as "a system that, once activated, can select and engage targets without further intervention by a human operator". Examples of such systems include air defence platforms like Israel's Iron Dome, Germany's MANTIS, or Sweden's LEDS-150.

The Saudi Council of Ministers approved the establishment of the International Centre for AI Research and Ethics in Riyadh in July 2023. The Kingdom is committed to fostering responsible AI development. Shutterstock

Race to the bottom?

The real challenge lies in formulating future definitions that can keep pace with advances, including AI systems that replicate human cognitive processes and make complex decisions autonomously. Yet despite the urgency, there is no serious multilateral willingness to reach a unified position on AI military regulation.

The UN Convention on Certain Conventional Weapons (CCW) includes an amended protocol, overseen by a Group of Governmental Experts, which meets annually to address such concerns. In May 2023, it produced a draft report stressing the need for human oversight of autonomous weapons, but did not establish binding regulatory measures.

Russian President Vladimir Putin has declared that "whoever leads in AI will rule the world," highlighting the fierce global competition to dominate the field. China is due to spend $150bn on AI, significantly more than the US, the UK, or Russia, whereas Saudi Arabia plans to establish a $40bn AI investment fund.

Alongside autonomous software has come the proliferation of autonomous unmanned aerial vehicles (drones), which are primarily used to gather data and intelligence and to attack enemy targets. Between 2010 and 2020, the US conducted around 14,000 drone strikes in Afghanistan.

These, too, require human oversight. To date, no fully autonomous weapon systems are known to have been deployed in combat without meaningful human oversight, but the deepening incorporation of AI into military operations suggests this is a case of 'when', not 'if'. It may even be over Gaza.
