How AI can increase the threat of terrorism


Shutterstock
How AI technology could be used by terrorists to significantly enhance violent attacks through automation, increased precision, and pre-determined targeting.


Whether with the invention of the wheel, electricity, or the internet, revolutionary innovations in technology have fundamentally changed the world and opened doors for world-changing development.

The innumerable benefits associated with such transformative inventions have been clear and irrefutable.

Today in 2023, the world stands at the brink of another enormous change, with artificial intelligence (or AI) entering the commercial market and set to become a predominant influence over virtually every aspect of life in the not-too-distant future.

In recent months, AI has generated considerable international attention. To some extent, this attention has focused on AI's potential to assist humanity in resolving some of the world's greatest challenges – from a cure for cancer to solutions to climate change.

But much of the attention has also focused on the potential pitfalls associated with AI – as a likely catalyst for mass unemployment, to the nightmare scenarios associated with an artificial mind taking the fate of the world into its own ‘hands.’


AI and terrorists

One facet that has not been considered with anywhere near the same level of attention is the prospect of terrorists exploiting AI technology as a means to significantly empower their capabilities and destructive agendas.

In modern history, terrorists have consistently been swift and often highly effective at exploiting new technology to their own advantage.

Few envisioned 20 years ago that social media platforms like Facebook and Twitter would play host not only to families sharing personal vacation photographs but also to apocalyptic terrorists plotting attacks, trading in weapons and recruiting worldwide adherents.


Yet in the space of four years (from 2013 to 2017), the Islamic State (IS) managed to use rudimentary social media platforms to empower its expansion from an Iraq-based terror group to a global terrorist organisation with as many as 100,000 members and hundreds of millions of dollars at its disposal.

Beginning in 2016, IS and other armed groups in Syria also began deploying commercial drones not just to conduct reconnaissance and to help coordinate ground attacks, but also as armed aircraft.

In fact, between 2016 and 2020, at least 440 drone attacks by non-state armed groups were documented around the world. Of those, 433 took place in the Middle East.

In recent years, armed groups have also deployed remote-controlled vehicles and weapons, while IS is known to have experimented with semi-autonomous vehicles and to have used an AI-fuelled "Caliphate Cannon" cyber weapon in several denial-of-service (DoS) attacks.

An SDF fighter monitors, on surveillance screens, prisoners accused of affiliation with the Islamic State (IS) group at a prison in the northeastern Syrian city of Hasakeh on October 26, 2019.

There have already been deadly consequences resulting from such innovation by terrorist groups. And the potential for strategic effect has been clear.

In the past five years, armed groups have dispatched automated drone swarms towards Russia's Hmeymim airbase in western Syria; utilised suicide drones in attempts to assassinate heads of state in Iraq and Venezuela; and launched autonomous underwater drones in international waters towards Israel.


AI — like all technology — will eventually penetrate commercial markets to such an extent that its manipulation and exploitation by malign non-state actors will simply become inevitable. Such a "democratisation" of AI is a matter of time and will raise several serious risks.

Firstly, AI technology could be used by terrorists to significantly enhance violent attacks through automation, increased precision, and pre-determined targeting. Its use will also allow terrorist operatives almost total anonymity, as well as enhanced affordability, flexibility, and reliability.

As the notorious arms control advocacy video 'Slaughterbots' demonstrated, large numbers of small, inexpensive and fully automated drones could one day be launched into a targeted area, pre-programmed to single out and target individuals based on their ethnicity, political views, gender or any number of measurable or detectable characteristics.

Ultimately, AI's integration into violent acts removes almost all need for human expertise and sophistication, along with the ethical and psychological barriers that might otherwise constrain acts of violence wholly reliant on the human body and mind.

AI could also be exploited by terrorists to enable and maximise the impact of cyber attacks. By utilising machine learning, AI-driven cyber operations would be markedly more aggressive, targeted and effective, whether in DoS attacks, malware, ransomware or phishing, or in improving a terrorist's malign use of otherwise semi-legitimate crypto trading to raise funds.

Shutterstock
AI could also be exploited by terrorists to enable and maximise the impact of cyber attacks.

As governments around the world increasingly invest in building 'smart cities' and as twenty-first-century life becomes ever more interconnected by technology, the opportunities for crippling AI-coordinated cyber attacks rise significantly.


Another arena for possible AI exploitation by terrorists lies in the technology's ability to generate highly convincing deepfakes and disinformation, both to destabilise targeted societies and to disseminate propaganda and recruit.

Terror groups like IS and al-Qaeda already use automated bots to manage and share propaganda and to regenerate online accounts after security shutdowns. With AI-driven improvements, terrorists' online effectiveness would increase significantly.

In history, the advents of gunpowder and of nuclear weapons have been described as the first two revolutions in warfare. AI promises to be the third. Unlike gunpowder and nuclear weapons, AI will be commercial and led by the private sector, making it far more threatening from day one.

Technological advancements aside, the challenges and threats posed by terrorists persist, as terror groups survive, adapt, and take advantage of new areas of opportunity.

The most recent wave of terrorism since 9/11 was fuelled in large part by the proliferation of internet access and significant declines in the cost of international travel. However, the introduction of AI into the commercial market and its eventual affordability and ease of use threaten to inflate the terror threat into something altogether more dangerous.

Governments should already be working on anti-AI defences while planning how best to regulate the technology, at least enough to minimise the extent to which malign actors can exploit it for their deadly agendas in the future.
