In December 2024, OpenAI, one of the world's leading artificial intelligence (AI) firms, announced a partnership with Anduril, a defence technology firm specialising in autonomous systems of the kind employed in missiles and drones. The two companies are working together on Anduril's system for defending against drone strikes.
Specifically, the partnership is developing a system that uses a swarm of small autonomous drones operating collaboratively on missions. The drones are controlled through an interface powered by a large language model (LLM), which translates operators' natural-language commands into actionable instructions that both pilots and drones can execute.
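To make the idea concrete, here is a minimal sketch of how an LLM can turn a free-form operator command into a machine-readable instruction. It assumes the official OpenAI Python SDK and its chat-completions API; the model name, JSON schema, and command vocabulary are hypothetical illustrations, not details of the actual Anduril interface, which has not been made public.

```python
import json
from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical instruction schema, invented for illustration only.
SYSTEM_PROMPT = (
    "Translate the operator's natural-language command into a JSON object "
    'with keys "action" (one of "patrol", "track", "return"), '
    '"target" (string or null), and "zone" (string or null). '
    "Respond with JSON only."
)

def command_to_instruction(command: str) -> dict:
    """Convert a free-form operator command into a structured instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": command},
        ],
        response_format={"type": "json_object"},  # request strict JSON output
    )
    return json.loads(response.choices[0].message.content)

# Example: a pilot's plain-language order becomes machine-readable.
print(command_to_instruction("Track the unidentified quadcopter over sector B."))
# -> {"action": "track", "target": "unidentified quadcopter", "zone": "sector B"}
```

The design point is simply that the LLM sits between human intent and a fixed instruction format: the downstream control software only ever consumes validated, structured data, never the raw language output.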
Anduril hopes the technology from OpenAI will help its drones better detect, assess, and respond to potentially lethal aerial threats in real time. Before its contract with OpenAI (whose ChatGPT is perhaps the world's best-known LLM), Anduril had been using open-source language models for testing purposes.
This is just one example of how, in an era of escalating geopolitical competition and technological rivalries, AI has emerged as a pivotal force shaping the future of warfare and enhancing the defence capabilities of major global powers.
Tech enters the battle
No longer confined to optimising industrial processes or improving digital services, AI has become a cornerstone of both offensive and defensive military strategies, heralding a new arms race. Powers such as the United States, China, and Russia are racing to harness AI’s potential to redefine the nature of warfare and military deterrence.
Amid this rapid evolution, some big American tech firms have made the controversial decision to supply the US armed forces and their contractors with advanced AI technologies for military applications. This marks a significant shift in these companies' strategic direction and raises profound questions about the future role of AI in warfare.
Other tech giants are involved too, with Google's owner Alphabet, Amazon, and Microsoft all significantly increasing their investments in AI startups. Notably, Google recently removed from its website its long-standing pledge not to develop AI for surveillance or weapons systems.
When asked about the change, Google emphasised the importance of tech firms and governments working together to ensure that AI is developed responsibly, while reaffirming its commitment to safeguarding individuals, strengthening national security, minimising unintended consequences, and preventing unfair biases.
The firm is already under scrutiny for its role in providing cloud services to the US military, a decision that led to protests and resignations from some of its employees. Google says its technologies are not designed to inflict harm, but the head of the Pentagon’s AI division recently disclosed that some of the firm’s AI models may be contributing to US military operations.
This evolving landscape has given rise to a new model of bilateral partnership, in which smaller tech firms collaborate with major corporations to develop advanced AI-powered military solutions. Whether this model is sufficient to secure the long-term strategic superiority of the US, however, remains unclear.
It certainly marks a change in stance. Only a few years ago, many AI researchers in Silicon Valley vehemently opposed working with the military. In 2018, thousands of Google staff protested its involvement in Project Maven, a Pentagon initiative aimed at integrating AI into military intelligence and drone operations. The backlash forced Google to withdraw from the project.
Following Russia's invasion of Ukraine, however, perspectives began to shift. As states increasingly came to regard AI as a transformative technology with critical geopolitical implications, tech firms grew more receptive to military collaboration. Defence contracts offer substantial financial incentives: an attractive source of revenue for AI firms that require extensive capital investment in research and development.