Why the Pentagon is turning to AI startups

AI startups are offering cutting-edge solutions that slow-moving legacy contractors can't seem to match

Nash Weerasekera

In December 2024, OpenAI—one of the world’s leading AI firms—announced a partnership with Anduril, a defence technology firm specialising in autonomous systems of the kind employed in missiles and drones. The two companies are working together on Anduril’s system for defending against drone strikes.

Specifically, the partnership is developing a system that uses a swarm of small autonomous drones operating collaboratively on missions. These drones are controlled through an interface powered by a large language model (LLM), which translates natural-language commands into actionable instructions that both pilots and drones can execute.
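
To make the interface idea concrete, here is a minimal Python sketch of the pattern, not Anduril's or OpenAI's actual system: the LLM call is stubbed out so the example runs offline, and names such as TaskOrder and the grid references are hypothetical.

```python
import json
from dataclasses import dataclass

@dataclass
class TaskOrder:
    drone_id: str
    action: str       # e.g. "observe", "intercept", "return"
    target_grid: str  # map grid reference

def llm_translate(command: str) -> str:
    """Stand-in for a real LLM call: maps free text to structured JSON."""
    # A production system would prompt a model with a strict output schema.
    return json.dumps([
        {"drone_id": "d1", "action": "observe", "target_grid": "NV-42"},
        {"drone_id": "d2", "action": "intercept", "target_grid": "NV-42"},
    ])

def parse_orders(command: str) -> list[TaskOrder]:
    return [TaskOrder(**raw) for raw in json.loads(llm_translate(command))]

for order in parse_orders("Watch grid NV-42 and intercept any incoming drone"):
    print(order)
```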

Anduril hopes the technology from OpenAI will help its drones better detect, assess, and respond to potentially lethal aerial threats in real time. Before its contract with OpenAI (whose ChatGPT is perhaps the world’s best-known LLM), it had been using open-source language models for testing purposes.

This is just one example of how—in an era of escalating geopolitical competition and technological rivalries—Artificial Intelligence (AI) has emerged as a pivotal force shaping the future of warfare and enhancing the defence capabilities of major global powers.

Tech enters the battle

No longer confined to optimising industrial processes or improving digital services, AI has become a cornerstone of both offensive and defensive military strategies, heralding a new arms race. Powers such as the United States, China, and Russia are racing to harness AI’s potential to redefine the nature of warfare and military deterrence.

Amid this rapid evolution, some big American tech firms have made the controversial decision to supply the US armed forces and its military contractors with advanced AI technologies for military applications. This marks a significant shift in the companies’ strategic direction and raises profound questions about the future role of AI in warfare.

AFP
An Anduril Ghost X drone is carried by a US soldier at the Hohenfels training area in southern Germany.

Other tech giants are involved, too, with Google’s owner Alphabet, Amazon, and Microsoft all significantly increasing their investments in AI startups. Notably, Google recently removed from its website its long-standing pledge not to develop AI for surveillance or weapons systems.

When asked about it, Google emphasised the importance of tech firms and governments working together to ensure that AI technologies are developed responsibly, while reaffirming its commitment to safeguarding individuals, strengthening national security, minimising unintended consequences, and preventing unfair biases.

The firm is already under scrutiny for its role in providing cloud services to the US military, a decision that led to protests and resignations from some of its employees. Google says its technologies are not designed to inflict harm, but the head of the Pentagon’s AI division recently disclosed that some of the firm’s AI models may be contributing to US military operations.

This evolving landscape has given rise to a new model of bilateral partnership, whereby smaller tech firms collaborate with major corporations to develop advanced AI-powered military solutions. Whether this model is sufficient to secure the long-term strategic superiority of the US remains unclear.

It certainly marks a change in stance. Only a few years ago, many AI researchers in Silicon Valley vehemently opposed working with the military. In 2018, thousands of Google staff protested its involvement in Project Maven, a Pentagon initiative aimed at integrating AI into military intelligence and drone operations. The backlash forced Google to withdraw from the project.

Following Russia's invasion of Ukraine, however, perspectives began to shift. With states increasingly regarding AI as a transformative technology with critical geopolitical implications, tech firms grew more receptive to military collaboration. Contracts offered substantial financial incentives: an attractive source of revenue for AI firms, which need extensive capital for research and development.

Only a few years ago, many AI researchers vehemently opposed working with the military, but things have now changed considerably

The Last Supper

In the autumn of 1993, senior executives from America's big defence companies gathered for a highly confidential dinner at the Pentagon hosted by then-Secretary of Defence Les Aspin. Later dubbed 'the Last Supper,' it set in motion a wave of mergers and acquisitions, fundamentally reshaping the US military-industrial sector.

The Cold War had ended, and the US needed to cut its defence budget significantly in pursuit of the much-vaunted "peace dividend", redirecting money towards infrastructure projects and healthcare. From 1991 to 1996, US defence spending fell by more than 15%, straining defence firms that relied on government contracts.

Seated next to Aspin at the dinner was Norman Augustine, boss of Martin Marietta, who began by asking his host why they were meeting. "You'll know in 15 minutes," said Aspin. "And you probably won't like what you hear." Aspin and his deputy William Perry then led a sobering discussion. Martin Marietta would later merge with Lockheed to form Lockheed Martin, a firm worth around $140bn at the end of 2024.

Perry's data was stark and pointed to a drastic reduction in the number of defence firms that the US government could sustain due to budget cuts. He explained that in key sectors—such as fighter jets—only three companies could be supported, while in others—such as tanks—only one would survive. 

It became clear that the government would not step in to save struggling firms. Rather, the market would determine the winners. Augustine later summarised the message as: "You have to merge your companies, or you're out of the market."

Alongside Martin Marietta's merger with Lockheed, Boeing bought McDonnell Douglas (creating a company today worth $135bn), while Northrop bought Grumman to create Northrop Grumman (valued at almost $80bn at the end of 2024). Numerous smaller defence firms were absorbed into larger entities. By the end of the decade, the number of big defence contractors had gone from 15 to five.

The US government had subtly encouraged consolidation, with the Department of Justice waving the mergers through on national security grounds, but the shift came at a cost. Bureaucratic compliance was prioritised over innovation, risk-taking, and technological breakthroughs. Arguments between engineers and bean-counters were increasingly won by the latter.

With fixed profit margins, there were fewer incentives to seek cost-effective solutions or technological advances. The industry became less dynamic and adaptable.

Today, the US government is still the largest client for AI startups, particularly those with military applications. Alongside traditional defence giants, these emerging companies compete for lucrative Pentagon contracts.

The five biggest contractors—Lockheed Martin, Raytheon, Northrop Grumman, General Dynamics, and Boeing—still dominate an industry built around long-term financial stability. Unlike the volatile commercial sector, military contracts provide funding over several years, which keeps the firms solvent. With the increasing importance of AI, will the tech firms upend the industry much as the Last Supper did?

Follow the money

In April 2023, a Stanford University study revealed a sharp increase in US federal spending on AI throughout 2022. By June 2023, the House Appropriations Committee was prioritising legislation to integrate AI into an expanding array of military and government programmes. By November 2023, both the Department of Defence (DoD) and the State Department had shifted from experimentation to implementation.

US federal spending on AI went from $261mn in 2022 to $675mn in 2023, while the potential value of AI-related contracts surged to more than $4.5bn. The number of DoD contracts related to AI jumped from 254 to 657. Worth around $4.3bn, these accounted for some 95% of the potential value of all federal AI contracts. By contrast, agencies such as NASA and the Department of Health and Human Services spent just a fraction of that on AI.

This surge in funding has reshaped the defence AI market. The number of companies securing contracts worth $10mn or more went from four to 205. The market remains highly fragmented, however. Most of these AI technology firms must operate within narrow contractual scopes tied to a single funding agency or department.

The Pentagon has launched a series of ambitious initiatives, including the Multiple Award Contract for Artificial Intelligence, projected to be the largest AI-related government contract at $15bn over the next decade. It aims to develop an advanced data analytics platform capable of processing vast amounts of military and intelligence data with unparalleled speed and precision.

Another major project is Replicator, which focuses on mass-producing low-cost autonomous drones powered by AI. Designed to enhance operational efficiency, the initiative seeks to streamline command and guidance operations while fortifying drone networks against electromagnetic attacks.

In 2023, the value of AI-related contracts issued by the US federal government was $4.5bn, of which the Department of Defence issued $4.3bn

Swift yet hindered

Given the scale and complexity of these initiatives, together with the breakneck pace of AI innovation, the traditional US defence contracting system is proving inadequate and bureaucratic. The DoD is therefore increasingly turning to AI startups that offer cutting-edge solutions, bypassing the slow-moving mechanisms of legacy contractors.

With their ability to deliver advanced technological solutions quickly, AI startups are gradually securing a larger share of government contracts as the US seeks to maintain dominance in the autonomous weapons market.

According to a Defence Advanced Research Projects Agency (DARPA) report, 70% of the agency's research now revolves around AI and machine learning. Matt Turek, deputy director of DARPA's Information Innovation Office, said startups and the private sector were the primary drivers of AI innovation in defence applications.

Yet despite the tech firms' agility, federal acquisition regulations—shaped by legislative mandates and executive orders—still hinder the rapid adoption of AI. Designed to support national priorities (such as promoting US-made products), they nevertheless slow the process, which puts a strain on private-sector engagement and delays the military's ability to integrate cutting-edge AI technology.

As a result, total spending on startups remains relatively small: of the roughly $411bn in defence contracts awarded in the last fiscal year, startups accounted for just 1%. The US military is now exploring new contracting models to foster a more competitive and innovation-friendly AI ecosystem.

One such initiative is the Interoperable Government Data Warehouses and Applications Challenge, launched in July 2023. It seeks to streamline AI procurement for the US military while expanding AI deployment across different branches of the armed forces.

Logistical challenges 

With its ability to learn, adapt, and perform complex tasks almost instantly, AI could revolutionise military operations, but developing and deploying these systems is far from easy. AI models require extensive training on vast datasets to operate with precision. Fine-tuning and optimisation are then achieved through multiple test phases before the models can be integrated into real-world military environments.
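
As a rough illustration of that train-test-deploy loop, here is a minimal Python sketch using scikit-learn on synthetic data; a real military system would train on vast sensor datasets and pass far more rigorous test phases, and the accuracy gate below is an assumed stand-in for those checks.

```python
# A toy sketch of the train/test/deploy gate described above, on
# synthetic data; the 0.9 accuracy threshold is an illustrative assumption.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a large labelled sensor dataset
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2%}")

# Only integrate the model if it clears the test phase
ready_for_deployment = accuracy >= 0.9
print("cleared for integration:", ready_for_deployment)
```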

To maximise AI's effectiveness, military forces must establish large-scale computing infrastructure within operational zones. This allows real-time analysis of data collected from ground sensors, drones, and satellites. The closer the data processing centres are to the battlefield, the more responsive and effective military AI systems become. 
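
A back-of-the-envelope Python sketch (with illustrative figures, not ones from the article) shows why proximity matters: round-trip network latency grows with distance, so a processing centre near the battlefield leaves more of a fixed reaction budget for the AI model itself.

```python
# Illustrative figures only: light in optical fibre travels at roughly
# 200,000 km/s, and the 10 ms overhead and 100 ms budget are assumptions.
SPEED_IN_FIBRE_KM_S = 200_000

def round_trip_ms(distance_km: float, overhead_ms: float = 10.0) -> float:
    """Two-way propagation delay plus a fixed routing/processing overhead."""
    return 2 * distance_km / SPEED_IN_FIBRE_KM_S * 1000 + overhead_ms

REACTION_BUDGET_MS = 100.0  # assumed total time to act on sensor data
for site, km in [("edge node, 50 km away", 50),
                 ("distant cloud, 3,000 km away", 3000)]:
    rtt = round_trip_ms(km)
    print(f"{site}: network {rtt:.1f} ms, "
          f"leaving {REACTION_BUDGET_MS - rtt:.1f} ms for the model")
```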

Reduced reliance on cloud-based networks (which are vulnerable to electronic warfare and cyberattacks) is crucial for ensuring uninterrupted and secure operations, yet balancing the need for high-speed, large-scale data processing with network security and confidentiality remains a formidable challenge.

As AI integration in military operations expands, so do cybersecurity risks. One of the primary challenges, according to the National Security Technology Accelerator (NSTXL), is securing communication between peripheral AI-powered systems and central command networks to prevent cyber intrusions and system breaches.

This risk is compounded by the US military's reliance on adaptive AI, which necessitates regular updates to counter evolving threats. Cybersecurity experts warn that AI data theft and the decryption of classified military communications pose critical security vulnerabilities, requiring the urgent development of advanced encryption protocols to safeguard sensitive information.
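
As a minimal, standard-library-only Python sketch of one slice of this problem, the code below authenticates telemetry between an edge system and a command network so that tampered data is rejected; key provisioning, transport encryption, and replay protection are all assumed away here.

```python
# Authenticate edge-to-command telemetry with an HMAC so tampering is
# detectable. Key handling and transport encryption are out of scope.
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(32)  # in practice, provisioned and rotated securely

def sign(message: bytes) -> bytes:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(message), tag)

telemetry = b'{"sensor": "radar-7", "track": "hostile", "grid": "NV-42"}'
tag = sign(telemetry)
print(verify(telemetry, tag))  # True: intact
print(verify(telemetry.replace(b"hostile", b"friendly"), tag))  # False: tampered
```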

Critical vulnerabilities

A recent Pentagon report revealed that while defence chiefs expressed confidence in the security of their systems, rigorous testing exposed critical vulnerabilities. Lawmakers were dismayed at the extent to which modern weapons systems could be compromised by cyberattacks, a particular concern for a digitally reliant military.

To maximise AI tools, militaries need large-scale computing infrastructure within operational zones for real-time analysis of data from sensors, drones, and satellites

Yet as weapons systems grow in complexity and sophistication, so too do their exploitable weaknesses. Russia's cyberattacks against Ukraine illustrate the point: they have crippled the Ukrainian army's command-and-control networks and targeted critical infrastructure such as military communications and energy grids.

AI has the power to improve the accuracy of missile systems and support real-time decision-making, but its dependence on data integrity introduces a significant vulnerability. If AI-guided missiles or drones are fed manipulated or compromised data, for instance, they could be turned on their operators or on civilians.

Once AI technology is deployed on the battlefield, soldiers need to learn how to use it without becoming dependent on it. They also need to trust its security: if exploited or breached, AI-led systems could be used to disrupt operations or steal classified intelligence.

Ethically, AI in warfare raises difficult dilemmas. While it has the potential to reduce civilian casualties by improving target accuracy, it also introduces lethal autonomous weapons systems (LAWS), whereby machines, not humans, determine the use of deadly force. Many argue that these systems need better oversight.

As AI continues to redefine the battlefield, it will increasingly determine a combatant's fortunes in war. States will scramble to stay one step ahead technologically, companies will scramble for contracts, and politicians will scramble to create the laws and regulations needed to shape the future.
