OpenAI has taken a significant step in aligning its technology with U.S. national security interests, entering into a strategic partnership with Anduril Industries. This collaboration marks a notable pivot for the AI giant, which has historically maintained a cautious stance on military applications. The partnership, announced by Anduril, will integrate OpenAI’s advanced AI models into Anduril’s defence systems, aiming to enhance situational awareness and reduce the burden on human operators.
Anduril, a defence startup backed by Oculus VR co-founder Palmer Luckey, specializes in advanced military technologies, including sentry towers, communications jammers, military drones, and autonomous submarines. The company already supplies anti-drone technology to the U.S. government and has recently secured a $100 million contract with the Pentagon’s Chief Digital and AI Office to develop and test unmanned fighter jets.
OpenAI clarified to the Washington Post that its partnership with Anduril will focus on systems designed to defend against unmanned aerial threats, deliberately distancing itself from applications that could result in human casualties. Both companies frame the collaboration as crucial to preserving the United States' technological edge over China, a goal echoed in recent U.S. government initiatives to accelerate AI development.
“OpenAI builds AI to benefit as many people as possible, and supports U.S.-led efforts to ensure the technology upholds democratic values,” wrote OpenAI CEO Sam Altman. “Our partnership with Anduril will help ensure OpenAI technology protects U.S. military personnel, and will help the national security community understand and responsibly use this technology to keep our citizens safe and free.”
This partnership comes on the heels of OpenAI’s quiet removal of policy language that previously banned high-risk applications, including military and warfare uses. An OpenAI spokesperson explained that while the company prohibits the use of its tools for harm, there are national security applications that align with its mission. For instance, OpenAI is already collaborating with DARPA to develop cybersecurity tools to protect critical infrastructure.
Over the past year, OpenAI has reportedly been actively pitching its services to various U.S. military and national security offices, supported by a former security officer from Palantir, a software company and government contractor. This shift is part of a broader trend within the tech industry, as companies like Anthropic and Palantir also explore military applications of their AI technologies. Anthropic, known for its AI model Claude, recently partnered with Amazon Web Services to offer its models to defence and intelligence agencies, promoting them as tools for decision-making in classified environments.
Recent speculation suggests that President-elect Donald Trump is considering Palantir's chief technology officer, Shyam Sankar, for a leading role in engineering and research at the Pentagon. Sankar has been vocal about the need for the Department of Defense to streamline its technology acquisition process, advocating for the adoption of commercially available technology over reliance on traditional defence contractors.
As OpenAI and other tech giants navigate the complexities of military applications, the ethical and strategic implications of these partnerships will continue to shape the future of AI in national security. The collaboration underscores the deepening intersection of technology and defence, and highlights the growing role of AI in shaping global security dynamics.