Palmer Luckey, founder of the defence technology company Anduril, has sparked a fierce debate by defending the use of artificial intelligence in life-or-death military decisions. In an interview with Fox News Sunday, Luckey argued that the high-stakes nature of warfare demands the best available technology, including AI, to minimise collateral damage and maximise certainty.
“When it comes to life and death decision-making, I think that it is too morally fraught an area, it is too critical of an area, to not apply the best technology available to you, regardless of what it is,” Luckey said. “Whether it’s AI or quantum, or anything else. If you’re talking about killing people, you need to minimise the amount of collateral damage. You need to be as certain as you can in anything that you do.”
Luckey’s stance reflects a growing trend among defence tech startups, Anduril among them, that are racing to develop autonomous AI weapons and tools for conflicts around the world. Critics, however, argue that the technology remains too immature and untrustworthy for environments where human lives are at stake.
Luckey dismissed concerns about the moral implications of using AI in warfare, stating, “To me, there’s no moral high ground in using inferior technology, even if it allows you to say things like, ‘We never let a robot decide who lives and who dies.’”
During the interview, Luckey also explained his motivation for founding Anduril. He expressed a desire to shift talent from less critical tech sectors—such as advertising, social media, and entertainment—to defence and national security problems. “I wanted to get people out of the tech industry, working on problems that I thought were not so important, and put them to work on defence problems, national security problems. Problems that really matter,” he said.
Rapid technological advances are transforming the military, from administrative tasks to battlefield operations. Drones, in particular, have become pivotal, enabling newer defence companies to secure significant government funding and contracts. Defence technology businesses have thrived under the administration of Donald Trump, which has prioritised AI and shown interest in nuclear weapons testing.
In April, Luckey compared the development of AI in warfare to the opening of Pandora’s box, arguing that the technology is already in use and cannot be undone. “I’ll get confronted by journalists who say, ‘Oh, well, you know, we shouldn’t open Pandora’s box.’ And my point to them is that Pandora’s box was opened a long time ago with anti-radiation missiles that seek out surface-to-air missile launchers,” he noted.
Founded in 2017, Anduril Industries is at the forefront of developing autonomous systems to modernise the US military. The company’s AI software platform, Lattice, powers its surveillance systems, aerial vehicles, and autonomous weapons. Before Anduril, Luckey founded Oculus VR in 2012 and sold it to Facebook (now Meta) for $2 billion two years later.
In February, Anduril announced it would take part in a $22 billion deal between Microsoft and the US Army, which the Defence Department approved in April. The partnership involves building specialised mixed-reality headsets for soldiers. In October, the company introduced EagleEye, a system that integrates mission command and AI directly into a warfighter’s helmet.
As the defence sector continues to evolve, Luckey’s comments highlight the complex ethical and strategic considerations surrounding the use of AI in warfare. While his arguments may resonate with those prioritising technological superiority, they also underscore the need for ongoing debate and regulation to ensure the responsible development and deployment of AI in military applications.