Luckey’s TED2025 Vision: AI-Driven Deterrence Against Chinese Invasion

Palmer Luckey, the founder of Anduril Industries, took the stage at TED2025 on April 8, 2025, to deliver a provocative talk on the military use of artificial intelligence (AI). His presentation painted a stark picture of a full-scale Chinese surprise attack on Taiwan, in which ballistic missiles, amphibious assault ships, and cyber-attacks overwhelm the island's defences. In this fictional scenario, the US struggles to respond effectively because it cannot field weapons and platforms in sufficient numbers. Luckey then presented an alternative vision, one in which Anduril's AI-driven systems, coordinated through its Lattice platform, fend off the invasion. He argued that deploying autonomous systems at scale could restore deterrence by demonstrating the capacity to win.

This vision is not merely rhetorical; it reflects a broader trend in the defence industry. As researchers Dr. Robin Vanderborght of the University of Antwerp and Dr. Anna Nadibaidze of the University of Southern Denmark highlight, defence tech companies like Anduril and Palantir use virtual demonstrations to portray themselves as authorities on the future of war. These demonstrations, often featuring sleek visuals and expert commentary, construct a utopian vision of war where AI provides omniscient comprehension and efficient decision-making.

Defence tech companies regularly produce and circulate these virtual military demonstrations, portraying algorithmic warfare as a strategic imperative. The visual narratives suggest that future wars will be won through quick, precise military decision-making enabled by AI-driven systems. In a recent op-ed, Palmer Luckey claimed that his visit to Ukraine after Russia's full-scale invasion showed him how militaries will win future wars: by deploying new technologies in large numbers. This vision promotes a fantasy of omniscience and omnipotence, in which extensive data analysis eliminates the fog of war.

The integration of AI into military operations is framed not just as a strategic necessity but also as a moral imperative. Luckey's TED talk underscored this point, suggesting that if the US does not lead in military AI, authoritarian regimes will, unconstrained by ethical norms. Alexander Karp, CEO of Palantir, has similarly cast his company's employees as peace activists, asserting that their work on military AI contributes to deterrence and stability.

However, these virtual demonstrations misrepresent the complexities of warfare. They promote a vision of sanitised, precise, and bloodless violence, excluding the realities of civilian casualties and destruction. The portrayal of AI-enabled warfare as clean and efficient overlooks the messy, unpredictable nature of conflict. As Elke Schwarz argues, those who claim secret knowledge about humanity’s inevitable future wield substantial political power and influence.

The growing efforts of defence tech companies to position themselves as authorities on the future of war carry significant political, legal, ethical, and security implications. Their demonstrations seek to convince decision-makers to invest in military blitzscaling strategies that prioritise speed and experimentation over safety and usefulness. Evidence from the field, however, shows that such technologies often fall short of expectations, leading to critical security flaws and frustration among the personnel who must use them.

As the influence of tech companies in defence grows, both in the US and in Europe, it is crucial to explore and debate the political, legal, ethical, and societal implications of their visions. The utopian narratives of AI in war, however compelling, must be critically examined so that the realities of conflict are not oversimplified or misrepresented.
