In the rapidly evolving landscape of artificial intelligence, the dual-use potential of advanced technologies has become a focal point of concern, particularly within the defence and military sectors. While much of the discourse has centred on the hypothetical risks of AI enabling the development of chemical, biological, radiological, and nuclear (CBRN) weapons, a critical gap in the conversation has emerged. Researchers Heidy Khlaaf, Sarah Myers West, and Meredith Whittaker have highlighted a pressing issue: the current and immediate applications of AI in intelligence, surveillance, target acquisition, and reconnaissance (ISTAR) systems. These systems, which are already deployed in real-world scenarios, carry life-or-death consequences for civilians and pose substantial geopolitical risks.
The proliferation of commercial foundation models, AI systems designed for broad applicability, has introduced novel challenges and risks. These models, while powerful and versatile, can inadvertently contribute to military capabilities, including ISTAR. The researchers argue that the current policy debate, heavily focused on CBRN threats, has narrowed the scope of discussion. This narrow focus has led to an overemphasis on measures such as compute thresholds and restrictions on model weight release, which do little to address the immediate risks associated with ISTAR applications.
One of the primary concerns raised by the researchers is the inability to prevent personally identifiable information (PII) from being absorbed into commercial foundation models. Adversaries could exploit this information for intelligence, surveillance, and targeting purposes, contributing to the proliferation of military AI capabilities. The researchers underscore that the widespread availability of commercial models exacerbates these risks, as adversaries can more easily access and repurpose these technologies for malicious ends.
Furthermore, integrating foundation models into military settings inherently expands the attack surface of military systems and the defence infrastructures they interface with, leaving them more vulnerable to cyber threats and other forms of malicious exploitation. The researchers argue that mitigating these risks may require insulating military AI systems and personal data from commercial foundation models. Such insulation could help secure military systems and limit the proliferation of AI armaments.
The researchers’ work highlights the urgent need for a broader policy debate that encompasses the full spectrum of AI applications in the military sector. By focusing on the immediate risks posed by ISTAR systems, policymakers and researchers can develop more effective strategies to mitigate the proliferation of AI technologies and protect civilian populations from the potentially devastating consequences of AI misuse.
As the defence and security sectors continue to evolve, the insights provided by Khlaaf, Myers West, and Whittaker serve as a critical reminder of the need for vigilance and proactive policy-making. The integration of AI into military systems presents both opportunities and challenges, and it is imperative that these challenges are addressed comprehensively to ensure the responsible and ethical use of these powerful technologies. Read the original research paper here.

