AI Safety Risks: Weakened Risk Thresholds in Military AI Adoption Threaten National Security

In a groundbreaking study, researchers Heidy Khlaaf and Sarah Myers West examine the co-option of safety practices and its consequences for national security in the field of artificial intelligence (AI). Their paper, “Safety Co-Option and Compromised National Security: The Self-Fulfilling Prophecy of Weakened AI Risk Thresholds,” argues that the absence of agreed-upon risk thresholds for AI systems is driving a dangerous race to the bottom in military AI adoption.

The study begins by examining the historical context of risk thresholds, which were first established during the Cold War for nuclear systems. These thresholds have since shaped the safety standards for various technological systems. However, the researchers argue that the appropriate risk tolerances for AI systems remain undetermined, despite the urgent need for democratic deliberation on acceptable levels of harm to human life.

Khlaaf and Myers West highlight how AI technologists, primarily industry labs and “AI safety”-focused organizations, have taken the lead in defining risk tolerances for these systems. This shift has significant implications, as it subverts democratic processes and places life-or-death decisions in the hands of a select few. The researchers refer to this phenomenon as “safety revisionism,” in which traditional safety methods and terminology are replaced with ill-defined alternatives.

One of the study’s most alarming findings is that military AI adoption is accelerating at the cost of lowered safety and security thresholds, a trajectory the researchers argue is poised to compromise the national security interests of the United States. They emphasize that safety-critical and defense systems, including those built on foundation models, must comply with assurance frameworks aligned with established risk thresholds.

The paper underscores the importance of developing evaluation frameworks for AI-based military systems that prioritize the safety and security of critical and defense infrastructure. It also calls for alignment with international humanitarian law to ensure that AI systems are used responsibly and ethically.

Khlaaf and Myers West’s research serves as a stark reminder of the potential consequences of unchecked AI development and of the urgent need for robust, democratic governance in this field. Their findings carry significant implications for policymakers, technologists, and the public, urging a collective effort to establish clear risk thresholds and to ensure the safe and secure use of AI in military applications.
