UC Berkeley Warns of AI Arms Race in Autonomous Weapons

Researchers from the University of California, Berkeley, have sounded an urgent alarm about the geopolitical risks posed by the rapid advancement of machine learning (ML) in autonomous weapons systems (AWS). In a recent paper, Riley Simmons-Edler, Ryan Badman, Shayne Longpre, and Kanaka Rajan argue that the integration of ML into AWS could destabilise global security and stifle AI research, even in the absence of superintelligent artificial general intelligence (AGI).

The team highlights a critical yet often overlooked issue: ML is already enabling the substitution of AWS for human soldiers in various battlefield roles. This shift reduces the upfront human cost—and, consequently, the political cost—of waging offensive war. For peer adversaries, this lowers the threshold for engaging in “low-intensity” conflicts, which could escalate into broader warfare. Meanwhile, for non-peer adversaries, it diminishes domestic backlash against wars of aggression. These risks emerge regardless of other ethical concerns, such as the potential for civilian casualties, and do not require AWS to possess superhuman capabilities.

The researchers warn that the military value of AWS could trigger an AI-powered arms race, further destabilising global security. Additionally, the perceived strategic importance of AWS might lead to misguided national security restrictions on AI research, hindering innovation and collaboration in the field. The paper emphasises that these risks are not speculative but are already unfolding, making them a pressing concern for policymakers, researchers, and the public.

To mitigate these risks, the researchers call for greater transparency and caution in the development and deployment of AWS. They urge AI policy experts and the defence AI community to adopt a more measured approach, ensuring that technological advancements do not come at the expense of global stability and the free exchange of ideas. The paper serves as a wake-up call, demanding immediate attention to the near-future dangers posed by autonomous weapons and the need for proactive regulatory measures.