Massimo Passamonti, a researcher at the intersection of artificial intelligence and ethics, has published a thought-provoking study that challenges the notion of machines as moral agents. His work, titled “Why Machines Can’t Be Moral: Turing’s Halting Problem and the Moral Limits of Artificial Intelligence,” delves into the computational and philosophical barriers that prevent artificial intelligence from replicating human-like moral reasoning.
Passamonti’s research draws on Alan Turing’s theory of computation, and in particular the halting problem, to argue that explicit ethical machines (those whose moral principles are inferred through a bottom-up approach) cannot be considered moral agents. The halting problem, a foundational result in computer science, shows that no general algorithm can decide, for every program and input, whether that program will eventually halt or run forever. This undecidability, Passamonti argues, extends to moral reasoning: a machine cannot in general guarantee that its own moral deliberation will terminate, which makes such reasoning computationally intractable.
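To see why this is a limit in principle rather than a practical nuisance, the classical argument behind the halting problem can be sketched in a few lines of Python. The `halts` function below is a hypothetical oracle, not a real library call; the point of the sketch is precisely that no such function can exist.

```python
# Sketch of the classical halting-problem argument (illustrative, not from the paper).

def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually terminates."""
    ...  # assumed for the sake of contradiction; no such general procedure exists


def paradox(program):
    # Loop forever exactly when the oracle predicts the program halts on itself.
    if halts(program, program):
        while True:
            pass
    return "halted"

# Asking whether paradox(paradox) halts leads to a contradiction either way,
# so the assumed oracle cannot exist. By the same reasoning, no procedure can
# certify in advance that an arbitrary moral deliberation, itself a computation,
# will terminate with a verdict.
```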
To illustrate this, Passamonti formalizes moral problems as what he terms “algorithmic moral questions”: ethical dilemmas posed to a machine, such as those faced by autonomous military drones. In a compelling thought experiment, he describes a drone, programmed with moral principles, that must choose between two actions, each with ethical implications. Because nothing guarantees that its deliberation terminates, the drone may never reach a decision, exposing a fundamental limit on moral reasoning in artificial agents.
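The shape of the problem can be made concrete with a schematic deliberation loop. The code below is an illustrative sketch, not Passamonti’s formalism: `deliberate` keeps refining a hypothetical moral evaluation function until it can rank two candidate actions, and nothing internal to the loop guarantees that this ever happens.

```python
from typing import Callable, Optional

def deliberate(evaluate: Callable[[str, int], Optional[float]],
               actions: tuple[str, str],
               max_depth: Optional[int] = None) -> Optional[str]:
    """Return the preferred action, or None if deliberation is cut off externally."""
    depth = 0
    while max_depth is None or depth < max_depth:
        # Score both candidate actions at the current depth of moral analysis.
        scores = [evaluate(action, depth) for action in actions]
        if all(s is not None for s in scores) and scores[0] != scores[1]:
            return actions[0] if scores[0] > scores[1] else actions[1]
        depth += 1  # no clear ranking yet: refine the evaluation and try again
    return None  # undecided; the termination guarantee had to come from outside
```

Without an externally imposed `max_depth`, whether `deliberate` ever returns depends entirely on the evaluator it is given, and the halting problem rules out any general test for that.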
Passamonti also explores the dual-process model of moral psychology, which posits that human moral reasoning involves both intuitive and deliberative processes. Machines, however, lack the intuitive, emotional, and contextual understanding that humans possess. This gap underscores why machines cannot replicate human-like moral reasoning, even if they can engage in recursive moral deliberation.
The implications of Passamonti’s research are significant for the defence and security sector, where autonomous systems are increasingly deployed. If machines cannot be moral agents, then the ethical responsibility for their actions ultimately rests with human operators and designers. This raises critical questions about accountability, transparency, and the ethical frameworks guiding the development and deployment of autonomous weapons and other defence technologies.
Passamonti’s work serves as a timely reminder that while AI can assist in decision-making, it cannot replace human judgment, especially in matters of life and death. As the defence sector continues to integrate AI and autonomous systems, his research underscores the need for robust ethical guidelines and human oversight to ensure that these technologies are used responsibly and ethically. Read the original research paper here.

