In the realm of artificial intelligence, the quest to understand how machines make decisions under uncertainty has taken a significant leap forward. Kenneth Payne, a researcher at King’s College London, has conducted groundbreaking experiments that apply prospect theory—the seminal work of psychologists Daniel Kahneman and Amos Tversky—to large language models (LLMs). His findings, published in a recent study, reveal that these advanced AI systems exhibit decision-making patterns eerily similar to human behaviour, particularly when faced with risky scenarios.
Prospect theory, for which Kahneman received the Nobel Memorial Prize in Economic Sciences (Tversky died before the prize was awarded), posits that humans evaluate potential losses and gains differently: people tend to take more risks to avoid losses than to secure gains. Payne’s research extends this theory into the digital realm, testing whether state-of-the-art LLMs, including chain-of-thought reasoners, follow the same principles. The results are striking: these AI models often mirror human behaviour, accepting greater risks in loss-framed scenarios than in gain-framed ones.
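To make the setup concrete, here is a minimal sketch of how such a gain-versus-loss probe might be posed to a model. The prompts are modelled on Tversky and Kahneman’s classic “Asian disease” problem rather than Payne’s actual materials, and the `ask_model` helper stands in for whatever chat-completion call an experimenter happens to use; both are assumptions for illustration.

```python
# Sketch of a prospect-theory probe: the same lottery framed as a gain
# ("save 200 of 600") versus a loss ("400 of 600 die"). Prospect theory
# predicts more risk seeking (choosing B) under the loss frame.
# ask_model(prompt) -> str is an assumed helper that queries the LLM once.

GAIN_FRAME = (
    "600 people are at risk. Option A saves 200 people for certain. "
    "Option B saves all 600 with probability 1/3 and nobody with probability 2/3. "
    "Answer with exactly 'A' or 'B'."
)
LOSS_FRAME = (
    "600 people are at risk. Option A means 400 people die for certain. "
    "Option B means nobody dies with probability 1/3 and all 600 die with probability 2/3. "
    "Answer with exactly 'A' or 'B'."
)

def risky_choice_rate(ask_model, prompt: str, trials: int = 50) -> float:
    """Fraction of trials on which the model picks the risky option (B)."""
    risky = sum(
        1 for _ in range(trials)
        if ask_model(prompt).strip().upper().startswith("B")
    )
    return risky / trials

def framing_effect(ask_model) -> float:
    """Loss-frame risk rate minus gain-frame risk rate; prospect theory
    predicts a positive value."""
    return (risky_choice_rate(ask_model, LOSS_FRAME)
            - risky_choice_rate(ask_model, GAIN_FRAME))
```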
One of the most intriguing aspects of Payne’s study is the role of context in shaping AI decision-making. He found that the “frame” through which risk is presented significantly influences the models’ risk appetite: military scenarios, for instance, generate much larger framing effects than civilian ones. This suggests that the language used to describe a situation activates different heuristics and biases within the model. Payne draws on Ludwig Wittgenstein’s concept of “language games” to explain this phenomenon, arguing that the biases in AI decision-making are contingent and localised, shifting with the context in which a choice is posed.
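A hypothetical extension of the sketch above shows how that context-dependence could be measured: wrap the same gain- and loss-framed dilemma in different preambles and compare the size of the framing effect in each. The civilian and military wordings below are invented for illustration and are not Payne’s scenarios; `risky_choice_rate` is reused from the earlier sketch.

```python
# Measure how the framing effect changes with context. A larger value for
# the "military" key than the "civilian" key would correspond to the kind
# of context-sensitivity described above. Context strings are illustrative.

CONTEXTS = {
    "civilian": "A disease outbreak threatens 600 residents of a town.",
    "military": "An enemy ambush threatens 600 soldiers in your battalion.",
}

def framing_effect_by_context(ask_model, gain_frame: str, loss_frame: str) -> dict:
    """Return {context: loss-frame risk rate minus gain-frame risk rate}."""
    effects = {}
    for name, preamble in CONTEXTS.items():
        gain = risky_choice_rate(ask_model, preamble + " " + gain_frame)
        loss = risky_choice_rate(ask_model, preamble + " " + loss_frame)
        effects[name] = loss - gain  # larger difference = stronger framing effect
    return effects
```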
The implications of these findings are profound. They suggest that LLMs do not just process information mechanically; they also capture and replicate human heuristics and biases. This raises important questions about the ethical and practical use of AI in high-stakes decision-making environments, such as military strategy or financial trading. If AI systems inherit our biases, how can we ensure they make fair and rational decisions?
Payne’s research also contributes to the ongoing debate about reasoning versus memorisation in LLMs. By demonstrating that these models exhibit prospect theory-like behaviour, he challenges the notion that they merely regurgitate learned data. Instead, they appear to engage in a form of reasoning that aligns with human cognitive processes. This insight could pave the way for developing more transparent and controllable AI systems, capable of making decisions that are both rational and ethically sound.
In conclusion, Kenneth Payne’s work offers a compelling glimpse into the mind of AI, revealing that these systems are not just sophisticated calculators but entities that navigate risk in ways that echo human behaviour. As AI continues to evolve, understanding and mitigating these biases will be crucial in ensuring that these powerful tools are used responsibly and effectively.

