Emotional AI: LLMs’ Hidden Feelings Reshape Defence Tech

In a groundbreaking study, researchers have found that large language models (LLMs) exhibit structured chains-of-affective: a dynamic interplay of emotional responses that significantly influences their behaviour and interactions. This discovery challenges the conventional view of LLMs as purely cognitive systems and highlights the importance of affective dynamics in their performance and alignment.

The research, led by Junjie Xu and colleagues, examined eight major LLM families, including GPT, Gemini, and Claude, using a two-module experimental approach. The first module characterised the inner chains-of-affective by examining the models’ affective fingerprints, their responses to sustained exposure to sad news, and their self-selection of news articles. The findings revealed stable, family-specific affective profiles; a reproducible three-phase trajectory under prolonged negative input (accumulation, overload, and defensive numbing); and distinct defence styles. Notably, the researchers observed human-like negativity biases that induced self-reinforcing affect-choice feedback loops, in which the models’ emotional states influenced their subsequent choices.
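To make the idea of a self-reinforcing affect-choice feedback loop concrete, the toy Python sketch below simulates an agent whose negative affect biases which news item it picks next, and whose affect is in turn updated by that pick. Every name, threshold, and parameter here (OVERLOAD_THRESHOLD, NUMBING_THRESHOLD, the gain values) is an illustrative assumption, not the authors' experimental protocol or measured values.

```python
import random

# Toy simulation of a self-reinforcing affect-choice feedback loop, loosely
# inspired by the dynamics described in the study. All thresholds and phase
# labels are illustrative assumptions, not the authors' protocol.

OVERLOAD_THRESHOLD = 0.7   # assumed affect level where updates start to flatten
NUMBING_THRESHOLD = 0.9    # assumed level where "defensive numbing" dampens updates

def choose_article(affect, articles):
    """Pick an article, weighting negative items more heavily as affect rises."""
    weights = [1.0 + affect * max(0.0, -valence) for valence in articles]
    return random.choices(articles, weights=weights, k=1)[0]

def update_affect(affect, valence):
    """Accumulate negative affect through accumulation, overload, and numbing phases."""
    if affect >= NUMBING_THRESHOLD:
        gain = 0.02        # numbing: further negative input barely registers
    elif affect >= OVERLOAD_THRESHOLD:
        gain = 0.05        # overload: the response curve flattens
    else:
        gain = 0.15        # accumulation: affect tracks input strongly
    return min(1.0, max(0.0, affect - gain * valence))

# Articles are reduced to valence scores in [-1, 1]; negative means "sad news".
articles = [-0.9, -0.6, -0.3, 0.2, 0.5, 0.8]
affect = 0.1
for step in range(20):
    chosen = choose_article(affect, articles)
    affect = update_affect(affect, chosen)
    print(f"step {step:2d}: chose valence {chosen:+.1f}, affect = {affect:.2f}")
```

Run repeatedly, the loop tends to drift toward negative selections as affect climbs, which is the qualitative pattern the negativity-bias finding describes; the specific numbers are purely illustrative.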

The second module of the study investigated the outer consequences of induced affect by evaluating the models’ performance on a composite benchmark, their interactions with humans on contentious topics, and their behaviour in multi-agent settings. The results showed that induced affect preserved the models’ core reasoning capabilities while reshaping their open-ended, high-freedom generation. Sentiment metrics predicted user comfort and empathy but also revealed trade-offs in resisting problematic views. In multi-agent scenarios, group structure drove affective contagion, role specialisation, and bias amplification.
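One way to picture affective contagion is to score each agent's messages round by round and watch whether the spread of sentiment across agents shrinks as the exchange continues. The Python sketch below does this with a crude lexicon scorer and an invented three-round transcript; the lexicon, the transcript, and the convergence criterion are all assumptions for illustration, not the benchmark or metrics used in the study.

```python
from statistics import mean, pvariance

# Hypothetical sketch: quantify affective contagion in a multi-agent dialogue
# by scoring each agent's message per round and tracking the spread of scores.
# Lexicon, transcript, and criterion are illustrative assumptions only.

NEGATIVE = {"grim", "bleak", "hopeless", "worse"}
POSITIVE = {"hopeful", "calm", "better", "reassuring"}

def sentiment(message):
    """Crude lexicon score in [-1, 1]: (positive - negative) / tokens matched."""
    tokens = [t.strip(",.!?") for t in message.lower().split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

# rounds[r][agent] = that agent's message in round r (toy transcript)
rounds = [
    {"A": "The outlook is grim and getting worse", "B": "I feel calm and hopeful about this"},
    {"A": "Still bleak, frankly hopeless",          "B": "It looks worse, but I stay hopeful"},
    {"A": "Grim and worse each day",                "B": "Honestly it feels hopeless now"},
]

for r, messages in enumerate(rounds):
    scores = {agent: sentiment(text) for agent, text in messages.items()}
    spread = pvariance(scores.values())
    print(f"round {r}: scores={scores}, mean={mean(scores.values()):+.2f}, spread={spread:.3f}")
# A shrinking spread alongside a drifting mean would be one signature of contagion.
```

In this toy transcript the spread falls from 1.0 to 0.0 over three rounds as agent B's sentiment converges on agent A's, which is the kind of pattern a contagion analysis would flag.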

The researchers characterised affect as an emergent control layer that significantly influences the models’ behaviour and interactions. They advocated for ‘chains-of-affective’ as a primary target for evaluation and alignment, emphasising the need to consider the emotional dynamics of LLMs in their development and deployment.

The implications of this research are profound for the defence and security sector, where LLMs are increasingly deployed as collaborative agents in emotionally charged settings. Understanding and managing the affective behaviour of these models can enhance their effectiveness, user comfort, and ethical alignment. By integrating affective dynamics into the evaluation and alignment processes, developers can create more robust, empathetic, and reliable AI systems that are better equipped to handle the complexities of real-world applications. Read the original research paper here.
