In a groundbreaking study, researchers have demonstrated that AI-powered influence operations can now be run end-to-end on standard, widely available hardware. This development signals a significant shift in the landscape of digital propaganda, putting such campaigns within reach not only of well-funded entities but also of smaller actors. The research, led by Lukasz Olejnik, shows that small language models can generate coherent, persona-driven political messaging, and that the output can be evaluated automatically, without human intervention.
The study reveals two key behavioural findings that could reshape our understanding of AI-driven influence campaigns. First, the research highlights that persona design plays a more critical role in shaping behaviour than the identity of the language model itself. This suggests that the way an AI persona is crafted—its tone, ideological stance, and engagement style—can significantly impact its effectiveness in influencing public opinion. Second, the study finds that when AI-generated content must counter opposing arguments, ideological adherence strengthens, and the prevalence of extreme content increases. This indicates that AI models may amplify polarisation and radicalisation in online discourse, particularly in adversarial environments.
One of the most alarming implications of this research is that fully automated influence campaigns are now within reach of both large and small actors. This democratisation of influence operations could lead to a proliferation of AI-driven propaganda, making it increasingly difficult to discern credible information from manipulated content. The researchers argue that defence strategies must evolve to address this new reality. Rather than focusing solely on restricting access to advanced AI models, efforts should shift towards detecting and disrupting AI-driven influence campaigns at the conversational level. This includes identifying and dismantling the coordination infrastructure that supports these operations.
Paradoxically, the very consistency that enables AI-driven influence operations also provides a potential detection signature. The uniformity in messaging and behavioural patterns of AI-generated content could be exploited to develop sophisticated detection algorithms. By analysing the consistency and repetition in AI-generated propaganda, researchers and cybersecurity experts may be able to identify and mitigate these campaigns more effectively.
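To make the detection idea concrete, here is a minimal sketch of how uniformity across messages could be surfaced. This is not the researchers' method; it simply illustrates the principle that near-duplicate phrasing across many posts is a measurable signal. The bigram-shingle features, the Jaccard similarity measure, and the 0.5 threshold are all illustrative assumptions, not values from the paper.

```python
from itertools import combinations

def shingles(text):
    """Lowercase word bigrams; a crude stand-in for real text features."""
    words = text.lower().split()
    return set(zip(words, words[1:]))

def jaccard(a, b):
    """Overlap between two shingle sets (0.0 = disjoint, 1.0 = identical)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_coordinated(messages, threshold=0.5):
    """Return index pairs of messages whose overlap exceeds the threshold.

    High mutual similarity across many accounts is the kind of uniformity
    signature the study suggests defenders could exploit. The threshold
    is an illustrative assumption.
    """
    feats = [shingles(m) for m in messages]
    return [
        (i, j)
        for i, j in combinations(range(len(messages)), 2)
        if jaccard(feats[i], feats[j]) >= threshold
    ]

posts = [
    "Our movement stands for real change and real people.",
    "Our movement stands for real change and for real people everywhere.",
    "I had pasta for dinner and it was great.",
]
print(flag_coordinated(posts))  # the two templated posts are flagged: [(0, 1)]
```

A production system would replace the toy shingles with semantic embeddings and cluster across accounts and time, but the core signal, repetition that organic discourse rarely produces, is the same.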
The study’s findings underscore the urgent need for robust defence mechanisms against AI-powered influence operations. As AI technology continues to advance, so does the potential for misuse in shaping public opinion and undermining democratic processes. The research calls for a proactive approach to countering AI-driven propaganda: developing advanced detection techniques and disrupting the infrastructure that supports these operations. By doing so, we can safeguard the integrity of public discourse and protect democratic institutions from the destabilising effects of AI-generated influence campaigns.

