In the rapidly evolving landscape of defence and security, autonomous agents are increasingly being integrated into human teams. This shift is driven by the potential for robots and autonomous systems to enhance capabilities and reduce risks for human operatives. A significant obstacle to this transition, however, is establishing sufficient trust between human team members and their autonomous counterparts. A recent study by researchers Chris Baber, Patrick Waterson, Sanja Milivojevic, Sally Maynard, Edmund R. Hunt, and Sagir Yusuf explores how a dynamic allocation of function (AoF) can build this trust, introducing the concept of a ‘ladder of trust’ to guide the process.
The study posits that trust within human-autonomous agent collectives is not static but evolves over time based on the performance and reliability of each team member. The researchers propose a ‘ladder of trust’ framework, where individual team members adjust their trust levels in their teammates according to a ‘score’ derived from the ability to perceive, understand, and act effectively in achieving team or self-goals. This score is a dynamic metric that reflects the team member’s competence and reliability, providing a system-level perspective on how functions should be allocated during a mission.
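This summary does not spell out how such a score might be computed, so the Python sketch below is purely illustrative: the equal weighting of the three abilities, the 0-to-1 rating scale, and the named rungs of the ladder are assumptions for the sketch, not details taken from the paper.

```python
from dataclasses import dataclass, field

# Rungs of the 'ladder of trust'; the rung names are illustrative assumptions.
LADDER = ["untrusted", "monitored", "conditional", "trusted"]

@dataclass
class Teammate:
    name: str
    rung: int = 1                      # start on a middle rung: 'monitored'
    history: list = field(default_factory=list)

    def observe(self, perceive: float, understand: float, act: float) -> float:
        """Record one episode; each input is a 0-1 rating of that ability."""
        score = (perceive + understand + act) / 3.0   # equal weights assumed
        self.history.append(score)
        return score

    def update_rung(self, promote_at: float = 0.75, demote_at: float = 0.40,
                    window: int = 5) -> str:
        """Climb or descend one rung based on the recent average score."""
        recent = self.history[-window:] or [0.0]
        avg = sum(recent) / len(recent)
        if avg >= promote_at and self.rung < len(LADDER) - 1:
            self.rung += 1
        elif avg <= demote_at and self.rung > 0:
            self.rung -= 1
        return LADDER[self.rung]

robot = Teammate("UGV-1")
robot.observe(perceive=0.9, understand=0.8, act=0.85)
print(robot.update_rung())             # 'conditional' after one strong episode
```

In this sketch trust moves one rung at a time, reflecting the paper’s framing of trust as something that is earned and lost incrementally over a mission rather than reset wholesale.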
The dynamic allocation of function is a critical factor in building and maintaining trust. The best choice for a particular function is not always the teammate with the highest trust rating; the next most suitable teammate, one likely to perform within the set moral, ethical, and legal constraints, may be preferable. This ensures that functions are assigned not only on the basis of capability but also within a broader context of integrity and accountability.
The researchers emphasize that the allocation space is defined by more than just the ability of each agent to perform a function. It also includes considerations of trust, predictability, and the broader ethical and legal frameworks within which the agents operate. This holistic approach ensures that the allocation of functions is both effective and responsible, aligning with the values and principles of the team.
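To make the shape of that allocation space concrete, here is a minimal sketch, assuming that constraint compliance acts as a hard filter while capability, trust, and predictability are blended into a single ranking. The candidate fields, the equal weights, and the names used here are assumptions of the sketch, not elements defined by the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    name: str
    capability: float          # 0-1 fitness for the function in question
    trust: float               # 0-1 current trust level
    predictability: float      # 0-1 consistency of past behaviour
    within_constraints: bool   # cleared against moral, ethical, legal rules

def allocate(candidates: list) -> Optional[Candidate]:
    """Pick the best-scoring candidate among those cleared to act."""
    eligible = [c for c in candidates if c.within_constraints]
    if not eligible:
        return None            # no responsible allocation; escalate to a human
    # Equal weighting of the three factors is an assumption of this sketch.
    return max(eligible,
               key=lambda c: (c.capability + c.trust + c.predictability) / 3)

team = [
    Candidate("UAV-2", capability=0.95, trust=0.90, predictability=0.90,
              within_constraints=False),   # most capable, but not cleared
    Candidate("Operator-A", capability=0.80, trust=0.85, predictability=0.90,
              within_constraints=True),
]
print(allocate(team).name)     # 'Operator-A': capability alone does not decide
```

Note that the most capable candidate is passed over because it is not cleared to act within the constraints, which is exactly the trade-off described above.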
The study highlights the importance of a nuanced understanding of trust in the context of human-autonomous agent collectives. By incorporating a ‘ladder of trust’ into the dynamic allocation of function, teams can better navigate the complexities of integrating autonomous agents into their operations. This approach not only enhances the overall effectiveness of the team but also fosters a culture of trust and collaboration, which is essential for successful mission outcomes.
As the defence and security sectors continue to evolve, the insights from this research provide a valuable framework for developing and deploying autonomous systems. By focusing on the dynamic nature of trust and the allocation of functions, teams can better leverage the capabilities of autonomous agents while ensuring that their operations remain aligned with ethical and legal standards. This balanced approach is crucial for the successful integration of autonomous agents into human teams, paving the way for more effective and trustworthy defence and security operations.

