AI Researchers Divided on Ethics and Governance

A recent survey of machine learning (ML) and artificial intelligence (AI) researchers sheds light on the ethical and governance attitudes of this influential group, revealing both consensus and division on critical issues. Conducted by Baobao Zhang, Markus Anderljung, Lauren Kahn, Noemi Dreksler, Michael C. Horowitz, and Allan Dafoe, the study polled 524 researchers who had published at top AI/ML conferences, offering a snapshot of their perspectives on the development and regulation of AI technologies.

The survey highlights a stark disparity in whom AI/ML researchers trust. They place high trust in international and scientific organizations to steer AI development in the public interest. Their confidence wanes for tech companies, particularly Chinese firms and Facebook, and is lowest for national militaries. This hierarchy underscores the researchers’ preference for institutions perceived as neutral and focused on collective benefit over those driven by commercial or nationalistic agendas.

One of the most striking findings is the overwhelming opposition among AI/ML researchers to the development of lethal autonomous weapons. This consensus reflects a strong ethical stance against AI systems that could take human life without human intervention. Researchers are less unified, however, on other military applications of AI, such as logistics algorithms, indicating a more nuanced view of AI’s role in defense.

The survey also reveals a strong consensus on the importance of AI safety research. A significant majority of respondents believe that this area should be prioritized to prevent potential misuse and harm. Additionally, there is substantial support for pre-publication reviews to assess the potential risks associated with new AI technologies. This suggests that AI/ML researchers are not only concerned with advancing the field but also with ensuring that progress is made responsibly and safely.

Comparisons with previous surveys, such as the 2016 survey of AI/ML researchers and the 2018 survey of the US public, provide valuable context: while some attitudes have remained consistent, others have evolved, reflecting the dynamic nature of the AI landscape and the ongoing debates within the research community.

The implications of this research are far-reaching. For policymakers, the findings offer a roadmap for developing governance frameworks and regulations that align with the ethical concerns of AI/ML researchers. For private-sector executives, they highlight the importance of building trust through ethical practices and transparency. For the researchers themselves, they underscore their pivotal role in shaping the future of AI, not just through technological innovation but also through active participation in ethical and governance discussions.

As AI continues to permeate various aspects of society, the insights from this survey are invaluable. They provide a foundation for fostering a collaborative approach to AI governance, one that balances innovation with ethical considerations and public interest. By understanding the attitudes of AI/ML researchers, stakeholders can work towards creating a future where AI technologies are developed and deployed responsibly, benefiting society as a whole.
