TY - GEN
T1 - Emergent Dominance Hierarchies in Reinforcement Learning Agents
AU - Rachum, Ram
AU - Nakar, Yonatan
AU - Tomlinson, Bill
AU - Alon, Nitay
AU - Mirsky, Reuth
N1 - Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025/1/1
Y1 - 2025/1/1
AB - Modern Reinforcement Learning (RL) algorithms can outperform humans in a wide variety of tasks. Multi-agent reinforcement learning (MARL) settings present additional challenges, and successful cooperation in mixed-motive groups of agents depends on a delicate balancing act between individual and group objectives. Social conventions and norms, often inspired by human institutions, are used as tools for striking this balance. We examine a fundamental, well-studied social convention that underlies cooperation in animal and human societies: dominance hierarchies. We adapt the ethological theory of dominance hierarchies to artificial agents, borrowing the established terminology and definitions with as few amendments as possible. We demonstrate that populations of RL agents, operating without explicit programming or intrinsic rewards, can invent, learn, enforce, and transmit a dominance hierarchy to new populations. The dominance hierarchies that emerge have a structure similar to those studied in chickens, mice, fish, and other species.
KW - Cooperative AI
KW - Cultural Evolution
KW - Multi-Agent Reinforcement Learning
KW - Multi-Agent Systems
KW - Reinforcement Learning
UR - http://www.scopus.com/inward/record.url?scp=105000661353&partnerID=8YFLogxK
DO - 10.1007/978-3-031-82039-7_4
M3 - Conference contribution
SN - 9783031820380
T3 - Lecture Notes in Computer Science
SP - 41
EP - 56
BT - Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems XVII - International Workshop, COINE 2024, Revised Selected Papers
A2 - Cranefield, Stephen
A2 - Nardin, Luis Gustavo
A2 - Lloyd, Nathan
PB - Springer Science and Business Media Deutschland GmbH
T2 - 28th International Workshop on Coordination, Organizations, Institutions, Norms, and Ethics for Governance of Multi-Agent Systems, COINE 2024
Y2 - 7 May 2024
ER -