Emergence of cooperation in the one-shot Prisoner’s dilemma through Discriminatory and Samaritan AIs
Zimmaro, Filippo;
2024-01-01
Abstract
As artificial intelligence (AI) systems become increasingly embedded in our lives, their presence leads to interactions that shape our behaviour, decision-making and social interactions. Existing theoretical research on the emergence and stability of cooperation, particularly in the context of social dilemmas, has primarily focused on human-to-human interactions, overlooking the distinct dynamics introduced by the presence of AI. Using methods from evolutionary game theory, we study how different forms of AI can influence cooperation in a population of human-like agents playing the one-shot Prisoner's Dilemma game. We find that Samaritan AI agents, which help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AIs, which help only those deemed worthy/cooperative, especially in slow-moving societies where change based on payoff differences is moderate (small intensities of selection). Only in fast-moving societies (high intensities of selection) do Discriminatory AIs promote higher levels of cooperation than Samaritan AIs. Furthermore, when it is possible to identify whether a co-player is a human or an AI, we find that cooperation is enhanced when human-like agents disregard AI performance. Our findings provide novel insights into the design and implementation of context-dependent AI systems for addressing social dilemmas.
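The abstract refers to imitation dynamics whose speed is set by the intensity of selection, and to two AI behaviours (Samaritan vs. Discriminatory). A minimal sketch of this kind of model is given below: a well-mixed population of human-like agents plays the one-shot Prisoner's Dilemma alongside AI agents, and humans update strategies via the pairwise Fermi rule. All payoff values, population sizes, and the exact AI behaviours here are illustrative assumptions, not the paper's actual parameterisation.

```python
import math
import random

# Assumed one-shot Prisoner's Dilemma payoffs with T > R > P > S
# (the paper's exact values may differ).
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def pd_payoff(mine, theirs):
    """Row player's payoff for moves 'C' (cooperate) or 'D' (defect)."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(mine, theirs)]

def ai_move(kind, co_player_move):
    """Samaritan AIs help (cooperate with) everyone; Discriminatory AIs
    help only co-players they deem cooperative (assumed observable here)."""
    if kind == 'samaritan':
        return 'C'
    return 'C' if co_player_move == 'C' else 'D'

def simulate(n_humans=50, n_ai=10, ai_kind='samaritan',
             beta=0.1, steps=2000, seed=0):
    """Return the final fraction of human cooperators.

    beta is the intensity of selection: small beta models 'slow-moving'
    societies where payoff differences only weakly drive imitation.
    """
    rng = random.Random(seed)
    humans = [rng.choice('CD') for _ in range(n_humans)]

    def avg_payoff(i):
        # Expected payoff of human i against a uniformly random co-player
        # drawn from the other humans and the AI agents.
        total = sum(pd_payoff(humans[i], humans[j])
                    for j in range(n_humans) if j != i)
        total += n_ai * pd_payoff(humans[i], ai_move(ai_kind, humans[i]))
        return total / (n_humans - 1 + n_ai)

    for _ in range(steps):
        i, j = rng.sample(range(n_humans), 2)  # learner i, role model j
        # Pairwise Fermi imitation probability.
        p_imitate = 1.0 / (1.0 + math.exp(-beta * (avg_payoff(j) - avg_payoff(i))))
        if rng.random() < p_imitate:
            humans[i] = humans[j]
    return humans.count('C') / n_humans
```

For example, `simulate(ai_kind='samaritan', beta=0.1)` versus `simulate(ai_kind='discriminatory', beta=0.1)` compares the two AI designs at a small intensity of selection; sweeping `beta` would probe the slow-moving vs. fast-moving regimes the abstract contrasts.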