Artificial agents learning human fairness

Recent advances in technology allow multi-agent systems to be deployed in cooperation with or as a service for humans. Typically, these systems are designed assuming individually rational agents, according to the principles of classical game theory. However, research in the field of behavioral economics has shown that humans are not purely self-interested: they strongly care about fairness. Therefore, multi-agent systems that fail to take fairness into account may not be sufficiently aligned with human expectations and may not reach their intended goals. In this paper, we present a computational model for achieving fairness in adaptive multi-agent systems. The model uses a combination of Continuous Action Learning Automata and the Homo Egualis utility function. The novel contribution of our work is that this function is used in an explicit, computational manner. We show that results obtained by agents using this model are compatible with experimental and analytical results on human fairness...
Steven de Jong, Karl Tuyls, Katja Verbeeck
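
The abstract names the Homo Egualis utility function as the fairness component of the model. A minimal Python sketch of that inequity-aversion utility, assuming its standard two-parameter form (the weights alpha and beta and the example payoffs below are illustrative assumptions, not values taken from the paper):

# Sketch of the Homo Egualis / inequity-aversion utility: an agent's payoff is
# reduced both when others earn more (weight alpha) and when others earn less
# (weight beta). Parameter values here are illustrative, not from the paper.

def homo_egualis_utility(payoffs, i, alpha=0.5, beta=0.25):
    """Utility of agent i given the payoff vector of all n agents."""
    n = len(payoffs)
    x_i = payoffs[i]
    # Disadvantageous inequity: others earning more than agent i.
    envy = sum(x_j - x_i for x_j in payoffs if x_j > x_i)
    # Advantageous inequity: others earning less than agent i.
    guilt = sum(x_i - x_j for x_j in payoffs if x_j < x_i)
    return x_i - (alpha / (n - 1)) * envy - (beta / (n - 1)) * guilt

# Example: with these weights, an equal split yields higher utility than a
# larger but unfair share, and a smaller share is penalized even more.
print(homo_egualis_utility([5, 5], i=0))   # 5.0
print(homo_egualis_utility([8, 2], i=0))   # 8 - 0.25*6 = 6.5
print(homo_egualis_utility([2, 8], i=0))   # 2 - 0.5*6  = -1.0
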
Type      Conference
Year      2008
Where     ATAL
Publisher Springer
Authors   Steven de Jong, Karl Tuyls, Katja Verbeeck