We consider a resource access control scenario in an open multi-agent system. We specify a mutable set of rules that determine how resource allocation is decided, and assume only that agent behaviour with respect to these rules is either selfish or responsible. We then study how a combination of learning, reputation, and voting can be used, in the absence of any centralised enforcement mechanism, to ensure that behaving responsibly is preferable to behaving selfishly. This result indicates how local adaptation with respect to a set of rules can be leveraged to achieve an intended `global' system property.
Hugo Carr, Jeremy V. Pitt, Alexander Artikis