Autonomous social robots embedded in human societies must be sensitive to human social interactions and thus to the moral norms and principles that guide these interactions. Actions that violate norms can lead to the violator being blamed. Robots therefore need to be able to anticipate possible norm violations and attempt to prevent them while executing actions. If norm violations cannot be prevented (e.g., in a moral dilemma situation, in which every available action violates some norm), the robot needs to be able to justify its action in order to address any potential blame. In this paper, we present a first attempt at an action execution system for social robots that can (a) detect (some) norm violations, (b) consult an ethical reasoner for guidance in moral dilemma situations, and (c) keep track of execution traces and any resulting norm-violating states in order to produce justifications.
Matthias Scheutz, Bertram F. Malle, Gordon Briggs
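To make the three capabilities named in the abstract concrete, the following is a minimal sketch of such an action-execution loop. It is not the authors' implementation; all names (NormMonitor, EthicalReasoner, ExecutionTrace, and the placeholder policies inside them) are illustrative assumptions, standing in for the components the paper describes.

```python
# Minimal sketch of the described action-execution loop (illustrative only;
# not the paper's implementation). All class and method names are assumptions.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Action:
    name: str


@dataclass
class ExecutionTrace:
    """Records executed actions and any norm violations for later justification."""
    steps: List[str] = field(default_factory=list)

    def log(self, action: Action, violations: List[str]) -> None:
        self.steps.append(f"{action.name} (violations: {violations or 'none'})")

    def justify(self) -> str:
        # Here the justification is simply the annotated trace; the paper's
        # system would produce a richer, norm-referenced explanation.
        return " -> ".join(self.steps)


class NormMonitor:
    """(a) Detects (some) norm violations a candidate action would cause."""

    def violations(self, action: Action) -> List[str]:
        # Placeholder: a real monitor would check the action and its predicted
        # resulting state against explicitly represented norms.
        return []


class EthicalReasoner:
    """(b) Consulted when every candidate action violates some norm (a dilemma)."""

    def resolve(self, candidates: List[Action]) -> Action:
        # Placeholder policy: return the first candidate; a real reasoner would
        # weigh the competing norms and the blame each violation would incur.
        return candidates[0]


def execute(candidates: List[Action],
            monitor: NormMonitor,
            reasoner: EthicalReasoner,
            trace: ExecutionTrace) -> Optional[Action]:
    # Prefer an action with no detected norm violations.
    safe = [a for a in candidates if not monitor.violations(a)]
    if safe:
        chosen = safe[0]
    elif candidates:
        # Moral dilemma: every action violates a norm, so consult the reasoner.
        chosen = reasoner.resolve(candidates)
    else:
        return None
    # (c) Record the action and its violations so justifications can be produced.
    trace.log(chosen, monitor.violations(chosen))
    return chosen
```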