We present a novel affective goal selection mechanism for decision-making in agents with limited computational resources (e.g., robots operating under real-time constraints). We argue that when deciding whether to undertake some action, affective states can serve as subjective estimates of the likelihood of that action succeeding. Given that affective states may reflect, in part, the recent history of successes and failures for a given action type, their role in action selection can be viewed as analogous to temporal probabilistic decision models such as Markov decision processes. We show how "affect-influenced decision making" can provide low-cost mechanisms for breaking out of potentially costly sequences of failed actions in the absence of either knowing or being able to compute the actual utility of performing a particular action.
Paul W. Schermerhorn, Matthias Scheutz
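The core idea can be illustrated with a minimal sketch (not the authors' implementation; the class, decay factor, and threshold are assumptions for illustration): each action type carries an "affect" value maintained as an exponential moving average of recent outcomes, which serves as a cheap proxy for success probability and lets the agent abandon actions that keep failing without computing their true expected utility.

```python
class AffectiveSelector:
    """Hypothetical sketch of affect-influenced action selection:
    per-action 'affect' is a running estimate of recent success
    probability, updated cheaply after each outcome."""

    def __init__(self, actions, decay=0.8, threshold=0.3):
        # affect[a] approximates P(action a succeeds), biased toward
        # recent history via the decay factor (higher decay = longer memory)
        self.affect = {a: 0.5 for a in actions}
        self.decay = decay          # weight given to the past estimate
        self.threshold = threshold  # abandon actions whose affect falls below this

    def record(self, action, succeeded):
        # Exponential moving average: recent outcomes dominate,
        # so a run of failures quickly drives affect down.
        outcome = 1.0 if succeeded else 0.0
        self.affect[action] = self.decay * self.affect[action] + (1 - self.decay) * outcome

    def choose(self):
        # Skip actions whose affect has dropped below threshold, breaking
        # out of a failure loop; fall back to all actions if none qualify.
        viable = {a: v for a, v in self.affect.items() if v >= self.threshold}
        pool = viable or self.affect
        return max(pool, key=pool.get)
```

For example, after several consecutive failures of one action its affect decays below the threshold, and the selector switches to an alternative even though no explicit utility computation was performed.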