To create a robot with a mind of its own, we extended a formalized version of a model that explains affect-driven interaction with mechanisms for goal-directed behavior. In simulation experiments with intelligent software agents, the agents preferred affect-driven decision options over rational ones, even in situations where choosing an option with low expected utility is irrational. This behavior counters current decision-making models, which generally have a hedonic bias and always select the option with the highest expected utility.
Johan F. Hoorn, Matthijs Pontier, Ghazanfar F. Siddiqui
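To make the contrast concrete, here is a minimal Python sketch, assuming a simple linear blend of expected utility and affective value. The Option fields, the affect_weight parameter, and the scoring rule are illustrative assumptions for exposition, not the authors' formalization:

    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        expected_utility: float  # probability-weighted payoff of the option
        affective_value: float   # appraisal-based value (hypothetical scale 0..1)

    def hedonic_choice(options):
        """Hedonic-bias policy: always select the highest expected utility."""
        return max(options, key=lambda o: o.expected_utility)

    def affect_driven_choice(options, affect_weight=0.7):
        """Blend affective appraisal with expected utility.

        affect_weight is a hypothetical tuning parameter; with a high
        value, affect can override expected utility.
        """
        def score(o):
            return ((1 - affect_weight) * o.expected_utility
                    + affect_weight * o.affective_value)
        return max(options, key=score)

    options = [
        Option("rational", expected_utility=0.9, affective_value=0.2),
        Option("affect-driven", expected_utility=0.3, affective_value=0.9),
    ]

    print(hedonic_choice(options).name)        # -> rational
    print(affect_driven_choice(options).name)  # -> affect-driven

Under these assumed weights, the hedonic policy picks the high-utility option, while the affect-driven policy prefers the low-utility but affectively valued one, mirroring the behavior reported in the abstract.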