Stephen J. Munroe, Michael Luck, Mark d'Inverno

Traditional goal-oriented approaches to building intelligent agents consider only the absolute satisfaction of goals. In continuous domains, however, there may be many instances in which a goal state can only be partially satisfied. In these situations the traditional symbolic goal representation must be modified so that an agent can determine a worth value both for a goal state and for any state approximating it. In our work we use the concept of worth in two ways. First, we propose a mechanism by which the worth of a goal is set dynamically as a function of the intensity of an underlying motivation. Second, we determine the worth of any state in relation to a goal through a metric that measures the proximity of an environmental state to the goal. In this way it is possible to make judgements about the relative satisfaction an environmental state offers with respect to a goal.

Categories and Subject Descriptors
I.2.11 [Artificial Intelligence]: Distributed Artificial Intelligence
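As a rough illustration of the two uses of worth described above, here is a minimal sketch in Python. It is not the authors' implementation: the linear mapping from motivation intensity to goal worth, the Euclidean proximity metric, the exponential decay of worth with distance, and all names (goal_worth, proximity, state_worth, intensity, scale) are illustrative assumptions.

```python
import math

def goal_worth(intensity: float, max_worth: float = 1.0) -> float:
    """Worth of a goal as a function of the intensity of its underlying
    motivation. A clamped linear mapping is assumed here; the abstract
    only requires that worth be set dynamically from intensity."""
    return max_worth * max(0.0, min(1.0, intensity))

def proximity(state, goal) -> float:
    """Distance between an environmental state and a goal state, both
    taken as points in a continuous domain (Euclidean metric assumed)."""
    return math.sqrt(sum((s - g) ** 2 for s, g in zip(state, goal)))

def state_worth(state, goal, intensity: float, scale: float = 1.0) -> float:
    """Worth a state offers relative to a goal: full goal worth at the
    goal itself, decaying with distance (exponential decay is an
    illustrative choice)."""
    return goal_worth(intensity) * math.exp(-proximity(state, goal) / scale)

# Usage: a state nearer the goal offers more relative satisfaction.
goal = (10.0, 10.0)
near, far = (9.0, 10.5), (4.0, 2.0)
assert state_worth(near, goal, intensity=0.8) > state_worth(far, goal, intensity=0.8)
```

Under these assumptions, an agent can rank partially satisfying states by their worth relative to a goal rather than testing only for absolute goal satisfaction.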