We present a system for the automatic generation of bottom-up visual attention behaviours in virtual humans. Bottom-up attention refers to the way in which the environment solicits one's attention without regard to task-level goals. Our framework is based on the interactions of multiple components: a synthetic vision system for perceiving the virtual world, a model of bottom-up attention for early visual processing of perceived stimuli, a memory system for the storage of previously sensed data, and a gaze controller for the generation of the resulting behaviours. Our aim is to provide a sense of presence in inhabited virtual environments by endowing agents with the ability to pay attention to their surroundings.
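The component pipeline described above can be sketched in miniature. This is an illustrative assumption, not the paper's implementation: all class and function names are hypothetical, perception is stubbed with precomputed saliency scores, and the memory system is reduced to a simple novelty-decay (habituation) rule so that repeatedly attended stimuli lose their pull and gaze shifts to less familiar ones.

```python
from dataclasses import dataclass, field

@dataclass
class Stimulus:
    """A perceived object with a bottom-up saliency score (hypothetical)."""
    name: str
    saliency: float

class SyntheticVision:
    """Stand-in for the synthetic vision system: returns currently
    visible stimuli. Here the 'world' is a list of (name, saliency) pairs."""
    def perceive(self, world):
        return [Stimulus(name, sal) for name, sal in world]

class BottomUpAttention:
    """Early visual processing: pick the most salient stimulus,
    ignoring task-level goals."""
    def select(self, stimuli):
        return max(stimuli, key=lambda s: s.saliency) if stimuli else None

@dataclass
class Memory:
    """Stores how often each stimulus has been attended; novelty decays
    geometrically with repeated attention (an assumed habituation rule)."""
    counts: dict = field(default_factory=dict)
    decay: float = 0.5

    def novelty(self, name):
        return self.decay ** self.counts.get(name, 0)

    def record(self, name):
        self.counts[name] = self.counts.get(name, 0) + 1

class GazeController:
    """Turns the selected stimulus into a gaze behaviour string."""
    def gaze_at(self, stimulus):
        return f"look_at({stimulus.name})" if stimulus else "idle"

def attention_step(world, vision, attention, memory, gaze):
    """One update tick: perceive, weight by novelty, select, record, gaze."""
    stimuli = vision.perceive(world)
    weighted = [Stimulus(s.name, s.saliency * memory.novelty(s.name))
                for s in stimuli]
    target = attention.select(weighted)
    if target:
        memory.record(target.name)
    return gaze.gaze_at(target)
```

Under this sketch, a highly salient object initially captures gaze, but habituation in the memory component eventually lets a less salient, more novel object win the competition, which is the qualitative behaviour the framework is designed to produce.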