To navigate reliably in indoor environments, a mobile robot must know where it is. This includes both the ability to globally localize the robot from scratch, as well as to track the robot's position once its location is known. Vision has long been advertised as providing a solution to these problems, but we still lack efficient solutions in unmodified environments. Many existing approaches require modification of the environment to function properly, and those that work within unmodified environments seldom address the problem of global localization. In this paper we present a novel, vision-based localization method based on the CONDENSATION algorithm [17, 18], a Bayesian filtering method that uses a sampling-based density representation. We show how the CONDENSATION algorithm can be used in a novel way to track the position of the camera platform rather than tracking an object in the scene. In addition, it can also be used to globally localize the camera platform, given a visu...
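To make the sampling-based density representation concrete, the following is a minimal sketch of one predict-weight-resample cycle of a CONDENSATION-style particle filter applied to a planar robot pose. The odometry motion model, the noise parameterization, and the user-supplied `likelihood_fn` (which would score a camera image against the visual map) are illustrative assumptions, not the measurement model of the paper.

```python
import numpy as np

def condensation_step(particles, weights, control, measurement,
                      motion_noise, likelihood_fn, rng):
    """One CONDENSATION / particle-filter update for 2-D localization.

    particles   : (N, 3) array of pose samples [x, y, theta]
    weights     : (N,) importance weights, summing to 1
    control     : (dx, dy, dtheta) odometry increment in the robot frame
    measurement : current sensor reading, passed to likelihood_fn
    motion_noise: std-devs (sx, sy, stheta) of the assumed motion model
    likelihood_fn(measurement, pose) -> p(measurement | pose)  # assumed, user-supplied
    """
    n = len(particles)

    # 1. Resample: draw N samples in proportion to their current weights.
    idx = rng.choice(n, size=n, p=weights)
    resampled = particles[idx]

    # 2. Predict: propagate each sample through the noisy motion model.
    dx, dy, dtheta = control
    theta = resampled[:, 2]
    noise = rng.normal(0.0, motion_noise, size=(n, 3))
    moved = np.empty_like(resampled)
    moved[:, 0] = resampled[:, 0] + dx * np.cos(theta) - dy * np.sin(theta) + noise[:, 0]
    moved[:, 1] = resampled[:, 1] + dx * np.sin(theta) + dy * np.cos(theta) + noise[:, 1]
    moved[:, 2] = theta + dtheta + noise[:, 2]

    # 3. Weight: score each predicted pose against the new measurement.
    new_weights = np.array([likelihood_fn(measurement, p) for p in moved])
    new_weights /= new_weights.sum()

    return moved, new_weights
```

Under this scheme, position tracking corresponds to initializing the sample set around a known pose, while global localization corresponds to initializing the samples uniformly over the free space of the map and letting repeated weighting and resampling concentrate them around the true position.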