Most pose (3D position and 3D orientation) tracking methods using vision require a priori knowledge about the environment and correspondences between 3D environment features and 2...
We present a general method for real-time, vision-only single-camera simultaneous localisation and mapping (SLAM) — an algorithm which is applicable to the localisation of any ca...
Andrew J. Davison, Walterio W. Mayol-Cuevas, David...
We present a demonstrated and commercially viable self-tracker, using robust software that fuses data from inertial and vision sensors. Compared to infrastructure-based trackers, s...
This paper describes new vision-based registration methods utilizing not only cameras on a user’s head-mounted display but also a bird’s-eye view camera that observes the user...
A wearable low-power hybrid vision-inertial tracker has been demonstrated based on a flexible sensor fusion core architecture, which allows easy reconfiguration by plugging in dif...
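Several of the abstracts above fuse inertial and vision data. As a minimal illustration of that idea — not the specific fusion core any of these papers describe — the sketch below runs a scalar Kalman filter in which inertial rate data drives the predict step and an occasional vision-based position fix drives the update step. All function names and numeric values are illustrative assumptions.

```python
# Minimal sketch of hybrid inertial-vision fusion with a 1D Kalman filter.
# Hypothetical example, not the tracker from the papers above: inertial
# rate data propagates the state, a vision fix corrects it.

def predict(x, p, rate, dt, q):
    """Propagate state with an inertial rate measurement (process noise q)."""
    x = x + rate * dt
    p = p + q
    return x, p

def update(x, p, z, r):
    """Correct the state with a vision-based position measurement (noise r)."""
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # blend prediction and measurement
    p = (1.0 - k) * p        # uncertainty shrinks after the fix
    return x, p

# Example: integrate a constant rate of 1.0 unit/s for 10 steps of 0.1 s,
# then fuse a single vision fix at z = 0.9.
x, p = 0.0, 1.0
for _ in range(10):
    x, p = predict(x, p, rate=1.0, dt=0.1, q=0.01)
x, p = update(x, p, z=0.9, r=0.05)
```

The vision measurement pulls the drifting inertial estimate back toward the fix, and the variance drops sharply after the update — the qualitative behaviour that motivates hybrid trackers.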