Sciweavers

Search results for "Wearable Visual Robots" (1050 results)
AAAI
2004
Self-Organizing Visual Maps
This paper deals with automatically learning the spatial distribution of a set of images. That is, given a sequence of images acquired from well-separated locations, how can they ...
Robert Sim, Gregory Dudek
IJRR
2011
Visual-Inertial Sensor Fusion: Localization, Mapping and Sensor-to-Sensor Self-calibration
Visual and inertial sensors, in combination, are able to provide accurate motion estimates and are well-suited for use in many robot navigation tasks. However, correct data fusion...
Jonathan Kelly, Gaurav S. Sukhatme
ICPR
2002
IEEE
Fusion of Range and Visual Data for the Extraction of Scene Structure Information
In this paper, a method for inferring 3D structure information based on both range and visual data is proposed. Data fusion is achieved by validating assumptions formed according ...
Haris Baltzakis, Antonis A. Argyros, Panos E. Trah...
ICPR
2010
IEEE
Visual Recognition of Types of Structural Corridor Landmarks Using Vanishing Points Detection and Hidden Markov Models
In this paper, to provide a robot with information about the structure of its environment, we propose a method to recognize types of structural corridor landmarks such as T-junct...
Youngbin Park, Sung Su Kim, Il Hong Suh
IROS
2008
IEEE
Monocular visual odometry in urban environments using an omnidirectional camera
We present a system for Monocular Simultaneous Localization and Mapping (Mono-SLAM) relying solely on video input. Our algorithm makes it possible to precisely estimate the cam...
Jean-Philippe Tardif, Yanis Pavlidis, Kostas Danii...