RAS
2008

Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot sensors and predicts where the mobile robot can find buildings and potentially drivable ground.

Key words: semantic mapping, semi-supervised learning, aerial image, mobile robot
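To illustrate the idea of extending robot-labelled cells over an aerial image, here is a minimal semi-supervised sketch (not the paper's actual method): cells the robot has observed are labelled building or drivable ground, a trivial nearest-class-mean classifier is fitted on the aerial-image intensities of those cells, and the labels are then propagated to cells beyond sensor range. All array shapes, label codes and intensity values below are invented for the example.

```python
import numpy as np

def extend_semantic_map(aerial, ground_labels):
    """Extend a partial ground-level semantic map over an aerial image.

    aerial        -- 2-D array of aerial-image intensities (toy stand-in
                     for real image features).
    ground_labels -- 2-D array aligned with `aerial`:
                     1 = building, 0 = drivable ground, -1 = beyond
                     the robot's sensor range.
    Unknown cells are assigned the class whose mean intensity, learned
    from the robot-labelled cells, is closest (nearest class mean).
    """
    means = {c: aerial[ground_labels == c].mean() for c in (0, 1)}
    extended = ground_labels.copy()
    unknown = ground_labels < 0
    dist_ground = np.abs(aerial - means[0])
    dist_building = np.abs(aerial - means[1])
    extended[unknown] = (dist_building < dist_ground)[unknown].astype(int)
    return extended

# Toy aerial image: dark ~ ground (~0.2), bright ~ building roofs (~0.8).
aerial = np.array([[0.20, 0.25, 0.80, 0.85],
                   [0.22, 0.75, 0.78, 0.90],
                   [0.21, 0.20, 0.82, 0.80]])
# The robot only observed the left half; the right half is unknown (-1).
ground_labels = np.array([[0, 0, -1, -1],
                          [0, 1, -1, -1],
                          [0, 0, -1, -1]])
extended = extend_semantic_map(aerial, ground_labels)
```

In this toy setup the bright right-hand cells are labelled as building, so the semantic map now covers regions the robot never sensed; the paper itself uses richer segmentation of the aerial image rather than per-cell intensity thresholds.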
Martin Persson, Tom Duckett, Achim J. Lilienthal
Type: Journal
Year: 2008
Where: RAS