A Sensor Fusion Framework Using Multiple Particle Filters for Video-Based Navigation

This paper presents a sensor-fusion framework for video-based navigation. Video-based navigation offers advantages over existing approaches: road signs are superimposed directly onto the video of the road scene, rather than onto a 2-D map as in conventional navigation systems, so drivers can follow the virtual signs in the video to reach their destination. The challenges of video-based navigation require the use of multiple sensors. The proposed sensor-fusion framework has two major components: 1) a computer vision module that accurately detects and tracks the road using partition sampling and auxiliary variables and 2) a sensor-fusion module that uses multiple particle filters to integrate vision, the Global Positioning System (GPS), and geographic information systems (GIS). GPS and GIS provide prior knowledge about the road for the vision module, and the vision module, in turn, corrects GPS errors.
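
The abstract only outlines the fusion scheme. As a rough illustration of the underlying machinery, the sketch below runs a single generic particle-filter cycle that folds a coarse GPS fix and a finer vision-based road measurement into one position estimate. The one-dimensional road-coordinate state, the noise levels, and all function names are assumptions for illustration; the paper's actual framework uses multiple particle filters with partition sampling and auxiliary variables, which this minimal sketch does not reproduce.

```python
# Minimal, illustrative particle-filter sketch of GPS/vision fusion.
# The 1-D state, noise parameters, and names are assumptions,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

N = 500
particles = rng.normal(0.0, 5.0, N)  # vehicle position along the road (m)
weights = np.full(N, 1.0 / N)

def predict(particles, velocity, dt, motion_std=0.5):
    """Propagate particles with a constant-velocity motion model plus noise."""
    return particles + velocity * dt + rng.normal(0.0, motion_std, particles.size)

def update(weights, particles, z, std):
    """Reweight particles by the Gaussian likelihood of measurement z."""
    w = weights * np.exp(-0.5 * ((z - particles) / std) ** 2)
    w += 1e-300  # guard against all-zero weights
    return w / w.sum()

def resample(particles, weights):
    """Systematic resampling to counter particle degeneracy."""
    u = (rng.random() + np.arange(len(weights))) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), u)
    idx = np.minimum(idx, len(weights) - 1)  # clamp float-roundoff overflow
    return particles[idx], np.full(len(weights), 1.0 / len(weights))

# One fusion cycle: predict, then fold in a coarse GPS fix and a more
# precise vision-based road measurement, then resample.
particles = predict(particles, velocity=10.0, dt=0.1)
weights = update(weights, particles, z=1.2, std=3.0)  # GPS: coarse
weights = update(weights, particles, z=0.9, std=0.5)  # vision: fine
particles, weights = resample(particles, weights)
print("fused position estimate:", np.average(particles, weights=weights))
```

Applying the two measurements as successive likelihood updates is the standard way to fuse independent sensors in a particle filter: the tighter vision measurement dominates the posterior, while the GPS fix keeps the estimate anchored, mirroring the complementary roles the abstract describes.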
Type Journal
Year 2010
Where TITS
Authors Li Bai, Yan Wang