This paper addresses the problem of underwater scene understanding from multisensor data. Acoustic and optical devices onboard an underwater vehicle are used to sense the environment and produce an output that is readily understandable even by an inexperienced operator. The main idea is to integrate the multiple sensory data by geometrically registering them to a model of the scene. In this way, the vehicle pose is derived, and the model objects can be superimposed on the actual images, generating an augmented-reality representation. Results on a real underwater scene are provided, showing the effectiveness of the proposed approach.
Andrea Fusiello, Riccardo Giannitrapani, V. Isaia,
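As a concrete illustration of the registration-then-overlay idea described in the abstract, the following is a minimal Python sketch using OpenCV's perspective-n-point (PnP) solver: given 2D-3D correspondences between image features and a known scene model, it recovers the camera (hence vehicle) pose and reprojects the model onto the image. This is an assumption-laden toy, not the paper's method: the paper fuses acoustic and optical data, whereas here a single calibrated pinhole camera is assumed, and all numeric values (model points, intrinsics, the synthesized "true" pose) are placeholders.

```python
import numpy as np
import cv2

# 3D points of the known scene model, in the model (world) frame.
# In the paper's setting these would come from an a-priori model of the
# underwater structure; the coordinates here are made up.
model_pts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0],
                      [0, 1, 0], [0.5, 0.5, 1], [1, 0.5, 1]],
                     dtype=np.float64)

# Assumed calibrated pinhole camera (fx, fy, cx, cy are placeholders),
# with lens distortion taken as already corrected.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Synthesize the 2D observations from an arbitrary "true" pose so the
# example is self-contained; in practice these would be feature
# detections matched to the model.
rvec_true = np.array([0.1, -0.2, 0.05])
tvec_true = np.array([0.2, -0.1, 4.0])
image_pts, _ = cv2.projectPoints(model_pts, rvec_true, tvec_true, K, dist)
image_pts = image_pts.reshape(-1, 2)

# Registration step: solve the PnP problem, recovering the camera
# (and therefore vehicle) pose with respect to the model frame.
ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, dist)
assert ok, "pose estimation failed"

# Augmentation step: reproject the model with the estimated pose and
# draw it over the camera frame (a blank stand-in image here).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
proj, _ = cv2.projectPoints(model_pts, rvec, tvec, K, dist)
for u, v in proj.reshape(-1, 2):
    cv2.circle(frame, (int(round(u)), int(round(v))), 4, (0, 255, 0), -1)

print("estimated rotation (Rodrigues):", rvec.ravel())
print("estimated translation:", tvec.ravel())
```

The estimated pose should reproduce `rvec_true` and `tvec_true` up to numerical noise, since the correspondences were generated without error; with real detections, the residual between reprojected model points and observed features measures the quality of the registration.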