Unmanned Aerial Vehicles (UAVs) are playing an increasing role in gathering information about objects on the ground. In particular, a key problem is to detect and classify objects from a sequence of camera images. However, existing systems typically adopt an idealised model of sensor observations, assuming they are independent and take the form of maximum-likelihood predictions of an object’s class. In contrast, real vision systems produce output that can be highly correlated and corrupted by noise. Traditional approaches can therefore lead to inaccurate or overconfident results, which in turn lead to poor decisions about what to observe next to improve these predictions. To address these issues, we develop a Gaussian Process-based observation model that characterises the correlation between classifier outputs as a function of UAV position. We then use this to fuse classifier observations from a sequence of images and to plan the UAV’s movements. In both real and simulated...
W. T. Luke Teacy, Simon J. Julier, Renzo De Nardi,
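To make the core idea concrete, the sketch below shows one simple way such a fusion step could look: classifier scores observed at nearby UAV positions are treated as correlated through a spatial kernel, and a GP posterior fuses them into a single estimate with calibrated uncertainty. This is an illustrative assumption, not the paper's actual model; the squared-exponential kernel, the function names, and all parameter values here are hypothetical choices for the sketch.

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, variance=1.0):
    # Squared-exponential kernel over UAV positions: observations taken
    # from nearby viewpoints are modelled as strongly correlated.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_fuse(positions, scores, noise_var, query):
    """Fuse noisy, correlated classifier scores observed at the given UAV
    positions into a posterior mean and variance at a query position."""
    K = rbf_kernel(positions, positions) + noise_var * np.eye(len(positions))
    k_star = rbf_kernel(query, positions)
    # Standard GP regression posterior (Rasmussen & Williams, Eq. 2.25/2.26).
    mean = k_star @ np.linalg.solve(K, scores)
    cov = rbf_kernel(query, query) - k_star @ np.linalg.solve(K, k_star.T)
    return mean, np.diag(cov)

# Two nearby (hence correlated) views of an object plus one distant view.
pos = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 5.0]])
obs = np.array([0.9, 0.8, 0.2])   # raw classifier scores for one class
mean, var = gp_fuse(pos, obs, noise_var=0.1,
                    query=np.array([[0.25, 0.0]]))
```

Because the two nearby observations are correlated, the fused mean at the query position discounts them rather than counting them as two independent pieces of evidence, which is exactly the overconfidence problem the abstract describes. The posterior variance could then drive planning, e.g. steering the UAV toward viewpoints where it remains high.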