
ISVC 2010, Springer

Egocentric Visual Event Classification with Location-Based Priors

We present a method for visual classification of actions and events captured from an egocentric point of view. The method tackles the challenge of a moving camera by creating deformable graph models for classification of actions. Action models are learned from low resolution, roughly stabilized difference images acquired using a single monocular camera. In parallel, raw images from the camera are used to estimate the user's location using a visual Simultaneous Localization and Mapping (SLAM) system. Action-location priors, learned using a labeled set of locations, further aid action classification and bring events into context. We present results on a dataset collected within a cluttered environment, consisting of routine manipulations performed on objects without tags.
Sudeep Sundaram, Walterio W. Mayol-Cuevas
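The abstract describes fusing an appearance-based action classifier with action priors conditioned on the user's SLAM-estimated location. A minimal sketch of that kind of fusion is given below; it is not the authors' implementation, and the action labels, location labels, prior table, and function names are all hypothetical, assuming the classifier and the localizer are available as black boxes producing normalized scores.

```python
import numpy as np

# Hypothetical label sets (not from the paper).
ACTIONS = ["pour_kettle", "open_fridge", "chop_vegetables"]
LOCATIONS = ["sink", "fridge", "worktop"]

# Hypothetical prior table P(action | location), learned from labeled locations.
# Rows correspond to LOCATIONS, columns to ACTIONS; each row sums to 1.
ACTION_GIVEN_LOCATION = np.array([
    [0.6, 0.1, 0.3],   # sink
    [0.1, 0.8, 0.1],   # fridge
    [0.3, 0.1, 0.6],   # worktop
])

def classify_with_location_prior(action_likelihoods, location_belief):
    """Fuse appearance-model scores with a location-based action prior.

    action_likelihoods : P(observation | action) from the action model,
                         shape (len(ACTIONS),).
    location_belief    : belief over LOCATIONS from the SLAM/place module,
                         shape (len(LOCATIONS),).
    Returns the MAP action label and the fused posterior.
    """
    # Marginalize the prior over the location belief:
    # P(action) = sum_l P(action | l) * P(l).
    prior = location_belief @ ACTION_GIVEN_LOCATION
    posterior = action_likelihoods * prior
    posterior /= posterior.sum()
    return ACTIONS[int(np.argmax(posterior))], posterior

if __name__ == "__main__":
    # Example: the appearance model is uncertain, but the localizer is
    # confident the user is at the fridge, so the prior resolves the tie.
    likelihoods = np.array([0.40, 0.35, 0.25])
    location_belief = np.array([0.05, 0.90, 0.05])
    label, post = classify_with_location_prior(likelihoods, location_belief)
    print(label, post)
```

In this sketch the location acts purely as a prior that reweights the classifier's likelihoods; the paper's actual model (deformable graph models over difference images) is not reproduced here.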
Added 13 Feb 2011
Updated 13 Feb 2011
Type Conference
Year 2010
Where ISVC
Authors Sudeep Sundaram, Walterio W. Mayol-Cuevas