Leveraging temporal, contextual and ordering constraints for recognizing complex activities in video

We present a scalable approach to recognizing and describing complex activities in video sequences. We are interested in long-term, sequential activities that may have several parallel streams of action. Our approach integrates temporal, contextual and ordering constraints with output from low-level visual detectors to recognize complex, long-term activities. We argue that a hierarchical, object-oriented design makes our solution scalable, in that higher-level reasoning components are independent of the particular low-level detector implementation and recognition of additional activities and actions can easily be added. Three major components realize this design: a dynamic Bayesian network structure for representing activities composed of partially ordered sub-actions, an object-oriented action hierarchy for building arbitrarily complex action detectors, and an approximate Viterbi-like algorithm for inferring the most likely observed sequence of actions. Additionally...
Benjamin Laxton, Jongwoo Lim, David J. Kriegman
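As a point of reference for the inference component mentioned in the abstract, the sketch below shows plain Viterbi decoding for a discrete hidden-state chain: dynamic programming over log-probability tables to recover the most likely state sequence. It is only a minimal illustration under assumed toy transition, emission, and prior tables (the names log_trans, log_emit, and log_prior are hypothetical); the paper's approximate, DBN-based variant for partially ordered sub-actions is not reproduced here.

```python
import numpy as np

def viterbi(log_trans, log_emit, log_prior, observations):
    """Generic Viterbi decoding for a discrete hidden-state chain.

    log_trans[i, j]: log P(state j | state i)
    log_emit[j, o]:  log P(observation o | state j)
    log_prior[j]:    log P(initial state j)
    observations:    sequence of observation indices
    Returns the most likely hidden state sequence as a list of indices.
    """
    n_states = log_trans.shape[0]
    T = len(observations)
    score = np.full((T, n_states), -np.inf)    # best log-prob of any path ending in each state
    back = np.zeros((T, n_states), dtype=int)  # backpointers for path recovery

    score[0] = log_prior + log_emit[:, observations[0]]
    for t in range(1, T):
        for j in range(n_states):
            cand = score[t - 1] + log_trans[:, j]
            back[t, j] = np.argmax(cand)
            score[t, j] = cand[back[t, j]] + log_emit[j, observations[t]]

    # Trace back the highest-scoring path from the final time step.
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: two hidden "actions" emitting two observable detector symbols.
if __name__ == "__main__":
    log_trans = np.log(np.array([[0.8, 0.2], [0.3, 0.7]]))
    log_emit = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
    log_prior = np.log(np.array([0.6, 0.4]))
    print(viterbi(log_trans, log_emit, log_prior, [0, 0, 1, 1, 1]))
```

Roughly, the hidden states here play the role of sub-actions and the observations the role of low-level detector outputs; the paper's model additionally handles partial ordering and parallel streams of action, which this chain-structured sketch does not.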
Added 12 Oct 2009
Updated 28 Oct 2009
Type Conference
Year 2007
Where CVPR
Publisher IEEE