Detection of motion patterns in video data can be significantly simplified by abstracting away from pixel intensity values towards representations that explicitly and compactly capture movement across space and time. A novel representation that captures the spatiotemporal distributions of motion across regions of interest, called the "Direction Map," abstracts video data by assigning a two-dimensional vector, representative of the local direction of motion, to quantized regions in space-time. Methods are presented for recovering direction maps from video, constructing direction map templates (defining target motion patterns of interest), and comparing templates to newly acquired video (for pattern detection and localization). These methods have been successfully implemented and tested (with real-time considerations) on over 6300 frames across seven surveillance/traffic videos, detecting potential targets of interest as they traverse the scene in specific ways. Results show a...
Jacob M. Gryn, Richard P. Wildes, John K. Tsotsos
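
To make the representation concrete, the following is a minimal sketch of the direction-map idea described above: a dense motion field is reduced to one unit 2D vector per quantized space-time cell, and a template direction map is compared against a recovered one. The cell sizes, the averaging of optical flow, and the cosine-based similarity are illustrative assumptions, not the authors' actual recovery or matching procedures, which are not specified in the abstract.

```python
import numpy as np

def recover_direction_map(flow, cell_size=(8, 16, 16)):
    """Reduce a dense flow field of shape (T, H, W, 2) to a direction map:
    one unit 2D vector per quantized space-time cell (assumed scheme)."""
    T, H, W, _ = flow.shape
    ct, cy, cx = cell_size
    nt, ny, nx = T // ct, H // cy, W // cx
    dmap = np.zeros((nt, ny, nx, 2))
    for t in range(nt):
        for y in range(ny):
            for x in range(nx):
                cell = flow[t*ct:(t+1)*ct, y*cy:(y+1)*cy, x*cx:(x+1)*cx]
                v = cell.reshape(-1, 2).mean(axis=0)      # average motion in the cell
                n = np.linalg.norm(v)
                dmap[t, y, x] = v / n if n > 1e-6 else 0  # keep direction only
    return dmap

def match_score(template, dmap):
    """Compare a template direction map with a recovered one via mean cosine
    similarity over cells where both contain motion (assumed measure)."""
    dots = (template * dmap).sum(axis=-1)
    active = (np.linalg.norm(template, axis=-1) > 0) & (np.linalg.norm(dmap, axis=-1) > 0)
    return dots[active].mean() if active.any() else 0.0
```

Under these assumptions, detection and localization would proceed by sliding the template's direction map over the direction map recovered from newly acquired video and thresholding the score; the thresholding scheme here is likewise illustrative rather than the paper's method.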