Local space-time features capture local events in video and can be adapted to the size, frequency, and velocity of moving patterns. In this paper we demonstrate how such fe...
We propose a calibration-free gaze-sensing method using visual saliency maps. Our goal is to construct a gaze estimator using only eye images captured from a person watching a vid...
We propose a space-time Markov Random Field (MRF) model to detect abnormal activities in video. The nodes in the MRF graph correspond to a grid of local regions in the video fra...
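The abstract above is truncated, but the graph structure it names (nodes laid out as a grid of local regions across video frames) can be sketched. The 4-neighbour spatial connectivity and frame-to-frame temporal links below are assumptions for illustration, not details confirmed by the abstract:

```python
# Sketch: a space-time grid graph of the kind a space-time MRF is built on.
# Assumptions (not stated in the truncated abstract): each frame holds a
# rows x cols grid of region nodes; edges connect spatial 4-neighbours
# within a frame and the same cell across consecutive frames.

def build_spacetime_grid(num_frames, rows, cols):
    """Return (nodes, edges) for a grid-structured space-time graph."""
    nodes = [(t, r, c) for t in range(num_frames)
                       for r in range(rows)
                       for c in range(cols)]
    edges = []
    for t, r, c in nodes:
        if c + 1 < cols:            # spatial neighbour to the right
            edges.append(((t, r, c), (t, r, c + 1)))
        if r + 1 < rows:            # spatial neighbour below
            edges.append(((t, r, c), (t, r + 1, c)))
        if t + 1 < num_frames:      # temporal neighbour in the next frame
            edges.append(((t, r, c), (t + 1, r, c)))
    return nodes, edges

nodes, edges = build_spacetime_grid(num_frames=2, rows=2, cols=2)
print(len(nodes), len(edges))  # → 8 12
```

Potentials over these edges (e.g. penalising disagreement between neighbouring regions' normal/abnormal labels) would then be defined by the actual method, which the truncated abstract does not specify.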
Jaechul Kim (University of Texas at Austin), Krist...
This paper presents a framework for speech-driven synthesis of real faces from a corpus of 3D video of a person speaking. Video-rate capture of dynamic 3D face shape and colour ap...
Ioannis A. Ypsilos, Adrian Hilton, Aseel Turkmani,...
The visual effects of rain are complex. Rain consists of spatially distributed drops falling at high velocities. Each drop refracts and reflects the environment, producing sharp i...