In this paper we present a template-based motion-estimation method for tracking two-handed gestures. The method fully exploits both the temporal motion and the spatial luminance information of the gesturing. The dominant motion of the detected region corresponding to the tracked object (a hand or the head) is computed, and the object template is warped with this motion to yield a prediction template. Combined with static segmentation by the watershed algorithm, the warped template is then updated by comparing each sub-region with the prediction template. Tracking results for a set of two-handed command gestures demonstrate the performance of the method.
Yu Huang, Thomas S. Huang, Heinrich Niemann
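As a rough illustration of the tracking loop summarized above (not the authors' implementation), the following Python/OpenCV sketch estimates the dominant affine motion of the tracked region, warps the object template into a prediction template, over-segments the current frame with the watershed transform, and keeps the sub-regions that largely overlap the prediction. All thresholds, feature parameters, and the watershed seeding scheme are assumed values chosen for demonstration only.

```python
"""Illustrative sketch of template-based tracking with dominant-motion
warping and watershed-based template updating (assumed parameters)."""

import cv2
import numpy as np

IDENTITY = np.float32([[1, 0, 0], [0, 1, 0]])


def dominant_affine_motion(prev_gray, curr_gray, region_mask):
    """Estimate the dominant (affine) motion of the pixels in region_mask."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01,
                                  minDistance=5, mask=region_mask)
    if pts is None or len(pts) < 3:
        return IDENTITY
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    if good.sum() < 3:
        return IDENTITY
    M, _ = cv2.estimateAffine2D(pts[good], nxt[good], method=cv2.RANSAC)
    return M if M is not None else IDENTITY


def watershed_regions(frame_bgr):
    """Static over-segmentation of the frame by the watershed transform,
    seeded from low-gradient basins (a generic seeding choice, assumed)."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
    _, markers = cv2.connectedComponents((grad < 10).astype(np.uint8))
    return cv2.watershed(frame_bgr, markers)


def track_step(prev_bgr, curr_bgr, template_mask):
    """One tracking step for one object (a hand or the head).

    template_mask: uint8 mask (0/255) of the object in the previous frame.
    Returns the updated template mask for the current frame."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # 1. Dominant motion of the tracked region, then warp the object
    #    template with it to obtain the prediction template.
    M = dominant_affine_motion(prev_gray, curr_gray, template_mask)
    h, w = template_mask.shape
    prediction = cv2.warpAffine(template_mask, M, (w, h))

    # 2. Compare each watershed sub-region with the prediction template and
    #    keep those mostly inside it (0.5 is an assumed overlap threshold).
    labels = watershed_regions(curr_bgr)
    updated = np.zeros_like(template_mask)
    for lab in np.unique(labels):
        if lab <= 0:                       # skip watershed ridges (-1)
            continue
        region = labels == lab
        if (prediction[region] > 0).mean() > 0.5:
            updated[region] = 255
    return updated
```

In a two-handed gesture setting, `track_step` would be called once per frame for each tracked object (left hand, right hand, head) with its own template mask.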