The representation of a gesture changes dynamically depending on the camera viewpoint. This viewpoint problem is difficult to solve in an environment with a single directional camera, since the shape and motion information that represents a gesture differs from one viewpoint to another. View-based methods require training data for every viewpoint, which is inefficient and introduces ambiguity into gesture recognition. In this paper, we propose a volume motion template (VMT) to overcome the viewpoint problem in a single-directional stereo camera environment. The VMT represents motion information in 3D space using disparity maps. The motion orientation is determined from the 3D motion information, and the projection of the VMT at an optimal virtual viewpoint is obtained from this motion orientation. The proposed method is not only independent of viewpoint variations but can also represent motion in depth. The proposed method has been evaluated for view-invariant representation and recognition using the gesture sequ...
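To make the idea concrete, the following is a minimal sketch of how a volume motion template could be accumulated and how a dominant 3D motion orientation could be estimated from it. It is not the paper's implementation; the function names (update_vmt, principal_motion_direction), the parameters tau and depth_bins, the timestamp-based update rule, and the eigen-analysis of the voxel cloud are all illustrative assumptions, modeled on a motion history image extended to 3D via disparity.

    import numpy as np

    # Hypothetical sketch, not the authors' code. Assumed inputs:
    #   silhouette : binary HxW mask of the moving body region in one frame
    #   disparity  : HxW disparity map aligned with the silhouette
    #   depth_bins : number of discretised depth slices (assumed parameter)
    #   tau        : temporal window in frames (assumed parameter)
    # The update mirrors a motion history image lifted to 3D: voxels touched by
    # motion in the current frame are stamped with the current time; voxels older
    # than tau frames are cleared.

    def update_vmt(vmt, silhouette, disparity, t, tau, depth_bins):
        """Accumulate one frame of 3D motion into the volume template."""
        # Quantise disparity into discrete depth slices (simple linear mapping).
        d_norm = (disparity - disparity.min()) / (np.ptp(disparity) + 1e-6)
        z_idx = np.clip((d_norm * (depth_bins - 1)).astype(int), 0, depth_bins - 1)

        ys, xs = np.nonzero(silhouette)
        zs = z_idx[ys, xs]

        vmt *= (vmt >= (t - tau))      # decay: drop voxels older than tau frames
        vmt[zs, ys, xs] = t            # stamp current motion voxels with time t
        return vmt

    def principal_motion_direction(vmt):
        """Estimate a dominant 3D motion orientation from the weighted voxels."""
        zs, ys, xs = np.nonzero(vmt)
        weights = vmt[zs, ys, xs].astype(float)
        if weights.size == 0:
            return np.array([0.0, 0.0, 1.0])   # fall back to the camera axis
        pts = np.stack([xs, ys, zs], axis=1).astype(float)
        pts -= np.average(pts, axis=0, weights=weights)
        # Principal axis of the time-weighted voxel distribution.
        cov = (pts * weights[:, None]).T @ pts / weights.sum()
        eigvals, eigvecs = np.linalg.eigh(cov)
        return eigvecs[:, -1]          # eigenvector of the largest eigenvalue

Under these assumptions, the returned direction could then be used to place a virtual viewpoint and project the volume onto a 2D template for recognition, in the spirit of the optimal-viewpoint projection described above.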