The primary objective of this research is to develop an efficient and intuitive deformation technique for virtual human modeling from silhouette input. With our method, the reference silhouettes (the front-view and right-view silhouettes of the synthetic human model) and the target silhouettes (the front-view and right-view silhouettes of the human in the photographs) are used to modify the synthetic human model, which is represented by a polygonal mesh. The system moves the vertices of the polygonal model so that the spatial relation between the original positions and the reference silhouettes is identical to the relation between the resulting positions and the target silhouettes. Our method is related to axial deformation, and the self-intersection problem is solved.

KEY WORDS: Deformation, View-dependent, Virtual human, 3-D object extraction.
Charlie C. L. Wang, Yu Wang, Matthew Ming-Fai Yuen
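To make the core idea of the abstract concrete, the following is a minimal sketch of silhouette-driven vertex retargeting. It is not the authors' axial-deformation method: it assumes a simplified relation in which each vertex's normalized position inside bounds derived from the reference silhouettes is reproduced inside bounds derived from the target silhouettes. The function name `retarget_vertices` and the bounding-box representation of the silhouettes are assumptions introduced purely for illustration.

```python
import numpy as np

def retarget_vertices(vertices, ref_bounds, tgt_bounds):
    """Move vertices so their normalized position relative to the target
    silhouette bounds matches their normalized position relative to the
    reference silhouette bounds (simplified, bounding-box-based sketch).

    vertices   : (N, 3) array of mesh vertex positions (x, y, z)
    ref_bounds : (min_xyz, max_xyz) bounds derived from the reference
                 front-view (x, y) and right-view (z, y) silhouettes
    tgt_bounds : corresponding bounds derived from the target silhouettes
    """
    ref_min, ref_max = (np.asarray(b, dtype=float) for b in ref_bounds)
    tgt_min, tgt_max = (np.asarray(b, dtype=float) for b in tgt_bounds)

    # Normalized coordinates of each vertex inside the reference bounds.
    t = (np.asarray(vertices, dtype=float) - ref_min) / (ref_max - ref_min)

    # Re-express the same normalized coordinates inside the target bounds,
    # preserving the spatial relation described in the abstract.
    return tgt_min + t * (tgt_max - tgt_min)


if __name__ == "__main__":
    verts = [[0.5, 1.0, 0.2],
             [0.2, 1.6, 0.1]]
    ref = ([0.0, 0.0, 0.0], [1.0, 2.0, 0.4])   # bounds from reference silhouettes
    tgt = ([0.0, 0.0, 0.0], [1.2, 1.9, 0.5])   # bounds from target silhouettes
    print(retarget_vertices(verts, ref, tgt))
```

In the paper's setting, the uniform bounding-box mapping above would be replaced by the axial-deformation framework, which handles local shape variation and avoids self-intersection; the sketch only illustrates the reference-to-target correspondence.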