Most methods for human motion tracking are based on modeling human dynamics during action execution. Even in a small example space of human activities, the variation in action execution requires modeling a large number of uncertainties. This paper proposes a novel approach to motion tracking that avoids the tedious work of modeling human kinematics. The approach is based on anthropometric and multi-view geometric constraints, which have been successfully employed in the action recognition framework. The performance of the method is demonstrated on several different human actions.
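To make the two kinds of constraints named above concrete, the following is a minimal, hypothetical sketch (not taken from the paper): an anthropometric constraint penalizing deviation of estimated limb lengths from known subject measurements, and a multi-view geometric constraint in the form of the epipolar relation between corresponding image points. All function names, variables, and the example values are illustrative assumptions.

```python
import numpy as np

def limb_length_residual(joints_3d, limbs, expected_lengths):
    """Anthropometric constraint (illustrative): deviation of estimated
    limb lengths from expected, subject-specific lengths."""
    residuals = []
    for (i, j), expected in zip(limbs, expected_lengths):
        length = np.linalg.norm(joints_3d[i] - joints_3d[j])
        residuals.append(length - expected)
    return np.array(residuals)

def epipolar_residual(x1, x2, F):
    """Multi-view geometric constraint (illustrative): corresponding points
    in two views should satisfy x2^T F x1 = 0 for the fundamental matrix F."""
    x1_h = np.append(x1, 1.0)  # homogeneous coordinates
    x2_h = np.append(x2, 1.0)
    return float(x2_h @ F @ x1_h)

# Example with made-up data: one limb (shoulder-elbow) of expected length 0.45 m.
joints = np.array([[0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.45]])
print(limb_length_residual(joints, [(0, 1)], [0.45]))  # -> [0.]
```

In a tracking setting, residuals of this kind would typically be stacked and minimized jointly over candidate poses, but the specific formulation used in the paper is not given in this abstract.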