We present a new algorithm that computes the head motion between two views from the correspondences of five feature points (eye corners, mouth corners, and nose top), together with zero or more additional image point matches. The algorithm exploits physical properties of these feature points, such as the left-right symmetry of the face, to significantly improve the robustness of head motion estimation. The idea extends easily to any number of feature point correspondences.
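To make the symmetry idea concrete, the sketch below shows one possible way such a constraint reduces the number of unknowns: the eye and mouth corners mirror each other across the facial symmetry plane (here x = 0), and the nose top lies on that plane, so the 15 free coordinates of five 3D points collapse to 8 parameters. This is an illustrative parameterization under assumed conventions, not the paper's exact formulation.

```python
import numpy as np

def five_points_from_symmetry(params):
    """Build the five 3D feature points from a reduced parameter set.

    Assumes the face is symmetric about the x = 0 plane: each eye/mouth
    corner pair shares (y, z) and mirrors in x, and the nose top has x = 0.
    Hypothetical parameterization for illustration only.
    """
    ex, ey, ez, mx, my, mz, ny, nz = params
    return np.array([
        [-ex, ey, ez],   # left eye corner
        [ ex, ey, ez],   # right eye corner (mirror of the left)
        [-mx, my, mz],   # left mouth corner
        [ mx, my, mz],   # right mouth corner (mirror of the left)
        [0.0, ny, nz],   # nose top, constrained to the symmetry plane
    ])

# 8 parameters instead of 15 free coordinates
points = five_points_from_symmetry([3.0, 1.0, 0.0, 2.0, -3.0, 0.5, -1.0, 2.0])
```

In a motion-estimation setting, optimizing over these 8 structure parameters (plus the rigid motion) rather than 15 independent coordinates is what enforces the physical constraint and improves robustness.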