We present a motion exemplar approach for estimating body configurations in monocular video. A motion correlation technique is employed to measure motion similarity at various space-time locations between the input video and stored video templates. These observations are used to predict the conditional state distributions of exemplars and joint positions. Exemplar sequence selection and joint position estimation are then solved by approximate inference using Gibbs sampling and gradient ascent. The presented approach accurately locates joint positions even for people wearing textured clothing. Results are reported on a dataset containing slow, fast, and inclined walking videos of several people captured from different viewing angles, and they demonstrate an overall improvement over previous methods.
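To illustrate the exemplar-selection step described above, the following is a minimal sketch of Gibbs sampling over a per-frame exemplar sequence, where each frame's conditional combines a motion-correlation likelihood with temporal smoothness between neighbouring frames. It is not the paper's implementation: the feature representation, similarity measure, transition model, and all names (`motion_correlation`, `gibbs_exemplar_sequence`, `n_sweeps`) are hypothetical assumptions used only for illustration.

```python
# Illustrative sketch only; all functions and parameters here are assumptions,
# not the authors' actual model or features.
import numpy as np

rng = np.random.default_rng(0)

def motion_correlation(frame_feat, template_feat):
    """Toy motion-similarity score between an input-frame feature and a template feature."""
    return float(np.exp(-np.linalg.norm(frame_feat - template_feat) ** 2))

def gibbs_exemplar_sequence(video_feats, exemplars, transition, n_sweeps=50):
    """Sample one exemplar index per frame, conditioning each frame on its neighbours."""
    T, K = len(video_feats), len(exemplars)
    states = rng.integers(K, size=T)
    for _ in range(n_sweeps):
        for t in range(T):
            # Unnormalised log-conditional: observation likelihood plus temporal smoothness.
            logp = np.array([np.log(motion_correlation(video_feats[t], exemplars[k]) + 1e-12)
                             for k in range(K)])
            if t > 0:
                logp += np.log(transition[states[t - 1]] + 1e-12)
            if t < T - 1:
                logp += np.log(transition[:, states[t + 1]] + 1e-12)
            p = np.exp(logp - logp.max())
            states[t] = rng.choice(K, p=p / p.sum())
    return states

# Tiny synthetic example: 10 frames, 4 exemplar templates, uniform transition model.
video_feats = rng.normal(size=(10, 8))
exemplars = rng.normal(size=(4, 8))
transition = np.full((4, 4), 0.25)
print(gibbs_exemplar_sequence(video_feats, exemplars, transition))
```

In the full method, the sampled exemplar sequence would then seed a continuous refinement of joint positions (e.g. by gradient ascent on the conditional state distributions); that stage is omitted here.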