Historically, non-rigid shape recovery and articulated pose estimation have evolved as separate fields. Recent methods for non-rigid shape recovery have focused on improving the algorithmic formulation, but have only considered reconstruction from point-to-point correspondences. In contrast, many techniques for pose estimation have followed a discriminative approach, which allows for the use of more general image cues. However, these techniques typically require large training sets and suffer from the fact that standard discriminative methods do not enforce constraints between output dimensions. In this paper, we combine ideas from both domains and propose a unified framework for articulated pose estimation and 3D surface reconstruction. We address some of the shortcomings of discriminative methods by explicitly constraining their predictions. Furthermore, our formulation allows generative and discriminative methods to be combined into a single, common framework.