In this paper, we propose a method for simultaneous human full-body pose tracking and activity recognition from time-of-flight (ToF) camera images. Simple and sparse depth cues are used together with a prior motion model that constrains the tracking problem. Our model consists of low-dimensional manifolds of feasible poses for multiple activities. A particle filter allows us to efficiently evaluate pose hypotheses across different activities and to select the one that is most consistent with the observed depth image cues. We relate poses in the manifold embeddings to full-body poses and to observable depth cues using non-linear regression mappings. Our method robustly detects changes in activity and adapts accordingly. We evaluate our method on a dataset containing 10 activities performed by 10 persons and show that it tracks full-body pose and classifies the performed activities with high precision.
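
To make the filtering scheme concrete, the following is a minimal sketch (not the authors' implementation) of a particle filter whose hypotheses live on per-activity low-dimensional manifolds. The mappings `to_full_pose` and `to_depth_cues`, the Gaussian cue likelihood, the random-walk dynamics, and the activity-switching probability are all illustrative placeholders standing in for the paper's learned non-linear regression mappings and ToF depth-cue observation model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PARTICLES = 200
N_ACTIVITIES = 10       # e.g. the 10 activities in the evaluation dataset
LATENT_DIM = 3          # dimensionality of each activity manifold (assumed)
SWITCH_PROB = 0.05      # per-frame probability of switching activity (assumed)

def to_full_pose(z, activity):
    """Placeholder for the learned mapping from a manifold point to a full-body pose."""
    return np.tanh(z).repeat(10)          # stand-in 30-D pose vector

def to_depth_cues(z, activity):
    """Placeholder for the learned mapping from a manifold point to expected depth cues."""
    return np.sin(z + activity)           # stand-in low-dimensional cue vector

def likelihood(observed_cues, predicted_cues, sigma=0.5):
    """Gaussian likelihood of the observed depth cues given a pose hypothesis."""
    d = observed_cues - predicted_cues
    return np.exp(-0.5 * np.dot(d, d) / sigma**2)

# Each particle carries latent manifold coordinates and a discrete activity label.
latent = rng.normal(size=(N_PARTICLES, LATENT_DIM))
activity = rng.integers(0, N_ACTIVITIES, size=N_PARTICLES)

def step(observed_cues):
    """One filtering step: propagate, weight by the depth cues, resample."""
    global latent, activity
    # Propagate: random walk on the manifold; occasionally switch activity.
    latent += 0.1 * rng.normal(size=latent.shape)
    switch = rng.random(N_PARTICLES) < SWITCH_PROB
    activity[switch] = rng.integers(0, N_ACTIVITIES, size=switch.sum())
    # Weight each hypothesis by how well its predicted cues match the observation.
    weights = np.array([
        likelihood(observed_cues, to_depth_cues(z, a))
        for z, a in zip(latent, activity)
    ])
    weights /= weights.sum()
    # Resample to concentrate particles on consistent hypotheses.
    idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)
    latent, activity = latent[idx], activity[idx]
    # Output: most supported activity and the corresponding full-body pose estimate.
    best_activity = np.bincount(activity, minlength=N_ACTIVITIES).argmax()
    mean_z = latent[activity == best_activity].mean(axis=0)
    return best_activity, to_full_pose(mean_z, best_activity)

# Example: run on synthetic depth cues for a few frames.
for t in range(5):
    obs = np.sin(np.full(LATENT_DIM, 0.1 * t))
    act, pose = step(obs)
```

Classifying the activity as the label carried by the majority of particles is one simple choice here; the key design point the sketch illustrates is that pose tracking and activity recognition share a single hypothesis set, so both are resolved jointly by the same depth-cue likelihood.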