The problem of human activity recognition from visual input can be approached using manifold learning, since the silhouette (binary) images of a person undergoing a smooth motion can be represented as a manifold in the image space. While manifold learning methods allow the characterization of activity manifolds, performing activity recognition requires distinguishing between manifolds. This invariably involves the extrapolation of learned activity manifolds to new silhouettes, a task that is not fully addressed in the literature. This paper investigates and compares methods for the extrapolation of learned manifolds in the context of activity recognition. In addition, the problem of obtaining dense samples for learning human silhouette manifolds is addressed.
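To make the setting concrete, the sketch below illustrates the general idea (not the paper's specific method): binary silhouette frames are treated as points in image space, a low-dimensional activity manifold is learned from them, and unseen silhouettes are then mapped onto that learned manifold. The choice of Isomap as the manifold learner, the image size, and all variable names are assumptions for illustration only.

```python
import numpy as np
from sklearn.manifold import Isomap

def flatten_silhouettes(frames):
    """Flatten a stack of binary silhouette images (N, H, W) into
    points in image space, shape (N, H*W)."""
    frames = np.asarray(frames, dtype=float)
    return frames.reshape(frames.shape[0], -1)

# Placeholder training silhouettes from one activity (e.g., a walking cycle).
train_silhouettes = np.random.rand(200, 64, 48) > 0.5
X_train = flatten_silhouettes(train_silhouettes)

# Learn a low-dimensional manifold characterizing the activity.
embedder = Isomap(n_neighbors=10, n_components=2)
Y_train = embedder.fit_transform(X_train)

# Extrapolate the learned manifold to new, unseen silhouettes:
# Isomap's transform() provides an out-of-sample embedding.
new_silhouettes = np.random.rand(5, 64, 48) > 0.5   # placeholder data
Y_new = embedder.transform(flatten_silhouettes(new_silhouettes))

# Recognition could then compare Y_new against the embeddings of several
# learned activity manifolds (e.g., by nearest-neighbour distance).
print(Y_new.shape)  # (5, 2)
```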