We propose a method for human activity recognition in videos based on shape analysis. We define local shape descriptors at interest points on the detected contour of the person performing the action and build an action descriptor using a Bag of Features approach. We additionally exploit the temporal relation between matching interest points across successive video frames. An SVM is then trained on these action descriptors to classify the activity in the scene. The method is invariant to the length of the video sequence, making it suitable for online activity recognition. We demonstrate results on an action database of nine actions (e.g., walk, jump, bend) performed by twenty people in indoor and outdoor scenarios. The proposed method achieves an accuracy of 87%, which is comparable to other state-of-the-art methods.
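The length-invariance claim follows from the Bag of Features encoding: a video of any duration is mapped to a fixed-length histogram of codeword counts. The following is a minimal sketch of that pipeline, not the paper's implementation; the descriptor generator, the 3 mock classes, the 8-D descriptor size, and the codebook size of 16 are all illustrative assumptions, with synthetic data standing in for real contour-based shape descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-in: each video yields a variable-length set of
# local 8-D shape descriptors; in the actual method these would be
# computed at interest points on the detected human contour.
def video_descriptors(label, n_points):
    return rng.normal(loc=label, scale=0.5, size=(n_points, 8))

labels = rng.integers(0, 3, size=60)  # 3 mock action classes
videos = [video_descriptors(lbl, rng.integers(20, 40)) for lbl in labels]

# Build a codebook over all descriptors, then encode each video as a
# normalized histogram of codeword occurrences (Bag of Features).
# The histogram length is fixed (k bins), independent of video length.
k = 16
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.vstack(videos))

def bof_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X = np.array([bof_histogram(v) for v in videos])

# An SVM classifies the fixed-length action descriptors.
clf = SVC(kernel="rbf").fit(X, labels)
acc = clf.score(X, labels)
```

Because every video, however long, reduces to the same k-bin histogram before classification, the classifier can be applied to a growing video stream at any point, which is what makes the approach amenable to online recognition.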