In this paper, we present a novel method for human action recognition from image sequences captured from arbitrary views, using the Cartesian components of optical flow velocity together with human body silhouette features. We use principal component analysis (PCA) to reduce the high-dimensional silhouette feature space to a lower-dimensional one. The action region in each image frame is represented by a Q-dimensional optical flow feature vector and an R-dimensional silhouette feature vector. Each action, for any viewing direction, is modeled by a set of hidden Markov models trained on the combined (Q + R)-dimensional feature vectors at each time instant. We evaluate the proposed method on the KU gesture database and on manually captured data. Experimental results show that different actions from arbitrary viewing directions are classified correctly, indicating the robustness of our view-independent method.
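The per-action HMM classification scheme can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the paper models continuous (Q + R)-dimensional optical-flow and silhouette features, whereas here observations are discretized into two hypothetical symbols, and the model parameters are invented purely for demonstration. A test sequence is assigned to the action whose HMM gives it the highest likelihood, computed with the forward algorithm.

```python
def forward_likelihood(obs, pi, A, B):
    """P(obs | model) via the forward algorithm for a discrete HMM."""
    n_states = len(pi)
    # Initialization: alpha_1(s) = pi(s) * B[s][obs[0]]
    alpha = [pi[s] * B[s][obs[0]] for s in range(n_states)]
    # Induction over the remaining observations
    for o in obs[1:]:
        alpha = [
            sum(alpha[sp] * A[sp][s] for sp in range(n_states)) * B[s][o]
            for s in range(n_states)
        ]
    # Termination: sum over final states
    return sum(alpha)

# Two hypothetical action models (toy parameters, not from the paper).
# They share transition and emission matrices but differ in their
# initial-state distributions.
A = [[0.8, 0.2], [0.2, 0.8]]     # state-transition probabilities
B = [[0.9, 0.1], [0.1, 0.9]]     # emission probabilities per state
models = {
    "walk": ([0.9, 0.1], A, B),  # tends to start in state 0
    "wave": ([0.1, 0.9], A, B),  # tends to start in state 1
}

def classify(obs):
    """Pick the action whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda name: forward_likelihood(obs, *models[name]))
```

For example, `classify([0, 0, 0])` returns `"walk"`, since the "walk" model assigns that symbol sequence a higher forward likelihood. A practical system would instead use continuous-emission HMMs over the PCA-reduced combined feature vectors.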