— This paper puts forward an approach that enables a mobile robot to recognize human manipulative actions from different single-camera views. While most related work in action recognition assumes a fixed, static camera view that is identical for training and testing, such constraints do not hold for mobile robot companions. We propose a recognition scheme that is able to generalize an action model, learned from very few data items observed from a single camera view, to varying viewpoints and different settings. We tackle the problem of compensating for the view dependence of 2D motion models on three levels. First, we pre-segment the trajectories based on an object vicinity that depends on the camera tilt and on object detections. Second, we design an interactive feature vector that represents the relative movements between the human hand and the objects. Third, we propose an adaptive HMM-based matching process that is based on a particle filter.
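To make the second level concrete, the following is a minimal sketch of how such an interactive feature vector could be computed from a 2D hand trajectory and a detected object position. The function name, the feature layout, and the way the vicinity radius enters the normalization are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
# A minimal sketch of the interactive-feature idea: instead of absolute image
# coordinates, the feature encodes the hand's motion *relative* to a detected
# object. All names and the exact feature layout are illustrative assumptions.
import numpy as np

def interactive_features(hand_traj, obj_pos, vicinity_radius):
    """Compute per-frame hand-object features from a 2D hand trajectory.

    hand_traj:       (T, 2) array of hand positions in image coordinates
    obj_pos:         (2,) array, detected object position
    vicinity_radius: scalar used to normalize distances (assumed here to be
                     derived from the camera tilt and the object detection)
    """
    rel = hand_traj - obj_pos                    # hand position relative to object
    dist = np.linalg.norm(rel, axis=1)           # hand-object distance per frame
    vel = np.gradient(hand_traj, axis=0)         # finite-difference hand velocity
    # Radial speed: positive when the hand moves away from the object,
    # negative when it approaches the object.
    radial = np.sum(vel * rel, axis=1) / np.maximum(dist, 1e-6)
    # Normalizing by the vicinity radius makes the feature largely independent
    # of the apparent object scale in the current camera view.
    return np.column_stack([dist / vicinity_radius, radial / vicinity_radius])

# Example: a hand approaching an object at (100, 80) over five frames.
hand = np.array([[140., 80.], [130., 80.], [120., 80.], [110., 80.], [102., 80.]])
print(interactive_features(hand, np.array([100., 80.]), vicinity_radius=50.0))
```

Because the features are expressed relative to the object and scaled by the vicinity radius, a sequence observed from a different viewpoint yields similar feature values, which is what allows a model trained on one view to be matched against another.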