In this paper, we present an unconstrained visual gaze estimation system. The proposed method extracts the visual field of view of a person looking at a target scene in order to estimate the approximate location of interest (visual gaze). The novelty of the system lies in the joint use of head pose and eye location information to fine-tune the visual gaze estimated from the head pose alone, so that the system can be used in multiple scenarios. The improvements obtained by the proposed approach are validated on the Boston University head pose dataset, on which the standard deviation of the