In this paper we combine a state-of-the-art eye center locator and a new eye corner locator into a system that estimates the visual gaze of a user in a controlled environment (e.g. sitting in front of a screen). To minimize computational costs, the eye corner locator is built upon the same technology as the eye center locator, adapted for this specific task. We claim that, when high mapping precision is not a priority of the application, the system can achieve acceptable accuracy without requiring additional dedicated hardware. We believe that this could bring new gaze-based methodologies for human-computer interaction into the mainstream.