We present our current research on the implementation of gaze as an efficient and usable pointing modality, complementary to speech, for interacting with augmented objects in everyday environments and with large displays, especially immersive virtual reality environments such as reality centres and CAVEs. We also address issues related to the use of gaze as the main interaction input modality. We have designed and developed two operational user interfaces: one providing motor-disabled users with easy gaze-based access to map applications and graphical software; the other for iteratively testing and improving the usability of gaze-contingent displays.