This paper presents a method for automatic multimodal person authentication using speech, face and visual speech modalities. The proposed method uses the motion information to loc...
This paper reports on work-in-progress to better understand how users visually interact with hierarchically organized semantic information. Experimental reaction time and eye move...
For decades, Fitts' law (1954) has been used to model pointing time in user interfaces. As with any rapid motor act, faster pointing movements result in increased errors. But...
Jacob O. Wobbrock, Edward Cutrell, Susumu Harada, ...
Current computational models of bottom-up and top-down components of attention are predictive of eye movements across a range of stimuli and of simple, fixed visual tasks (such a...
Saccadic averaging is the phenomenon that two simultaneously presented retinal inputs result in a saccade with an endpoint located at an intermediate position between the two stimu...