Itai Katz, Kevin Gabayan, Hamid K. Aghajan

Abstract. In this project we present a framework for a multi-touch surface using multiple cameras. With an overhead camera and a side-mounted camera we determine the three-dimensional coordinates of the fingertips and detect touch events. We interpret these events as hand gestures, which can be generalized into commands for manipulating applications. We offer an example application: a multi-touch finger painting program.

1 Motivation

Traditional human input devices such as the keyboard and mouse are not sufficiently expressive to capture more natural and intuitive hand gestures. A richer gestural vocabulary may be enabled through devices that capture and understand more gestural information. Interactive tables that detect fingertip touches and interpret these events as gestures have succeeded in providing a richer input alternative. Touch detection estimates in overhead-camera-based systems suffer from poor depth resolution, causing a system to report touches when a user's finger merely hovers near the surface.
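To make the depth-resolution problem concrete, the following is a minimal sketch (not the system described in this paper) of how a side-view height estimate can disambiguate hovering from touching: the overhead camera supplies the (x, y) fingertip position, the side camera supplies an assumed height above the surface, and a threshold decides whether a touch event is reported. The function name, threshold value, and units are illustrative assumptions.

import numpy as np

# Assumed threshold (mm): fingertips above this height are treated as hovering.
TOUCH_HEIGHT_MM = 5.0

def fuse_fingertip(overhead_xy, side_height_mm):
    """Combine an overhead (x, y) estimate with a side-view height estimate
    into a 3D fingertip coordinate and a touch/no-touch decision."""
    x, y = overhead_xy
    z = side_height_mm
    is_touch = z <= TOUCH_HEIGHT_MM
    return np.array([x, y, z]), is_touch

# Example: a fingertip 2 mm above the surface is reported as a touch,
# while one hovering at 12 mm is not.
point, touched = fuse_fingertip((320.0, 240.0), 2.0)
print(point, touched)  # [320. 240.   2.] True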