This paper presents a gesture-based Human-Computer Interface (HCI) for navigating a learning object repository mapped into a 3D virtual environment. With this interface, the user accesses the learning objects by steering an avatar car with hand gestures. Haar-like features and the AdaBoost learning algorithm are used for our gesture recognition to achieve real-time performance and high recognition accuracy. The learning objects are represented by different traffic signs, which are grouped along the virtual highways. Compared with traditional HCI devices such as keyboards, hand gestures offer users a more intuitive and engaging way to communicate with the virtual environment.
Qing Chen, Abu Saleh Md. Mahfujur Rahman, Xiaojun
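The abstract names the two ingredients of the recognizer: Haar-like features (computed quickly via an integral image) and AdaBoost over weak classifiers. The sketch below is not the authors' implementation; it is a minimal, self-contained illustration of both ideas on toy 4x4 "images", using a single two-rectangle feature and decision stumps as the weak learners.

```python
import numpy as np

def integral_image(img):
    """Summed-area table padded with a zero row/column:
    ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))

def rect_sum(ii, x, y, w, h):
    """Sum of the w*h rectangle at (x, y), in four table lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(img):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum."""
    ii = integral_image(img)
    h, w = img.shape
    return rect_sum(ii, 0, 0, w // 2, h) - rect_sum(ii, w // 2, 0, w // 2, h)

def train_adaboost(X, y, rounds=3):
    """AdaBoost over decision stumps on the columns of X; labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)        # sample weights, re-focused each round
    stumps = []
    for _ in range(rounds):
        best = None
        for j in range(d):          # exhaustive stump search: feature,
            for thr in np.unique(X[:, j]):   # threshold, and polarity
                for s in (1.0, -1.0):
                    pred = s * np.where(X[:, j] > thr, 1.0, -1.0)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, s)
        err, j, thr, s = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        pred = s * np.where(X[:, j] > thr, 1.0, -1.0)
        w = w * np.exp(-alpha * y * pred)    # up-weight misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, thr, s))
    return stumps

def predict(stumps, X):
    """Sign of the alpha-weighted vote of the learned stumps."""
    agg = sum(a * s * np.where(X[:, j] > thr, 1.0, -1.0)
              for a, j, thr, s in stumps)
    return np.sign(agg)

# Toy data: the bright half of each 4x4 patch encodes the class.
rng = np.random.default_rng(0)
imgs, y = [], []
for label in (1.0, -1.0):
    for _ in range(20):
        img = rng.random((4, 4))
        if label > 0:
            img[:, :2] += 5.0   # bright left half  -> class +1
        else:
            img[:, 2:] += 5.0   # bright right half -> class -1
        imgs.append(img)
        y.append(label)
y = np.array(y)
X = np.array([[haar_two_rect(img)] for img in imgs])
stumps = train_adaboost(X, y)
```

The integral image makes every rectangle sum a constant-time operation, which is what lets a Haar/AdaBoost cascade run in real time; a practical system would search thousands of such rectangle features rather than the single one used here.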