In this work we describe a set of visual routines that supports a novel, sensor-free interface between a human and virtual objects. The visual routines detect, track, and interpret pointing gestures in real time. The problem is addressed in the context of a scenario in which a user activates virtual objects displayed on a projection screen. By changing the direction of pointing with an arm extended toward the screen, the user controls the motion of the virtual objects. The vision system consists of a single overhead camera and exploits a priori knowledge of human body appearance, the interactive context, and the environment. The system operates in real time on a standard Pentium PC platform.
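The abstract does not detail the routines themselves; purely as an illustrative sketch, the fragment below shows one simple way a pointing direction could be estimated from an overhead-view silhouette: segment the body as a binary mask (e.g., by background subtraction against the known environment), take the body centroid, find the silhouette point farthest from it (the hand of the extended arm), and treat the centroid-to-hand vector as the pointing direction. The function name, the segmentation assumption, and the farthest-point heuristic are all assumptions for illustration, not the authors' actual method.

```python
# Illustrative sketch only: a minimal 2D pointing-direction estimate from an
# overhead-view body silhouette. The segmentation step, the farthest-point
# heuristic, and all names are assumptions, not the paper's routines.
import numpy as np

def pointing_direction(mask: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Estimate a pointing direction from a binary body silhouette.

    mask: HxW array, nonzero where the user's body and arm are segmented
          (e.g. by background subtraction against the known environment).
    Returns (centroid, unit direction) in (row, col) image coordinates.
    """
    ys, xs = np.nonzero(mask)
    points = np.stack([ys, xs], axis=1).astype(float)
    centroid = points.mean(axis=0)
    # With the arm extended toward the screen, the hand is typically the
    # silhouette point farthest from the body centroid in the overhead view.
    dists = np.linalg.norm(points - centroid, axis=1)
    hand = points[np.argmax(dists)]
    direction = hand - centroid
    return centroid, direction / np.linalg.norm(direction)

# Tiny synthetic example: a blob (torso) with a thin protrusion (arm).
mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 40:60] = 1   # torso
mask[48:52, 60:90] = 1   # arm extended toward the right
c, d = pointing_direction(mask)
print("centroid:", c, "direction:", d)  # direction ~ (0, 1): pointing right
```

Mapping such an image-plane direction onto positions on the projection screen would additionally require a calibration between the overhead camera and the screen, which this sketch omits.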