We present a method for tracking the 3D position of a finger, using a single camera placed several meters away from the user. After skin detection, we use motion to identify the gesticulating arm. The finger point is found by analyzing the arm's outline. To derive a 3D trajectory, we first track the 2D positions of the user's elbow and shoulder. Since a person's upper arm and lower arm have fixed lengths, the possible locations of the elbow and the finger each lie on a sphere of constant radius, centered at the shoulder and elbow, respectively. From the previously tracked body points, we reconstruct these spheres and compute the 3D positions of the elbow and finger. These steps are fully automated and require no human intervention. The presented system can be used as a visualization tool, or as a user input interface, in cases where the user prefers not to be constrained by the camera system.
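The geometric core of the reconstruction can be sketched as a ray-sphere intersection: the tracked 2D point fixes a ray from the camera, and the known arm-segment length fixes a sphere on which the joint must lie. The Python snippet below is a minimal illustration of this step under a pinhole camera model, not the paper's implementation; the intrinsics `K`, the shoulder position, the elbow pixel, and the arm length are hypothetical values chosen for the example.

```python
import numpy as np

def ray_sphere_intersection(ray_dir, center, radius):
    """Intersect the camera ray t * ray_dir (t > 0, camera at the origin)
    with a sphere of the given center and radius.  Returns the candidate
    3D points (0, 1, or 2 of them)."""
    d = ray_dir / np.linalg.norm(ray_dir)
    b = np.dot(d, center)                    # half the linear coefficient
    c = np.dot(center, center) - radius**2
    disc = b * b - c                         # quadratic discriminant
    if disc < 0:
        return []                            # ray misses the sphere
    sqrt_disc = np.sqrt(disc)
    return [t * d for t in (b - sqrt_disc, b + sqrt_disc) if t > 0]

def backproject(K, pixel):
    """Direction of the camera ray through an image point (pinhole model)."""
    u, v = pixel
    return np.linalg.inv(K) @ np.array([u, v, 1.0])

# Hypothetical inputs: camera intrinsics, a known 3D shoulder position,
# the elbow's tracked 2D image point, and a 0.30 m upper-arm length.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
shoulder_3d = np.array([0.1, -0.2, 2.5])
elbow_px = (350.0, 260.0)
upper_arm_len = 0.30

candidates = ray_sphere_intersection(backproject(K, elbow_px),
                                     shoulder_3d, upper_arm_len)
for p in candidates:
    print("candidate elbow position:", p)
# Repeating the same step with the chosen elbow position as the sphere
# center and the forearm length as the radius yields the fingertip.
```

Note that the quadratic generally has two roots, so an extra cue (for instance temporal continuity of the trajectory) would be needed to pick the correct candidate; the abstract does not specify how the described system resolves this ambiguity.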