Classifying and analyzing human motion from video is common in many application areas. Since the motion takes place in 3D space, the 2D projection provided by a video is inherently limiting. The question we investigate in this article is how much information is actually lost in the projection from 3D to 2D, and how this information loss depends on factors such as viewpoint and the tracking errors that will inevitably occur when 2D sequences are analyzed automatically.
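To make the setting concrete, the following minimal Python sketch illustrates the 3D-to-2D projection and the two degradation factors named above. Everything in it is an assumption for illustration, not taken from the article: the joint positions are hypothetical, the camera is an idealized pinhole model, and Gaussian noise on the 2D coordinates stands in for automatic tracking error.

```python
import numpy as np

# Hypothetical 3D joint positions (metres, camera coordinates) for a
# single pose frame; the joint set and values are illustrative only.
joints_3d = np.array([
    [ 0.0, 1.7, 3.0],   # head
    [ 0.0, 1.2, 3.0],   # pelvis
    [-0.4, 1.0, 3.2],   # left hand
    [ 0.4, 1.0, 2.8],   # right hand
])

def project(points, f=1.0):
    """Pinhole projection onto the image plane: (X, Y, Z) -> (fX/Z, fY/Z).
    The depth coordinate Z is collapsed, which is exactly the
    information loss discussed in the text."""
    return f * points[:, :2] / points[:, 2:3]

def rotate_y(points, angle_deg):
    """Rotate the pose about its own vertical axis to simulate viewing
    the same motion from a different camera viewpoint."""
    a = np.radians(angle_deg)
    R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    centroid = points.mean(axis=0)
    return (points - centroid) @ R.T + centroid

rng = np.random.default_rng(seed=0)

frontal = project(joints_3d)                  # original viewpoint
oblique = project(rotate_y(joints_3d, 30.0))  # same pose, 30 deg to the side
tracked = frontal + rng.normal(scale=0.01, size=frontal.shape)  # tracker noise

print("frontal:\n", frontal)
print("oblique:\n", oblique)
print("with simulated tracking error:\n", tracked)
```

The sketch shows that which coordinates survive the projection depends on the viewpoint (depth along the camera axis is discarded, so rotating the camera changes what is lost), and that tracking noise further perturbs the 2D measurements that remain.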