Complex human motion sequences (such as dances) are typically analyzed by segmenting them into shorter motion sequences, called gestures. However, this segmentation process is subjective and varies considerably from one human observer to another. In this paper, we propose an algorithm called Hierarchical Activity Segmentation. This algorithm employs a dynamic hierarchical layered structure to represent the human anatomy and uses low-level motion parameters to characterize motion in the various layers of this hierarchy, which correspond to different segments of the human body. This characterization is used with a naïve Bayesian classifier to derive observer profiles from empirical data. These profiles are then used to predict how observers will segment gestures in other motion sequences. When the predictions were tested on a library of 3D motion capture sequences segmented by five human observers, they were found to be reasonably accurate.
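
The prediction step pairs per-layer motion features with a naïve Bayesian classifier trained on one observer's segmentations. The sketch below illustrates that idea only; it is not the paper's implementation, and the feature names, the Gaussian likelihood model, and the frame-level boundary/non-boundary framing are all assumptions.

```python
# Minimal sketch: per-frame naive Bayes boundary prediction.
# Assumptions (not from the paper): features are per-layer speed and
# acceleration magnitudes; likelihoods are Gaussian; an "observer profile"
# is simply the classifier fitted to that observer's boundary labels.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic stand-in for low-level motion parameters, one row per frame:
# [torso_speed, arm_speed, leg_speed, torso_accel, arm_accel, leg_accel]
X_train = rng.normal(size=(500, 6))
# Synthetic stand-in for one observer's labels: 1 = segment boundary frame.
y_train = (X_train[:, 3:].sum(axis=1) > 1.0).astype(int)

# Derive the observer profile from the empirical (feature, label) pairs.
profile = GaussianNB().fit(X_train, y_train)

# Predict where that observer would place boundaries in a new sequence.
X_new = rng.normal(size=(100, 6))
boundary_prob = profile.predict_proba(X_new)[:, 1]
predicted_boundaries = np.flatnonzero(boundary_prob > 0.5)
print(predicted_boundaries)
```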