Action recognition is an important but challenging problem in video analytics, with a number of solutions proposed to date. However, even if a reliable model for action representation is identified and an accurate metric for comparing actions is developed, it remains unclear how many video frames the representation and comparison should span. In this paper, we develop a method to detect when actions change, i.e., the temporal boundaries of actions, without classifying the actions themselves. We use a silhouette-based framework for action representation and comparison, both centered on dimensionality reduction using covariance descriptors. We use a nonparametric statistical framework to learn the distribution of the distance between covariance descriptors and detect action changes as covariance-distance outliers. Experimental results on ground-truth