In this paper, we present a novel maximum correlation sample subspace method and apply it to human face detection [1] in still images. The algorithm starts by projecting all the training samples onto each sample and selecting the sample with the largest accumulated projection as the first subspace base vector. Once a base vector is selected, all other samples are made orthogonal to it, and these residuals in turn serve as the training samples for learning the next base vector. Each subspace base is obtained in a single pass, so the method is computationally very efficient. Together, these bases form a transform, which we use to derive discriminative features for face detection by training a support vector machine classifier. We test on both the CMU and MIT face detection image data sets. Extensive experiments demonstrate that our results are comparable to those published in the state-of-the-art literature.
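The greedy selection described above (pick the sample with the largest accumulated projection, orthogonalize the rest, repeat) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `max_correlation_bases` and the use of absolute projections in the score are our assumptions.

```python
import numpy as np

def max_correlation_bases(X, k):
    """Sketch of greedy maximum-correlation subspace selection.

    X : (n_samples, dim) matrix of training samples (rows).
    k : number of base vectors to extract.
    Each iteration picks the (normalized) sample onto which all
    samples have the largest accumulated projection, then deflates
    every sample to be orthogonal to the chosen base.
    """
    R = np.asarray(X, dtype=float).copy()
    bases = []
    for _ in range(k):
        # Normalize candidates so projections are onto unit vectors.
        norms = np.linalg.norm(R, axis=1, keepdims=True)
        norms[norms == 0] = 1.0
        U = R / norms
        # Accumulated projection score for each candidate j:
        # sum_i |<x_i, u_j>|  (absolute projections are an assumption).
        scores = np.abs(R @ U.T).sum(axis=0)
        b = U[np.argmax(scores)]
        bases.append(b)
        # Make all samples orthogonal to the selected base (deflation).
        R = R - np.outer(R @ b, b)
    return np.array(bases)
```

Because each base is a unit vector and all remaining samples are deflated against it, the extracted bases are mutually orthonormal, and each base costs only one pass over the data, which is the source of the method's efficiency.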