We study kernel subspace methods that learn low-dimensional subspace representations for classification tasks. In particular, we propose a new method called kernel weighted nonlinear discriminant analysis (KWNDA), which possesses several appealing properties. First, like all kernel methods, it handles nonlinearity in a disciplined manner that is also computationally attractive. Second, by introducing weighting functions into the discriminant criterion, it outperforms existing kernel discriminant analysis methods in classification accuracy. Moreover, it effectively deals with the small sample size problem. We empirically compare different subspace methods with respect to their classification performance on facial images using the simple nearest neighbor rule. Experimental results show that KWNDA substantially outperforms competing linear and nonlinear subspace methods.
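To make the family of methods discussed above concrete, the following is a minimal sketch of a standard two-class kernel Fisher discriminant analysis (the baseline that KWNDA extends), not the proposed KWNDA itself. The RBF kernel, the regularization constant, and all function names are illustrative assumptions; the regularization term is one common way to handle the singular within-class scatter that arises in the small sample size setting.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfda_fit(X, y, gamma=1.0, reg=1e-3):
    """Two-class kernel Fisher discriminant; returns dual coefficients alpha.

    This is a textbook sketch, not the paper's KWNDA: the weighting
    functions of the proposed method are omitted here.
    """
    K = rbf_kernel(X, X, gamma)
    idx1, idx2 = np.where(y == 0)[0], np.where(y == 1)[0]
    # Class means expressed in the kernel-induced feature space.
    m1 = K[:, idx1].mean(axis=1)
    m2 = K[:, idx2].mean(axis=1)
    # Within-class scatter matrix N in dual form.
    N = np.zeros_like(K)
    for idx in (idx1, idx2):
        Kc = K[:, idx]
        n = len(idx)
        N += Kc @ (np.eye(n) - np.full((n, n), 1.0 / n)) @ Kc.T
    # Ridge regularization keeps N invertible when samples are scarce
    # (the small sample size problem mentioned in the abstract).
    alpha = np.linalg.solve(N + reg * np.eye(len(X)), m1 - m2)
    return alpha

def kfda_project(alpha, X_train, X_new, gamma=1.0):
    # Project new points onto the learned one-dimensional discriminant.
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

A typical usage pattern, matching the evaluation protocol in the abstract, is to project all samples onto the discriminant and then classify with a simple nearest-neighbor (or nearest-class-mean) rule in the projected subspace.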