Simple binary patterns have been used successfully to extract feature representations for visual object classification. In this paper, we present a method to learn a set of discriminative tri-value patterns for projecting high-dimensional raw visual inputs into a low-dimensional subspace for tasks such as face detection. Unlike previous methods, which use predefined simple transform bases to generate tens of thousands of features and then apply machine learning to select the most useful ones, our method learns discriminative transform bases directly. Since an analytical solution would be extremely hard to derive, we define an objective function that we optimize using simulated annealing. To reduce the search space, we impose sparseness and smoothness constraints on the transform bases. Experimental results demonstrate that our method is effective and offers a viable alternative for visual object classification.
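To make the approach concrete, the sketch below illustrates one way such a tri-value basis could be learned by simulated annealing. The abstract does not specify the objective function or the exact form of the constraints, so everything here is an assumption for illustration: the Fisher-style class-separation score, the nonzero-count sparseness cap, the total-variation smoothness bound, the single-entry perturbation move, and the geometric cooling schedule are all hypothetical choices, not the paper's actual method.

```python
import numpy as np

# Illustrative sketch (not the paper's method): learn one tri-value basis in
# {-1, 0, +1} that projects images to a 1-D feature, by simulated annealing
# under assumed sparseness and smoothness constraints.

rng = np.random.default_rng(0)

def project(X, basis):
    """Project raw inputs X (n_samples x n_pixels) onto a flattened basis."""
    return X @ basis.ravel()

def objective(X, y, basis):
    """Assumed discriminability score: squared distance between the two
    class means over the pooled within-class variance of the projections."""
    z = project(X, basis)
    z0, z1 = z[y == 0], z[y == 1]
    within = z0.var() + z1.var() + 1e-8
    return (z0.mean() - z1.mean()) ** 2 / within

def sparse_enough(basis, max_active=64):
    """Assumed sparseness constraint: cap the number of nonzero entries."""
    return np.count_nonzero(basis) <= max_active

def smooth_enough(basis, max_tv=80):
    """Assumed smoothness constraint: bound the total variation between
    horizontally and vertically adjacent entries."""
    tv = np.abs(np.diff(basis, axis=0)).sum() + np.abs(np.diff(basis, axis=1)).sum()
    return tv <= max_tv

def perturb(basis):
    """Proposal move: reassign one randomly chosen entry to a tri-value."""
    new = basis.copy()
    new.flat[rng.integers(new.size)] = rng.choice([-1, 0, 1])
    return new

def anneal(X, y, shape=(24, 24), steps=5000, t0=1.0, cooling=0.999):
    basis = np.zeros(shape, dtype=np.int8)  # all-zero start satisfies both constraints
    best, best_score = basis, objective(X, y, basis)
    score, t = best_score, t0
    for _ in range(steps):
        cand = perturb(basis)
        if not (sparse_enough(cand) and smooth_enough(cand)):
            continue  # reject candidates that violate the constraints
        s = objective(X, y, cand)
        # Metropolis acceptance: always take improvements, occasionally take
        # worse candidates to escape local optima while the temperature is high.
        if s > score or rng.random() < np.exp((s - score) / t):
            basis, score = cand, s
            if s > best_score:
                best, best_score = cand, s
        t *= cooling  # geometric cooling schedule
    return best

# Toy usage on synthetic 24x24 "images" with two separable classes.
X = rng.normal(size=(200, 24 * 24))
y = np.concatenate([np.zeros(100, int), np.ones(100, int)])
X[y == 1, :64] += 0.5
learned_basis = anneal(X, y)
```

In practice, the paper's actual objective, constraint parameters, and cooling schedule would replace the placeholder values above; the sketch only shows how the search over tri-value bases can proceed once those are defined.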