We present the notion of Ranking for the evaluation of two-class classifiers. Ranking is based on using the ordering information contained in the output of a scoring model, rather than merely setting a classification threshold. Using this ordering information, we can evaluate the model's performance with regard to complex goal functions, such as the correct identification of the k most likely and/or least likely responders out of a group of potential customers. Ranking also yields increased efficiency in comparing classifiers and selecting the better one, even for the standard goal of achieving a minimal misclassification rate. This feature of Ranking is illustrated by simulation results. We also discuss it theoretically, showing the similarity in structure between the reducible (model-dependent) parts of the Linear Ranking score and the standard Misclassification Rate score, and characterizing the situations in which we expect Linear Ranking to outperform Misclassification Rate.
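As a concrete illustration of evaluating a scorer by its ordering rather than by a fixed threshold, the following minimal sketch (not the paper's Linear Ranking score; the function name, simulated data, and the top-k hit-rate criterion are illustrative assumptions) ranks cases by model score, measures how many of the k top-ranked cases are true responders, and uses that ranking-based criterion to compare two hypothetical scoring models:

```python
import numpy as np

def top_k_capture(scores, labels, k):
    """Fraction of the k highest-scored cases that are true responders.

    scores: model scores (higher = more likely to respond), shape (n,)
    labels: true class labels, 1 = responder, 0 = non-responder
    k:      number of top-ranked cases to inspect
    """
    order = np.argsort(scores)[::-1]   # rank cases from highest to lowest score
    top_k = order[:k]                  # the k cases the model deems most likely to respond
    return labels[top_k].mean()        # hit rate among the selected k

# Illustrative comparison of two scoring models on the same simulated test set
rng = np.random.default_rng(0)
n = 1000
labels = rng.binomial(1, 0.2, size=n)            # 20% responders
scores_a = labels + rng.normal(0, 1.0, size=n)   # better-separated scorer
scores_b = labels + rng.normal(0, 2.0, size=n)   # noisier scorer

k = 100
print("model A top-k capture:", top_k_capture(scores_a, labels, k))
print("model B top-k capture:", top_k_capture(scores_b, labels, k))
```

Note that this criterion depends only on the ordering induced by the scores, so any monotone transformation of a model's output leaves its value unchanged, which is the defining feature of ranking-based evaluation described above.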