This paper describes a viewpoint-invariant, learning-based method for counting people in crowds from a single camera. Our method uses feature normalization to handle perspective projection and differing camera orientations. The training features comprise edge-orientation and blob-size histograms derived from edge detection and background subtraction. A density map that measures the relative size of individuals and a global scale capturing camera orientation are estimated and used for feature normalization. The relationship between the feature histograms and the number of pedestrians in the crowd is learned from labeled training data. Experimental results from different sites with different camera orientations demonstrate the performance and the potential of our method.
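As a rough illustration of the idea (not the paper's exact pipeline), the following sketch assumes that after perspective normalization the total density-weighted blob mass in a frame is approximately proportional to the pedestrian count, and learns that proportionality constant from labeled training frames; all names and the per-person mass distribution are hypothetical.

```python
import random

random.seed(0)

# Hypothetical assumption: each pedestrian contributes a normalized blob
# mass of roughly 120 units (after density-map weighting compensates for
# perspective foreshortening), with some variation.
def frame_mass(count):
    return sum(random.gauss(120.0, 10.0) for _ in range(count))

# Synthetic labeled training data: (normalized mass, true count) pairs,
# standing in for the paper's labeled training frames.
train = [(frame_mass(count), count) for count in range(1, 31)]

# Learn the mass-to-count scale by least squares through the origin:
# k = sum(m * c) / sum(m * m).
num = sum(m * c for m, c in train)
den = sum(m * m for m, _ in train)
k = num / den

# Estimate the count for a new frame from its normalized blob mass.
new_mass = frame_mass(12)
estimate = k * new_mass
print(round(estimate))
```

The actual method learns a richer mapping from full edge-orientation and blob-size histograms to counts; this one-parameter regression only conveys why normalized features make a single learned relationship transfer across viewpoints.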