Human matching across different fields of view is a difficult problem in intelligent video surveillance, and fusing multiple features has become a powerful tool for solving it. To guide the fusion scheme, the matching performance of these features must first be evaluated. In this paper, four typical features are chosen for evaluation: the Color Histogram, UV Chromaticity, the Major Color Spectrum Histogram, and the Scale-Invariant Feature Transform (SIFT). A large quantity of video data is collected to test their overall accuracy, robustness, and real-time applicability. Robustness is measured under illumination changes, Gaussian and salt noise, foreground errors, resolution changes, and differences in camera angle. The experimental results show that the four features perform quite differently under these conditions, providing an important reference for the design of feature fusion methods.