An open problem in multiobjective optimization under the Pareto optimality criterion is how to evaluate the performance of the different evolutionary algorithms that solve multiobjective problems. Since the output of these algorithms is a non-dominated set (NS), the problem reduces to deciding which NS is better than the others based on their projections onto the objective space. In this work we propose a new performance measure for the evaluation of non-dominated sets that ranks a collection of NSs according to their convergence and dispersion; its rankings of the NSs agree with intuition. We also introduce a benchmark of test cases for evaluating performance measures that covers several topologies of the Pareto front.

Categories and Subject Descriptors
I.2.m [Artificial Intelligence]: Miscellaneous

General Terms
Measurement

Keywords
Multiobjective optimization, performance measures, Pareto optimality
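
For readers unfamiliar with the notion, the following is a minimal sketch of how a non-dominated set can be extracted from a collection of objective vectors, assuming minimization on every objective; the function names and data layout are illustrative and are not part of the proposed measure.

```python
from typing import List, Sequence


def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))


def non_dominated_set(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Return the points that are not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]


# Example with three objective vectors of a two-objective minimization problem:
# (3.0, 3.5) is dominated by (2.0, 3.0), so it is excluded from the NS.
pts = [(1.0, 4.0), (2.0, 3.0), (3.0, 3.5)]
print(non_dominated_set(pts))  # [(1.0, 4.0), (2.0, 3.0)]
```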