The accuracy of collaborative filtering recommender systems largely depends on two factors: the quality of the recommendation algorithm and the nature of the available item ratings. In general, the more ratings are elicited from the users, the more effective the recommendations are. However, not all ratings are equally useful; hence, to minimize the users’ rating effort, only some of them should be requested or acquired. In this paper we consider several rating elicitation strategies and evaluate their system utility, i.e., how the overall behavior of the system changes when the newly acquired ratings are added. We simulate the users’ limited knowledge, i.e., the fact that users cannot satisfy all of the system’s rating requests, and we compare the strategies’ ability to request ratings for items that the user has actually experienced. We show that different strategies improve different aspects of the recommendation quality with respect to several metrics.
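As an illustration of the simulation setup described above, the following sketch contrasts two hypothetical elicitation strategies (random and popularity-based) under the limited-knowledge assumption: a rating request succeeds only if the user has actually experienced the requested item. All quantities here (number of users, items, request budget, experience sets) are invented for the example and do not correspond to the paper's actual experimental design.

```python
import random

random.seed(0)

NUM_USERS, NUM_ITEMS, BUDGET = 50, 100, 10

# Hypothetical ground truth: each user has experienced a random
# subset of 20 items; the system does not know these sets.
experienced = {u: set(random.sample(range(NUM_ITEMS), 20))
               for u in range(NUM_USERS)}

# Item popularity = number of users who experienced each item.
popularity = [sum(i in experienced[u] for u in range(NUM_USERS))
              for i in range(NUM_ITEMS)]

def elicit(strategy):
    """Ask each user to rate BUDGET items; a request is satisfied
    only if the user experienced the item (limited knowledge)."""
    acquired = 0
    for u in range(NUM_USERS):
        if strategy == "random":
            requests = random.sample(range(NUM_ITEMS), BUDGET)
        else:  # "popularity": request the most widely experienced items
            requests = sorted(range(NUM_ITEMS),
                              key=lambda i: -popularity[i])[:BUDGET]
        acquired += sum(i in experienced[u] for i in requests)
    return acquired

print("ratings acquired (random):    ", elicit("random"))
print("ratings acquired (popularity):", elicit("popularity"))
```

In this toy setting the popularity strategy acquires more ratings, since popular items are the ones most users can rate; in the paper, strategies are instead compared on how the acquired ratings change the system-wide recommendation quality, not merely on how many ratings they collect.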