This paper surveys different approaches to the evaluation of web search summaries and describes experiments conducted at Yandex. We hypothesize that the complex task of snippet evaluation is best addressed with a range of complementary methods. Automating evaluation on the basis of available manual assessments and clickthrough analysis is a promising direction.