Automated evaluation is crucial for automated text summarization, as it is for any language technology. In this paper we present a Generative Modeling framework for evaluating the content of summaries. We use two simple alternatives for identifying signature-terms from the reference summaries, based on model consistency and part-of-speech (POS) features. Using the Generative Modeling approach, we capture the sentence-level presence of these signature-terms in peer summaries. We show that parts of speech such as nouns and verbs provide a simple and robust method of signature-term identification for the Generative Modeling approach. We also show that having a large set of ‘significant signature-terms’ is better than a small set of ‘strong signature-terms’ for our approach. Our results show that the generative modeling approach is indeed promising, providing high correlations with manual evaluations, and that further investigation of signature-te...
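To make the core idea concrete, the following is a minimal, illustrative sketch (not the authors' actual model) of the two-step pipeline the abstract describes: a set of signature-terms is assumed to have been extracted from reference summaries (e.g. by keeping only nouns and verbs), and each sentence of a peer summary is then scored for how many signature-terms it contains before the sentence scores are aggregated into one content score. The function names and the coverage-based scoring are hypothetical stand-ins for the generative model.

```python
def sentence_term_presence(signature_terms, peer_sentences):
    """For each peer-summary sentence, compute the fraction of
    signature-terms it contains -- a crude stand-in for the
    sentence-level presence a generative model would capture."""
    sig = {t.lower() for t in signature_terms}  # assumed POS-filtered terms
    scores = []
    for sent in peer_sentences:
        tokens = {w.lower().strip(".,;:!?") for w in sent.split()}
        scores.append(len(sig & tokens) / len(sig) if sig else 0.0)
    return scores


def summary_content_score(signature_terms, peer_sentences):
    """Aggregate sentence-level presence into a single content score."""
    scores = sentence_term_presence(signature_terms, peer_sentences)
    return sum(scores) / len(scores) if scores else 0.0


# Hypothetical example: two signature-terms, a two-sentence peer summary.
terms = ["model", "summary"]
peer = ["The model scores each summary.", "The weather was pleasant."]
print(summary_content_score(terms, peer))  # → 0.5
```

In the paper's actual framework the per-sentence evidence is combined generatively rather than by simple averaging, but the sketch shows the shape of the computation: term identification first, then sentence-level presence, then an overall score that can be correlated with manual evaluations.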