This paper describes an original method for evaluating peer review in online systems by calculating the helpfulness of an individual reviewer's response. We focus on the development of specific, machine-scoreable indicators of quality in online peer review.

Categories and Subject Descriptors
K.4.3 [Organizational Impacts]: Computer-supported collaborative work

General Terms
Algorithms, Management, Measurement, Design, Human Factors, Theory

Keywords
Review, Education, Information Architecture, Assessment, Reputation, Workflow, Learning, Peer Review, Helpfulness