In this paper, we investigate how deviations in evaluation activities may reveal bias on the part of reviewers and controversy on the part of evaluated objects. We focus on a `data-centric approach', where the evaluation data is assumed to represent the `ground truth'. Standard statistical approaches take evaluation scores and deviations at face value. We argue that attention should be paid to the subjectivity of evaluation, judging an evaluation score not only on `what is being said' (deviation), but also on `who says it' (reviewer) and `whom it is said about' (object). Furthermore, we observe that bias and controversy are mutually dependent: a given deviation indicates more bias when it concerns a less controversial object, and, conversely, more controversy when it comes from a less biased reviewer. To address this mutual dependency, we propose a reinforcement model that identifies bias and controversy simultaneously. We test our model on real-life data to verify its applicability.
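The mutual dependency lends itself to a fixed-point computation in the spirit of HITS: reviewer bias and object controversy are updated against each other until they stabilize. The sketch below is a minimal illustration of that idea only, not the paper's actual formulation; the update rules, the `reinforce` helper, and the assumption that each deviation is pre-normalized to [0, 1] are all choices made for this example.

```python
from collections import defaultdict

def reinforce(evaluations, iters=50, tol=1e-6):
    """Illustrative fixed-point iteration for mutually dependent
    bias (per reviewer) and controversy (per object).

    evaluations: list of (reviewer, obj, deviation) triples, where
    deviation in [0, 1] is e.g. the normalized gap between a score
    and the object's mean score.
    """
    by_reviewer = defaultdict(list)
    by_object = defaultdict(list)
    for r, o, d in evaluations:
        by_reviewer[r].append((o, d))
        by_object[o].append((r, d))

    bias = {r: 0.0 for r in by_reviewer}
    contro = {o: 0.0 for o in by_object}

    for _ in range(iters):
        # Deviation counts more toward bias when the object is uncontroversial.
        new_bias = {
            r: sum(d * (1.0 - contro[o]) for o, d in evals) / len(evals)
            for r, evals in by_reviewer.items()
        }
        # Deviation counts more toward controversy when the reviewer is unbiased.
        new_contro = {
            o: sum(d * (1.0 - bias[r]) for r, d in evals) / len(evals)
            for o, evals in by_object.items()
        }
        # Stop once neither score set moves by more than the tolerance.
        delta = max(
            max(abs(new_bias[r] - bias[r]) for r in bias),
            max(abs(new_contro[o] - contro[o]) for o in contro),
        )
        bias, contro = new_bias, new_contro
        if delta < tol:
            break
    return bias, contro

# Hypothetical toy data: r1 deviates alone on o1 (consensual object),
# while everyone deviates on o2 (controversial object).
evals = [
    ("r1", "o1", 0.9), ("r2", "o1", 0.1), ("r3", "o1", 0.1),
    ("r1", "o2", 0.8), ("r2", "o2", 0.8), ("r3", "o2", 0.8),
]
bias, contro = reinforce(evals)
```

On this toy input the iteration behaves as the abstract describes: r1's large deviation on the consensual object o1 is attributed to bias, while the shared deviations on o2 are attributed to controversy, so o2 ends up with high controversy without inflating anyone's bias.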