In this paper we introduce a new ranking algorithm, called Collaborative Judgement (CJ), that takes into account the peer opinions of agents and/or humans about objects (e.g. products, exams, papers), as well as peer judgements over those opinions. Previous work has not combined these two types of information to produce object rankings. We apply CJ to the use case of scientific paper assessment and validate it on simulated data. The results show that the rankings produced by our algorithm improve upon current scientific paper ranking practice, which is based on averaging opinions weighted by the reviewers' self-assessments.
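For concreteness, the baseline practice referred to above can be written as a self-assessment-weighted mean of opinions; the notation used here is illustrative and not taken from the paper itself. Writing $x_{r,o}$ for the opinion (score) that reviewer $r$ gives to object $o$, $c_r$ for reviewer $r$'s self-assessed confidence, and $R_o$ for the set of reviewers of object $o$, the baseline score is
\[
  \hat{s}_o \;=\; \frac{\sum_{r \in R_o} c_r \, x_{r,o}}{\sum_{r \in R_o} c_r}.
\]
CJ departs from this baseline by additionally exploiting peer judgements over the opinions $x_{r,o}$ themselves.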