Reputation systems aggregate mutual feedback from interacting peers into a "reputation" metric for each participant. This metric is then made available to prospective service "requesters" (clients) for evaluating and selecting potential service "providers" (servers). For a reputation framework to be effective, both the individual feedback and the reputation storage mechanisms must be trusted and able to handle faulty participant behavior such as "ballot stuffing" (unearned positive feedback) and "bad-mouthing" (incorrect negative feedback). While in human-driven environments (e.g., eBay) these issues are handled by hired personnel on a case-by-case basis, such ad-hoc handling is unlikely to be acceptable in automated environments. Stronger, secure trust mechanisms are required. In this paper we propose a solution for securing reputation mechanisms in computing markets and grids whe...