Can we trust without any reliable truth information? Most trust architectures work in a similar way: a trustor makes observations, rates the trustee accordingly, and shares recommendations with its peers. When facing a new case, the trustor consults its trust table and uses recommendations given by trustworthy peers to decide whether to undertake a given action. But what if the observations used to update the trust tables are wrong? How can we deal with what we call the "uncertainty of the truth"? This paper presents how people who publish and remove virtual tags can establish trust relations among themselves. A simulator, as well as a concrete and widely deployed application, has been used to validate our model. The results are encouraging overall, but specific scenarios also brought out some weaknesses.
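
To make the generic trust architecture described above concrete, the following is a minimal illustrative sketch (not the paper's actual model): all class names, the linear update rule, and the trust-weighted aggregation are assumptions chosen for clarity.

```python
# Hypothetical sketch of a generic trust architecture: observations update a
# local trust table, and recommendations are aggregated weighted by how much
# the trustor trusts each recommender. Names and update rule are illustrative.

class TrustTable:
    """Keeps a trust value in [0, 1] per trustee, updated from observations."""

    def __init__(self, initial_trust=0.5, learning_rate=0.1):
        self.trust = {}                  # trustee -> trust value in [0, 1]
        self.initial_trust = initial_trust
        self.learning_rate = learning_rate

    def observe(self, trustee, outcome_good):
        """Move trust toward 1.0 on a good observation, toward 0.0 on a bad one.

        Note: if the observation itself is wrong (the "uncertainty of the
        truth"), this update reinforces the error, which is exactly the
        problem the paper addresses.
        """
        current = self.trust.get(trustee, self.initial_trust)
        target = 1.0 if outcome_good else 0.0
        self.trust[trustee] = current + self.learning_rate * (target - current)

    def recommend(self, trustee, peer_tables):
        """Combine peers' opinions about a trustee, weighted by our trust in each peer."""
        weighted, total = 0.0, 0.0
        for peer, table in peer_tables.items():
            w = self.trust.get(peer, self.initial_trust)
            if trustee in table.trust:
                weighted += w * table.trust[trustee]
                total += w
        return weighted / total if total else self.initial_trust
```

In this sketch a trustor facing a new trustee with no direct history falls back on peer recommendations, discounted by how trustworthy each peer has proven so far; a wrong observation therefore propagates through the network, which motivates the paper's study of trust under uncertain truth.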