In open settings, the participants are autonomous and there is no central authority to ensure the felicity of their interactions. When agents interact in such settings, each relies upon being able to model the trustworthiness of the agents with whom it interacts. Fundamentally, such models must consider the past behavior of the other parties in order to predict their future behavior. Further, it is sensible for the agents to share information via referrals to trustworthy agents. Much progress has recently been made on probabilistic trust models, including those that support aggregating information from multiple sources. However, current models do not support trust updates, which are instead handled in an ad hoc manner. This paper proposes a trust representation that combines probability and certainty, where certainty is defined as a function of a probability-certainty density function. Further, it offers a trust update mechanism to estimate the trustworthiness of referrers. This paper descr...
Chung-Wei Hang, Yonghong Wang, Munindar P. Singh
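The abstract does not spell out how certainty is computed from the probability-certainty density function. The sketch below is a minimal, illustrative reading, assuming the Wang-Singh style formulation in which r positive and s negative experiences induce a Beta-like density f over the outcome probability, and certainty is the normalized deviation of f from the uniform density, c = (1/2) * integral over [0,1] of |f(p) - 1| dp. The function names, the Beta parameterization, and the numerical integration are assumptions for illustration, not the authors' implementation.

# Illustrative sketch of a probability-certainty trust representation.
# Assumption: certainty c = 1/2 * integral_0^1 |f(p) - 1| dp, where f is the
# density of the outcome probability induced by r positive and s negative
# experiences (here taken as Beta(r+1, s+1)). Names and quadrature are hypothetical.
import numpy as np
from scipy.stats import beta
from scipy.integrate import trapezoid

def certainty(r: float, s: float, grid: int = 10_001) -> float:
    """Certainty in the evidence <r, s>: 0 with no evidence, approaching 1 as evidence grows."""
    p = np.linspace(0.0, 1.0, grid)
    f = beta.pdf(p, r + 1, s + 1)                 # density induced by the evidence
    return 0.5 * trapezoid(np.abs(f - 1.0), p)    # deviation from the uniform density

def trust(r: float, s: float) -> tuple[float, float]:
    """Combine an outcome probability estimate with certainty, per the abstract's representation."""
    alpha = (r + 1) / (r + s + 2)                 # expected probability of a positive outcome
    return alpha, certainty(r, s)

if __name__ == "__main__":
    print(trust(0, 0))   # no evidence: alpha = 0.5, certainty = 0.0
    print(trust(8, 2))   # mostly positive evidence: higher alpha, higher certainty

Under this reading, two agents with the same success ratio but different amounts of evidence receive the same probability estimate yet different certainty, which is what allows the proposed update mechanism to weight referrals by how well supported they are.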