Because of privacy concerns, agents may not want to reveal information that could be of use in problem solving. In such situations there are potentially important tradeoffs between maintaining privacy and enhancing search efficiency. In this work we show how quantitative assessments of privacy loss can be made within the framework of distributed constraint satisfaction. We also show how agents can make inferences about other agents' problems or subproblems from communications that carry no explicit private information. This can be done using constraint-based reasoning in a framework consisting of an ordinary CSP, which is only partly known, and a system of "shadow CSPs" that represent various forms of possibilistic knowledge. This kind of reasoning, in combination with arc consistency processing, can speed up search under conditions of limited communication, while at the same time potentially undermining privacy. These effects are demonstrated in a simplified meeting scheduling problem.
Richard J. Wallace, Eugene C. Freuder
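
The kind of constraint propagation the abstract refers to can be illustrated with a minimal, self-contained sketch (not taken from the paper): an AC-3 style arc consistency pass over a tiny, partly known meeting-scheduling CSP. The function names (`revise`, `ac3`) and the example variables, domains, and constraints are illustrative assumptions; the point is only that pruning one agent's view of another agent's domain is itself an inference that can erode privacy.

```python
# Illustrative sketch only: AC-3 style propagation on a partly known CSP.
# Variable/domain values and function names are assumptions for this example.
from collections import deque

def revise(domains, constraints, x, y):
    """Remove values of x with no support in y; return True if anything was pruned."""
    allowed = constraints[(x, y)]
    pruned = False
    for vx in list(domains[x]):
        if not any((vx, vy) in allowed for vy in domains[y]):
            domains[x].remove(vx)
            pruned = True
    return pruned

def ac3(domains, constraints):
    """Propagate until every arc is consistent; return False on a domain wipe-out."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        if revise(domains, constraints, x, y):
            if not domains[x]:
                return False
            # Re-examine arcs pointing into the pruned variable.
            queue.extend((z, w) for (z, w) in constraints if w == x and z != y)
    return True

# Two agents scheduling a meeting: each variable is an agent's proposed time slot.
# Agent B's domain here stands for A's partial ("shadow") view of B's private problem.
domains = {"A": {9, 10, 11, 14}, "B": {10, 11, 15}}
constraints = {
    ("A", "B"): {(t, t) for t in range(8, 18)},  # both must pick the same slot
    ("B", "A"): {(t, t) for t in range(8, 18)},
}

ac3(domains, constraints)
print(domains)  # pruning narrows A's view of B's feasible slots: an implicit privacy loss
```

Running the sketch prunes both domains to {10, 11}: the search space shrinks (efficiency), but each agent has also learned which of the other's privately held slots remain feasible (privacy loss), which is the tradeoff the abstract describes.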