The problem of providing tools to support legally valid negotiations between agents is becoming increasingly critical. Agents are expected to perform crucial tasks autonomously; however, they cannot rely on an extensive body of law, since a full legal corpus for the computer world has yet to be developed. In this work we present an innovative model of interaction between agents that increases the level of trust in negotiation-intensive MASs. In particular, we address some common problems related to trust and security in real-world negotiations and outline a set of abstractions that can be used to increase the level of trust we can expect from agreements with third parties.

Categories and Subject Descriptors
I.2 [Artificial Intelligence]: Multiagent Systems

General Terms
Legal Aspects, Security

Keywords
Security, Privacy, Trust, Multiagent Systems