It is often the case that agents within a system have distinct types of knowledge. Furthermore, whilst common goals may be agreed upon, the individual agents' representations of the world in which they operate may not always match. In this paper we provide a framework that allows agents with different expertise to make individual contributions to an overall reasoning process, in order to decide how to act to achieve some goal. Our framework is based on a model of argumentation that embeds inquiry dialogues within a process of practical reasoning. We combine two different approaches to argumentative reasoning and show not only how they can function together within a formal framework to provide richer interactions, but also how this facilitates reasoning across distributed agents, each of whom may have a different perspective on the scenario they operate in.

Categories and Subject Descriptors

I.2.11 [Distributed Artificial Intelligence]: [M...