Most Artificial Intelligence programs lack generality because they reason with a single domain theory that is tailored for a specific task and embodies a host of implicit assumptions. Contexts have been proposed as an effective solution to this problem, providing a mechanism for explicitly stating the assumptions underlying a domain theory. In addition, contexts can be used to focus reasoning, to represent mutually incoherent domain theories, to lift axioms from one context into another, and to transcend a context. In this paper we develop a simple propositional logic of context suitable for representing and reasoning with multiple domain theories. We introduce contexts as modal operators and allow different contexts to have different vocabularies. We analyze the computational properties of the logic, which provide the central computational justification for the use of contexts. We show how the logic effectively handles the common uses of contexts. We also discuss the extens...
P. Pandurang Nayak
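
For concreteness, a hedged sketch of the modal-operator reading of contexts, written with the ist(·,·) notation common in the logic-of-context literature (the paper's own symbols and axioms may differ):

% Illustrative sketch only; ist(.,.) is the standard "is true in" operator
% from the logic-of-context literature, not necessarily this paper's notation.
\[
  \mathrm{ist}(\kappa, \varphi) \quad\text{read as ``formula } \varphi \text{ holds in context } \kappa\text{.''}
\]
% A lifting axiom, asserted in an outer context, then has the general shape
% (hypothetical schema): a fact holding in one context yields a
% (possibly translated) fact in another.
\[
  \mathrm{ist}(\kappa_1, \varphi) \;\rightarrow\; \mathrm{ist}(\kappa_2, \psi)
\]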