This paper discusses conditions under which some of the “higher level” mental concepts applicable to human beings might also be applicable to artificial agents. The key idea is that mental concepts (e.g. “believes”, “desires”, “intends”, “mood”, “emotion”, etc.) are grounded in assumptions about information processing architectures, and not merely in Newell’s knowledge-level concepts, nor in concepts based solely on Dennett’s “intentional stance.”

1 Describing synthetic agents

McCarthy [McC79, McC95] gives reasons why we shall need to describe intelligent robots in mentalistic terms, and why such a robot will need some degree of self-consciousness, and he has made suggestions regarding the notation that we and the robot might use to describe its states. This paper extends that work by focusing on the underlying “high level” architectures required to justify ascriptions of mentality. Which concepts are applicable to a system will depend on the architecture...