The computational grounding problem, the gap between an agent's mental models and its computational model, is well known within the agent research community. For years, its principal cause has been attributed to the obscure ontological status of those mental models, and the problem continues to slow agent-oriented development. In this work, we propose an alternative way of modelling intelligent agents, based on the concepts of observation and expectation, that avoids this problem.