As Virtual Environment applications become more complex, there is a growing need to interpret user interaction in terms of high-level concepts. In this paper, we investigate the relations between conceptual representations of actions and their physical simulation in virtual worlds. We have developed a model inspired by Natural Language Processing research on the linguistic interpretation of dynamic scenes. Our experiments are based on real-time animation software that has been enhanced with a symbolic information-processing layer originally developed for NLP-based animation. We report the implementation of a high-level interpretation module that recognises complex actions from low-level physical events in the virtual world, and we discuss its performance as well as directions for further development.