Abstract. This paper introduces a semantic representation for virtual prototyping in interactive virtual construction applications. The representation captures semantic information about dynamic constraints, which define objects’ modification and construction behavior, as well as knowledge structures that support multimodal interaction using speech and gesture. It is expressed as XML-based markup for virtual building parts. The semantic information is processed at runtime in two ways: constraint graphs are mapped to a generalized data-flow network and scene graph, and interaction knowledge is accessed and matched during multimodal analysis.
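For illustration, a minimal sketch of what such markup for a building part could look like, combining a dynamic constraint with interaction knowledge; all element and attribute names here are hypothetical assumptions for the sake of example, not the schema defined in the paper:

  <part id="beam-01" type="beam">
    <!-- Dynamic constraint (assumed syntax): restricts where the part may attach -->
    <constraint kind="attachment" target="wall" site="top-edge"/>
    <!-- Interaction knowledge (assumed syntax): terms and gestures matched during multimodal analysis -->
    <interaction>
      <lexicon words="beam girder"/>
      <gesture type="pointing"/>
    </interaction>
  </part>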