Dance video is an important type of narrative video with semantically rich content. This paper proposes a new meta model, the Dance Video Content Model (DVCM), to represent the expressive semantics of dance videos at multiple levels of granularity. The DVCM is designed using concepts such as video, shot, segment, event, and object, which are components of the MPEG-7 MDS. The paper also introduces a new relationship type, called Temporal Semantic Relationship, to infer semantic relationships between dance video objects. An inverted file based index is created to reduce the search time of dance queries. The effectiveness of containment queries is evaluated using precision and recall.
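As an illustration of the indexing idea mentioned in the abstract, the following is a minimal Python sketch of an inverted file index that maps semantic labels of dance video objects and events to the segments annotated with them, so a containment query can be answered by intersecting posting lists rather than scanning whole videos. The class and method names (`InvertedFileIndex`, `add_annotation`, `query`) and the sample labels are illustrative assumptions, not the paper's implementation.

```python
# Sketch of an inverted file index over dance video annotations.
# All names and labels here are hypothetical, not taken from the paper.
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List, Set


@dataclass(frozen=True)
class Segment:
    video_id: str   # identifier of the dance video
    start: float    # segment start time (seconds)
    end: float      # segment end time (seconds)


class InvertedFileIndex:
    """Maps semantic labels (e.g. 'spin', 'dancer') to the segments
    annotated with them, avoiding a full scan for containment queries."""

    def __init__(self) -> None:
        self._postings: Dict[str, Set[Segment]] = defaultdict(set)

    def add_annotation(self, label: str, segment: Segment) -> None:
        # One posting list per semantic label.
        self._postings[label.lower()].add(segment)

    def query(self, labels: List[str]) -> Set[Segment]:
        # Containment query: segments annotated with *all* requested labels.
        lists = [self._postings.get(l.lower(), set()) for l in labels]
        if not lists:
            return set()
        return set.intersection(*lists)


if __name__ == "__main__":
    index = InvertedFileIndex()
    s1 = Segment("dance_video_01", 12.0, 18.5)
    s2 = Segment("dance_video_01", 40.0, 47.0)
    index.add_annotation("spin", s1)
    index.add_annotation("dancer", s1)
    index.add_annotation("dancer", s2)
    print(index.query(["dancer", "spin"]))   # -> {s1}
```

Precision and recall for such containment queries would then be computed by comparing the returned segment sets against a manually annotated ground truth, as the abstract indicates.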