In extended video sequences, individual frames are grouped into shots, each defined as a sequence taken by a single camera, and related shots are grouped into scenes, each defined by a single dramatic event captured by a small number of related cameras. This hierarchical structure is deliberately constructed, dictated by the limitations and preferences of the human visual and memory systems. We present three novel high-level segmentation results derived from these considerations, some of which are analogous to those involved in the perception of the structure of music. First and primarily, we derive and demonstrate a method for measuring probable scene boundaries by calculating a short-term-memory-based model of shot-to-shot "coherence". The detection of local minima in this continuous measure permits robust and flexible segmentation of the video into scenes, without the need to first aggregate shots into clusters. Second, and independently of the first, we the...
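The coherence-based segmentation described above can be illustrated with a minimal sketch. It assumes that pairwise shot similarities are already available (e.g., from color-histogram comparison); the function names, the uniform memory window, and the simple local-minimum test are illustrative placeholders, not the paper's exact memory model.

```python
import numpy as np

def coherence_curve(similarity, window=5):
    """Shot-to-shot coherence at each shot boundary.

    similarity: (n, n) matrix of pairwise shot similarities.
    window: hypothetical short-term-memory span, in shots.
    Coherence at a boundary sums the similarities between shots
    still "in memory" before the boundary and shots after it.
    """
    n = similarity.shape[0]
    coherence = np.zeros(n - 1)
    for b in range(1, n):  # boundary between shot b-1 and shot b
        total = 0.0
        for i in range(max(0, b - window), b):      # recent shots
            for j in range(b, min(n, b + window)):  # upcoming shots
                total += similarity[i, j]
        coherence[b - 1] = total
    return coherence

def scene_boundaries(coherence):
    """Indices of local minima in the coherence curve: candidate scene breaks."""
    return [b for b in range(1, len(coherence) - 1)
            if coherence[b] < coherence[b - 1]
            and coherence[b] < coherence[b + 1]]
```

On a toy sequence of six shots forming two visually homogeneous groups, the coherence curve dips at the group break, and the local-minimum test recovers that boundary directly, with no prior clustering of shots.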
John R. Kender, Boon-Lock Yeo