This paper presents our approaches and results for the four TRECVID 2008 tasks in which we participated: high-level feature extraction, automatic video search, video copy detection, and rushes summarization. For high-level feature extraction, we submitted our results jointly with Columbia University. The four runs submitted through CityU explore context-based concept fusion by modeling inter-concept relationships. These relationships are modeled not through semantic reasoning, but by observing how concepts correlate with each other, either directly or indirectly, in the LSCOM common annotation [1]. An observability space (OS) [2] is thus built on top of LSCOM [1] and VIREO-374 [3] for performing concept fusion. Since 19 of the 20 concepts evaluated this year appear in VIREO-374, we apply OS to re-rank the results of both the old models from VIREO-374 and the new models from a joint baseline submission with Columbia, as illustrated by the sketch below.

- A_CityU-HK1: re-rank A_CU-run5 using OS
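To make the fusion idea concrete, here is a minimal sketch of correlation-based re-ranking, assuming a simple linear fusion rule; the actual observability-space construction in [2] may differ, and the names `annotations`, `scores`, `fuse_scores`, and the mixing weight `alpha` are hypothetical illustrations rather than our implementation.

```python
# A minimal sketch of correlation-based concept fusion. `annotations` is a
# hypothetical {concept: 0/1 vector} mapping over LSCOM-style ground-truth
# shots; `scores` holds raw per-shot detector outputs for each concept.
import numpy as np

def concept_correlations(annotations: dict[str, np.ndarray]) -> tuple[list[str], np.ndarray]:
    """Estimate pairwise concept correlations from binary co-annotations."""
    names = sorted(annotations)
    labels = np.stack([annotations[c].astype(float) for c in names])  # (C, N)
    corr = np.corrcoef(labels)       # Pearson correlation across shots
    np.fill_diagonal(corr, 0.0)      # a concept should not reinforce itself
    return names, corr

def fuse_scores(scores: dict[str, np.ndarray], target: str,
                names: list[str], corr: np.ndarray,
                alpha: float = 0.7) -> np.ndarray:
    """Re-rank `target` by mixing its scores with correlation-weighted peers."""
    idx = names.index(target)
    weights = np.clip(corr[idx], 0.0, None)   # keep positively correlated peers
    peer = sum(w * scores[c] for c, w in zip(names, weights) if w > 0)
    peer /= max(weights.sum(), 1e-8)          # normalize the contextual evidence
    return alpha * scores[target] + (1 - alpha) * peer
```

In this sketch, `alpha` trades off the detector's own confidence against contextual evidence from correlated concepts; applied to baseline scores such as those of A_CU-run5, the fused scores would then define the re-ranked run.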