This paper describes our first participation in TRECVID. We took part in the search task and submitted two interactive runs. Both runs are of Type C and use no ASR/MT output: run I_C_2_UQ1_1 uses 64-dimensional visual features for retrieval, while run I_C_2_UQ2_2 uses 32-dimensional visual features. Based on the evaluation results on the benchmark video data, we observed no significant difference between the two runs in terms of the evaluation measures. Meanwhile, visual information alone appears insufficient to capture the high-level semantic similarity required by this year's search topics, so more sophisticated approaches, such as multi-modality fusion, are needed to improve system performance.
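The paper does not state here how the visual features are matched against a query; purely as a sketch, the following assumes retrieval is a similarity ranking over fixed-length per-shot feature vectors (cosine similarity is our assumption, not the authors' stated method), with dimensionality d = 64 or 32 as in the two runs.

```python
import numpy as np

def rank_shots_by_visual_similarity(query_feat: np.ndarray,
                                    shot_feats: np.ndarray) -> np.ndarray:
    """Rank video shots by cosine similarity to a query's visual feature.

    query_feat : (d,) feature vector, e.g. d = 64 or 32 as in the two runs.
    shot_feats : (n, d) matrix of per-shot visual features.
    Returns shot indices sorted from most to least similar.
    """
    # Normalize to unit length so the dot product equals cosine similarity.
    q = query_feat / np.linalg.norm(query_feat)
    s = shot_feats / np.linalg.norm(shot_feats, axis=1, keepdims=True)
    sims = s @ q
    return np.argsort(-sims)
```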