A novel framework for background music identification is proposed in this paper. Given an audio signal in which background music is mixed with speech and noise, we identify the music component by matching it against a source music database. Conventional methods that feed the entire audio signal into the identification stage suffer in both efficiency and accuracy. In our framework, the audio content is first filtered through speech center cancellation and noise removal to extract clean music segments. To identify these music segments, we use a compact feature representation and an efficient similarity measurement based on min-hash. Experimental results on the RWC music database indicate that this is a promising direction.
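As a rough illustration of the min-hash based similarity measurement mentioned above, the following is a minimal sketch, not the paper's actual implementation: it assumes that each music segment has already been reduced to a set of quantized feature codewords, and that the number of hash functions and the hashing scheme are free illustrative choices.

```python
import random

def minhash_signature(items, num_hashes=64, seed=0):
    """Compute a min-hash signature for a set of hashable items
    (e.g., quantized audio feature codewords). Illustrative sketch."""
    rng = random.Random(seed)
    p = (1 << 61) - 1  # large Mersenne prime for the hash family h(x) = (a*x + b) mod p
    params = [(rng.randrange(1, p), rng.randrange(0, p)) for _ in range(num_hashes)]
    # Each signature slot keeps the minimum hash value over the item set.
    return [min((a * hash(x) + b) % p for x in items) for a, b in params]

def minhash_similarity(sig_a, sig_b):
    """Estimate Jaccard similarity as the fraction of matching signature slots."""
    matches = sum(1 for s_a, s_b in zip(sig_a, sig_b) if s_a == s_b)
    return matches / len(sig_a)

# Toy usage: hypothetical codeword sets for a query segment and a reference track.
query_codewords = {3, 17, 42, 58, 91, 120}
reference_codewords = {3, 17, 42, 60, 91, 124}
sig_q = minhash_signature(query_codewords)
sig_r = minhash_signature(reference_codewords)
print(f"Estimated Jaccard similarity: {minhash_similarity(sig_q, sig_r):.2f}")
```

The appeal of such a scheme is that the fixed-length signature is far more compact than the original feature set, and similarity can be estimated by comparing signature slots rather than whole feature sequences.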