The popular mel-frequency cepstral coefficients (MFCCs) capture a mixture of speaker-related, phonemic, and channel information. Speaker-related information can be further broken down according to articulatory criteria, but exactly how these underlying components are mixed in the features is not well understood. To this end, in this paper we aim to separate the spectra of the glottal source and the vocal tract using glottal inverse filtering, with an application to speaker recognition over telephone lines. Our experiments on the 10sec-10sec condition of the NIST 2006 SRE corpus suggest that the mel-frequency cepstrum of the voice source carries little speaker-discriminative information. In contrast, fusing the vocal tract spectrum with conventional MFCCs improves accuracy, suggesting that vocal tract information deserves greater emphasis.
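As a rough illustration of the kind of source-tract decomposition discussed above, the sketch below uses plain LPC inverse filtering (a simplification, not the paper's exact glottal inverse filtering method) to split one speech frame into an approximate vocal-tract magnitude spectrum and a voice-source (residual) spectrum, and then computes a mel-frequency cepstrum from each. The sampling rate, frame length, LPC order, and filterbank settings are illustrative assumptions.

```python
# Minimal sketch: LPC-based source-tract separation for one frame, followed by
# mel-cepstrum extraction from each spectrum. Parameter choices are assumptions.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter, freqz, get_window
from scipy.fftpack import dct


def lpc_coeffs(frame, order):
    """Autocorrelation-method LPC: returns the error filter A(z) = [1, -a1, ..., -ap]."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])   # solve the Yule-Walker equations
    return np.concatenate(([1.0], -a))


def source_tract_spectra(frame, order=20, n_fft=512):
    """Split one windowed frame into vocal-tract and voice-source magnitude spectra."""
    a = lpc_coeffs(frame, order)
    # Vocal-tract (all-pole) magnitude response 1/|A(e^jw)|
    _, h = freqz([1.0], a, worN=n_fft // 2 + 1)
    tract_spec = np.abs(h)
    # Voice-source estimate: inverse-filter the frame with A(z)
    residual = lfilter(a, [1.0], frame)
    source_spec = np.abs(np.fft.rfft(residual, n_fft))
    return tract_spec, source_spec


def mel_cepstrum(spectrum, sr=8000, n_mels=24, n_ceps=12):
    """Mel-frequency cepstrum of a magnitude spectrum (illustrative settings)."""
    n_fft = 2 * (len(spectrum) - 1)
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    imel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    # Triangular mel filterbank between 0 Hz and sr/2
    mel_pts = np.linspace(mel(0.0), mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * imel(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, len(spectrum)))
    for i in range(n_mels):
        lo, cen, hi = bins[i], bins[i + 1], bins[i + 2]
        if cen > lo:
            fb[i, lo:cen] = np.linspace(0.0, 1.0, cen - lo, endpoint=False)
        if hi > cen:
            fb[i, cen:hi] = np.linspace(1.0, 0.0, hi - cen, endpoint=False)
    energies = np.log(fb @ (spectrum ** 2) + 1e-10)
    return dct(energies, type=2, norm="ortho")[:n_ceps]


if __name__ == "__main__":
    sr = 8000                                     # telephone-band sampling rate
    frame = get_window("hamming", 240) * np.random.randn(240)  # stand-in for a 30 ms speech frame
    tract, source = source_tract_spectra(frame)
    print("vocal-tract cepstrum:", mel_cepstrum(tract, sr))
    print("voice-source cepstrum:", mel_cepstrum(source, sr))
```

In a recognition experiment along the lines described above, cepstra of this kind would be computed per frame and fed to the back-end either on their own or fused with conventional MFCCs.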