This paper presents a new strategy for designing parallel phone recognizers for spoken language recognition. Given a collection of parallel phone recognizers, we select a subset of phones from each phone recognizer for each target language to construct a target-oriented phone tokenizer (TOPT). The resulting collection of target-oriented phone tokenizers is more effective than the original parallel phone recognizers. This approach improves system performance significantly without requiring additional transcribed training data. We validate the effectiveness of the proposed strategy within the framework of parallel phone recognition followed by vector space modeling, or PPR-VSM. We achieve equal error rates of 2.21% and 3.65% on the 2003 and 2005 NIST LRE databases, respectively, for 30-second trials.
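To make the TOPT construction concrete, the sketch below shows one possible way to select a per-target-language phone subset from each parallel recognizer. The abstract does not state the selection criterion; the scoring values, cutoff, and all function and variable names here are illustrative assumptions, not the method used in the paper.

```python
# Hypothetical sketch of TOPT construction: for one target language, keep only
# the highest-scoring phones from each parallel phone recognizer. The scores
# and the top-N cutoff are assumed for illustration, not taken from the paper.
from typing import Dict, List


def build_topt(
    phone_scores: Dict[str, Dict[str, float]],  # recognizer -> {phone: score for this target language}
    phones_per_recognizer: int,
) -> Dict[str, List[str]]:
    """Select a subset of phones from every recognizer for one target language."""
    topt = {}
    for recognizer, scores in phone_scores.items():
        ranked = sorted(scores, key=scores.get, reverse=True)
        topt[recognizer] = ranked[:phones_per_recognizer]
    return topt


# Example: two parallel recognizers; keep the two highest-scoring phones of
# each recognizer for a hypothetical target language.
scores = {
    "english_pr": {"aa": 0.91, "ih": 0.40, "sh": 0.75},
    "mandarin_pr": {"a": 0.82, "zh": 0.88, "i": 0.30},
}
print(build_topt(scores, phones_per_recognizer=2))
```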