Automatically generated text summaries of spoken language may contain incorrect words and passages due to speech recognition errors. This paper describes comparative experiments in which passages with higher speech recognizer confidence scores are favored during ranking. Results show that a relative word error rate reduction of over 10% can be achieved in the summaries while summary accuracy improves markedly at the same time.
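A minimal sketch of the general idea of confidence-weighted passage ranking, under assumed details: combining a passage's relevance score with its mean recognizer word confidence via a linear interpolation. The names (`Passage`, `rank_passages`, `confidence_weight`) and the specific weighting scheme are illustrative assumptions, not the formulation used in the experiments.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Passage:
    text: str
    relevance: float               # relevance/salience score from the summarizer
    word_confidences: List[float]  # per-word ASR confidence scores

def confidence_score(p: Passage) -> float:
    """Mean recognizer confidence over the words of the passage."""
    return sum(p.word_confidences) / len(p.word_confidences) if p.word_confidences else 0.0

def rank_passages(passages: List[Passage], confidence_weight: float = 0.5) -> List[Passage]:
    """Rank passages by a linear combination of relevance and ASR confidence,
    so passages likely to contain recognition errors are demoted (assumed scheme)."""
    def combined(p: Passage) -> float:
        return (1.0 - confidence_weight) * p.relevance + confidence_weight * confidence_score(p)
    return sorted(passages, key=combined, reverse=True)

# Example: the second passage is slightly more relevant but has low recognizer
# confidence, so the confidence-weighted ranking prefers the first.
if __name__ == "__main__":
    candidates = [
        Passage("budget approved for next year", 0.70, [0.95, 0.92, 0.90, 0.93, 0.96]),
        Passage("board rejected the merger bid", 0.75, [0.55, 0.40, 0.60, 0.50, 0.45]),
    ]
    for p in rank_passages(candidates, confidence_weight=0.5):
        print(p.text)
```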