Sheffield’s contribution to the interactive cross-language information retrieval track took the approach of comparing users’ abilities at judging the relevance of machine tran...
For the first interactive Cross-Language Evaluation Forum, the Maryland team focused on a comparison of term-for-term gloss translation with full machine translation for the documen...
This paper describes the experiments of our team for CLEF 2001, which include both official and post-submission runs. We took part in the monolingual task, for Dutch, German, and...