Errors in machine translations of English-Iraqi Arabic dialogues were analyzed at two different points in the system's development using HTER methods to identify errors and human a...
Sherri L. Condon, Dan Parvaz, John S. Aberdeen, Ch...
Abstract. Existing automated MT evaluation methods often require expert human translations. These are produced for every language pair evaluated and, due to this expense, subsequen...
This paper presents METEOR-NEXT, an extended version of the METEOR metric designed to have high correlation with post-editing measures of machine translation quality. We describe c...
We present a new direct data analysis showing that dynamically-built context-dependent phrasal translation lexicons are more useful resources for phrase-based statistical machine tr...
We propose three new features for MT evaluation: source-sentence constrained n-gram precision, source-sentence reordering metrics, and discriminative unigram precision, as well as...