Sciweavers

2007 search results - page 4 / 402
Query: MUC-3 evaluation metrics
ACL
2009
Robust Machine Translation Evaluation with Entailment Features
Existing evaluation metrics for machine translation lack crucial robustness: their correlations with human quality judgments vary considerably across languages and genres. We beli...
Sebastian Padó, Michel Galley, Daniel Juraf...
ANLP
1992
Robust Processing of Real-World Natural-Language Texts
It is often assumed that when natural language processing meets the real world, the ideal of aiming for complete and correct interpretations has to be abandoned. However, our expe...
Jerry R. Hobbs, Douglas E. Appelt, John Bear, Mabr...
NAACL
2010
Extending the METEOR Machine Translation Evaluation Metric to the Phrase Level
This paper presents METEOR-NEXT, an extended version of the METEOR metric designed to have high correlation with post-editing measures of machine translation quality. We describe c...
Michael J. Denkowski, Alon Lavie
EMNLP
2010
Automatic Evaluation of Translation Quality for Distant Language Pairs
Automatic evaluation of Machine Translation (MT) quality is essential to developing high-quality MT systems. Various evaluation metrics have been proposed, and BLEU is now used as ...
Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhit...
TSE
1998
An Evaluation of the MOOD Set of Object-Oriented Software Metrics
This paper describes the results of an investigation into a set of metrics for object-oriented design, called the MOOD metrics. The merits of each of the six MOOD metrics are dis...
Richard H. Carver, Steve Counsell, Reuben V. Nithi