Sciweavers

» Better Evaluation Metrics Lead to Better Machine Translation
LREC 2010
Appraise: An Open-Source Toolkit for Manual Phrase-Based Evaluation of Translations
We describe a focused effort to investigate the performance of phrase-based human evaluation of machine translation output, achieving high annotator agreement. We define phrase-...
Christian Federmann
LREC 2008
Parser Evaluation and the BNC: Evaluating 4 Constituency Parsers with 3 Metrics
We evaluate discriminative parse reranking and parser self-training on a new English test set using four versions of the Charniak parser and a variety of parser evaluation metrics...
Jennifer Foster, Josef van Genabith
ACL 2009
Revisiting Pivot Language Approach for Machine Translation
This paper revisits the pivot language approach for machine translation. First, we investigate three different methods for pivot translation. Then we employ a hybrid method combin...
Hua Wu, Haifeng Wang
ACL 2007
Boosting Statistical Machine Translation by Lemmatization and Linear Interpolation
Data sparseness is one of the factors that degrades statistical machine translation (SMT). Existing work has shown that using morphosyntactic information is an effective solution t...
Ruiqiang Zhang, Eiichiro Sumita
ACL 2006
Minimum Risk Annealing for Training Log-Linear Models
When training the parameters for a natural language system, one would prefer to minimize 1-best loss (error) on an evaluation set. Since the error surface for many natural languag...
David A. Smith, Jason Eisner