Sciweavers

113 search results - page 7 / 23
» A Dataset for Assessing Machine Translation Evaluation Metri...
MT 2010
Metrics for MT evaluation: evaluating reordering
Translating between dissimilar languages requires an account of the use of divergent word orders when expressing the same semantic content. Reordering poses a serious problem for s...
Alexandra Birch, Miles Osborne, Phil Blunsom
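The reordering metrics studied in this line of work compare the permutation mapping source word positions to their target positions. As a minimal sketch, assuming the widely used Kendall's tau distance over such permutations (the function name and normalization are illustrative, not necessarily the paper's exact definition):

    from itertools import combinations

    def kendall_tau_distance(perm):
        # Fraction of position pairs in the wrong relative order:
        # the identity permutation scores 0.0, a full inversion 1.0.
        n = len(perm)
        if n < 2:
            return 0.0
        discordant = sum(1 for i, j in combinations(range(n), 2)
                         if perm[i] > perm[j])
        return discordant / (n * (n - 1) / 2)

    print(kendall_tau_distance([0, 1, 2, 3]))  # 0.0: monotone order
    print(kendall_tau_distance([3, 2, 1, 0]))  # 1.0: fully inverted

A lexical metric such as BLEU notices reordering only indirectly, through broken n-grams; a permutation distance measures it directly.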
EMNLP 2008
Decomposability of Translation Metrics for Improved Evaluation and Efficient Algorithms
BLEU is the de facto standard for evaluation and development of statistical machine translation systems. We describe three real-world situations involving comparisons between diff...
David Chiang, Steve DeNeefe, Yee Seng Chan, Hwee T...
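The decomposability question arises because corpus-level BLEU is computed from pooled sufficient statistics rather than from an average of per-sentence scores. A toy sketch of that distinction, simplified to a single reference with no smoothing (names are illustrative):

    import math
    from collections import Counter

    def ngram_counts(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def bleu_stats(hyp, ref, max_n=4):
        # Sufficient statistics per order n: clipped matches and hypothesis totals.
        stats = []
        for n in range(1, max_n + 1):
            h, r = ngram_counts(hyp, n), ngram_counts(ref, n)
            stats.append((sum(min(c, r[g]) for g, c in h.items()),
                          max(sum(h.values()), 1)))
        return stats, len(hyp), len(ref)

    def bleu(stats, hyp_len, ref_len):
        # Geometric mean of n-gram precisions times the brevity penalty.
        if any(m == 0 for m, _ in stats):
            return 0.0
        log_prec = sum(math.log(m / t) for m, t in stats) / len(stats)
        return min(1.0, math.exp(1 - ref_len / hyp_len)) * math.exp(log_prec)

Corpus BLEU sums these per-sentence statistics before applying the formula; averaging per-sentence BLEU scores instead generally yields a different number, which is what makes sentence-level decomposition non-trivial.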
SDL 2007
TTCN-3 Quality Engineering: Using Learning Techniques to Evaluate Metric Sets
Software metrics are an essential means of assessing software quality. For this assessment, sets of complementary metrics are typically used, since individual metric...
Edith Werner, Jens Grabowski, Helmut Neukirchen, N...
ICTAI 2010 (IEEE)
Support Vector Methods for Sentence Level Machine Translation Evaluation
Recent work in the field of machine translation (MT) evaluation suggests that sentence-level evaluation based on machine learning (ML) can outperform standard metrics such as B...
Antoine Veillard, Elvina Melissa, Cassandra Theodo...
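The general recipe behind such ML-based evaluation is to regress human judgements from a vector of automatic features computed per sentence. A minimal sketch using scikit-learn's support vector regression on synthetic stand-in data (the feature set, scales, and hyperparameters are assumptions, not the authors' setup):

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((200, 5))          # per-sentence metric features (synthetic)
    y = X @ np.array([0.4, 0.2, 0.1, 0.2, 0.1]) \
        + rng.normal(0.0, 0.05, 200)  # mock human adequacy scores

    model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
    print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())

In practice the features would be existing metric scores and other sentence-level signals, with the SVM learning how to weight them against human judgements.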
LREC 2008
Sensitivity of Automated MT Evaluation Metrics on Higher Quality MT Output: BLEU vs Task-Based Evaluation Methods
We report the results of an experiment to assess the ability of automated MT evaluation metrics to remain sensitive to variations in MT quality as the average quality of the compa...
Bogdan Babych, Anthony Hartley
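For reference, the corpus-level BLEU score whose sensitivity is being probed here has the standard definition (a property of the metric itself, not a claim from the paper):

    \mathrm{BLEU} = \mathrm{BP} \cdot \exp\!\Big( \sum_{n=1}^{N} w_n \log p_n \Big),
    \qquad \mathrm{BP} = \min\!\big(1,\, e^{1 - r/c}\big)

where p_n is the modified n-gram precision, w_n the weights (typically 1/N with N = 4), r the reference length, and c the candidate length. The paper's concern is whether this score remains discriminative as the average quality of the compared outputs rises.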