
NAACL 2010

Some Empirical Evidence for Annotation Noise in a Benchmarked Dataset

A number of recent articles in computational linguistics venues have called for a closer examination of the type of noise present in annotated datasets used for benchmarking (Reidsma and Carletta, 2008; Beigman Klebanov and Beigman, 2009). In particular, Beigman Klebanov and Beigman articulated a type of noise they call annotation noise and showed that in the worst case such noise can severely degrade the generalization ability of a linear classifier (Beigman and Beigman Klebanov, 2009). In this paper, we provide quantitative empirical evidence for the existence of this type of noise in a recently benchmarked dataset. The proposed methodology can be used to zero in on unreliable instances, facilitating the generation of cleaner gold standards for benchmarking.
Beata Beigman Klebanov, Eyal Beigman
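The worst-case claim in the abstract — that mislabeled hard instances can degrade a linear classifier — can be illustrated with a toy sketch. This is not the paper's methodology; it is a hypothetical 1-D nearest-mean (minimum-distance) classifier with made-up data, showing how a single label flip on a near-boundary instance shifts the learned decision boundary past a correctly labeled held-out point.

```python
# Illustrative sketch only: annotation noise (a label flip on a hard,
# near-boundary instance) shifting a toy linear decision rule.
# The nearest-mean classifier and all data points here are hypothetical.

def nearest_mean_boundary(class0, class1):
    """Decision threshold of a 1-D nearest-mean classifier:
    predict class 1 iff x > threshold."""
    m0 = sum(class0) / len(class0)
    m1 = sum(class1) / len(class1)
    return (m0 + m1) / 2

# Gold-standard training labels: class 0 left of the origin, class 1 right.
clean0 = [-3.0, -2.0, -1.0]
clean1 = [1.0, 2.0, 3.0]

# Annotation noise: the hard instance x = 1.0 is mislabeled as class 0.
noisy0 = clean0 + [1.0]
noisy1 = [2.0, 3.0]

t_clean = nearest_mean_boundary(clean0, clean1)   # (-2 + 2) / 2 = 0.0
t_noisy = nearest_mean_boundary(noisy0, noisy1)   # (-1.25 + 2.5) / 2 = 0.625

# A held-out borderline instance whose true class is 1.
x_test = 0.5
print(x_test > t_clean)  # True:  classified correctly with clean labels
print(x_test > t_noisy)  # False: one noisy flip moves the boundary past it
```

The flip lands on a hard instance close to the boundary, which is exactly the regime the abstract highlights: easy, far-from-boundary mislabelings would move the class means much less.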