
EMNLP 2011

Training a Log-Linear Parser with Loss Functions via Softmax-Margin

Log-linear parsing models are often trained by optimising likelihood, but we would prefer to optimise for a task-specific metric like F-measure. Softmax-margin is a convex objective for such models that minimises a bound on expected risk for a given loss function, but its naïve application requires the loss to decompose over the predicted structure, which is not true of F-measure. We use softmax-margin to optimise a log-linear CCG parser for a variety of loss functions, and demonstrate a novel dynamic programming algorithm that enables us to use it with F-measure, leading to substantial gains in accuracy on CCGBank. When we embed our loss-trained parser into a larger model that includes supertagging features incorporated via belief propagation, we obtain further improvements and achieve a labelled/unlabelled dependency F-measure of 89.3%/94.0% on gold part-of-speech tags, and 87.2%/92.8% on automatic part-of-speech tags, the best reported results for this task.
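For context, softmax-margin training (introduced by Gimpel and Smith, 2010) augments the log-partition term of conditional likelihood with the loss. A generic sketch of the objective follows, with notation assumed here rather than taken from the paper: w is the weight vector, f the feature function, \ell the loss, and \mathcal{Y}(x_i) the set of candidate parses for input x_i.

\min_{w} \sum_{i=1}^{n} \Bigg( -w^{\top} f(x_i, y_i) + \log \sum_{y \in \mathcal{Y}(x_i)} \exp\Big( w^{\top} f(x_i, y) + \ell(y_i, y) \Big) \Bigg)

Setting \ell = 0 recovers standard conditional likelihood training. When \ell decomposes over parse substructures it can be folded into the usual inside computation; the paper's dynamic programming algorithm is what makes the non-decomposable F-measure loss usable within this framework.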
Michael Auli, Adam Lopez