

On metric embedding for boosting semantic similarity computations

Computing pairwise word semantic similarity is widely used and serves as a building block in many NLP tasks. In this paper, we explore the embedding of the shortest-path metric of a knowledge base (WordNet) into the Hamming hypercube in order to speed up this computation. We show that, although an isometric embedding is intractable, good non-isometric embeddings can be achieved. We report a speedup of three orders of magnitude for computing Leacock and Chodorow (LCH) similarity while preserving strong correlations with the exact values (r = .819, ρ = .826).
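The reported speedup comes from replacing shortest-path queries over the WordNet graph with bitwise operations on fixed-length binary codes in the Hamming hypercube. The Python sketch below illustrates that idea only, under assumed placeholders: the random 128-bit codes, the vocabulary, and the hamming_distance helper are hypothetical and do not reproduce the embedding construction described in the paper.

```python
# Minimal sketch: once each synset has a fixed-length binary code,
# the (approximate) distance between two words reduces to an XOR
# followed by a popcount, instead of a shortest-path query.
# The random codes below are placeholders, NOT the paper's embedding.

import random

CODE_BITS = 128  # assumed code length for illustration


def random_code(bits: int = CODE_BITS) -> int:
    """Stand-in for the binary code an embedding would assign to a word."""
    return random.getrandbits(bits)


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits: one XOR plus a popcount."""
    return bin(a ^ b).count("1")


# Example: approximate pairwise distances over a toy vocabulary.
vocab = {w: random_code() for w in ["cat", "dog", "car", "tree"]}
for w1 in vocab:
    for w2 in vocab:
        if w1 < w2:
            print(w1, w2, hamming_distance(vocab[w1], vocab[w2]))
```

Because the distance computation is a handful of machine instructions on packed integers, it is orders of magnitude cheaper than traversing the taxonomy for every word pair, which is the intuition behind the speedup the abstract reports.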
Type Conference
Year 2015
Where ACL
Authors Julien Subercaze, Christophe Gravier, Frédérique Laforest