A non-negative tensor factorization model for selectional preference induction

Distributional similarity methods have proven to be a valuable tool for the induction of semantic similarity. Until now, most algorithms have used two-way co-occurrence data to compute the meaning of words. Co-occurrence frequencies, however, need not be pairwise: one can easily imagine situations where it is desirable to investigate co-occurrence frequencies of three modes and beyond. This paper investigates a tensor factorization method called non-negative tensor factorization to build a model of three-way co-occurrences. The approach is applied to the problem of selectional preference induction and automatically evaluated in a pseudo-disambiguation task. The results show that non-negative tensor factorization is a promising tool for NLP.
Tim Van de Cruys
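
The abstract describes factorizing a three-way co-occurrence tensor into non-negative factors and using the smoothed reconstruction for pseudo-disambiguation. The sketch below illustrates that idea on a toy verb × subject × object count tensor; the multiplicative-update rule (Frobenius-norm non-negative CP), the rank, and the toy data are illustrative assumptions, not the author's actual implementation.

```python
import numpy as np

def ntf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Non-negative CP factorization of a 3-way tensor X (I x J x K) via
    multiplicative updates; returns factors A, B, C such that
    X[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.random((I, rank))
    B = rng.random((J, rank))
    C = rng.random((K, rank))
    for _ in range(n_iter):
        # Multiplicative updates keep each factor matrix non-negative.
        A *= np.einsum('ijk,jr,kr->ir', X, B, C) / (A @ ((B.T @ B) * (C.T @ C)) + eps)
        B *= np.einsum('ijk,ir,kr->jr', X, A, C) / (B @ ((A.T @ A) * (C.T @ C)) + eps)
        C *= np.einsum('ijk,ir,jr->kr', X, A, B) / (C @ ((A.T @ A) * (B.T @ B)) + eps)
    return A, B, C

def score(A, B, C, v, s, o):
    """Smoothed preference score for a (verb, subject, object) triple."""
    return float(np.einsum('r,r,r->', A[v], B[s], C[o]))

# Hypothetical verb x subject x object co-occurrence counts.
verbs, subjects, objects = ['eat', 'drive'], ['man', 'dog'], ['apple', 'car']
X = np.zeros((2, 2, 2))
X[0, 0, 0] = 5.0   # man eats apple
X[0, 1, 0] = 3.0   # dog eats apple
X[1, 0, 1] = 4.0   # man drives car

A, B, C = ntf(X, rank=2)

# Pseudo-disambiguation: the attested triple should outscore a corrupted one.
print(score(A, B, C, 0, 0, 0))   # 'man eats apple' -> high
print(score(A, B, C, 0, 0, 1))   # 'man eats car'   -> low
```

In a pseudo-disambiguation evaluation of this kind, each attested triple from held-out data is paired with a corrupted triple (one argument replaced at random), and the model is credited when it assigns the attested triple the higher score.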
Type: Journal
Year: 2010
Where: NLE
Authors: Tim Van de Cruys