Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are b...
Eric H. Huang, Richard Socher, Christopher D. Mann...
Current vector-space models of lexical semantics create a single "prototype" vector to represent the meaning of a word. However, due to lexical ambiguity, encoding word ...
This paper presents a novel scheme to address the polysemy of visual words in the widely used bag-of-words model. As a visual word may have multiple meanings, we show it is possib...
In the context of biomedical information retrieval (IR), this paper explores the relationship between the document’s global context and the query’s local context in an attempt ...
We present a new part-of-speech tagger that demonstrates the following ideas: (i) explicit use of both preceding and following tag contexts via a dependency network representation...
Kristina Toutanova, Dan Klein, Christopher D. Mann...