Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are b...
Eric H. Huang, Richard Socher, Christopher D. Mann...
Unsupervised vector-based approaches to semantics can model rich lexical meanings, but they largely fail to capture sentiment information that is central to many word meanings and...
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Da...
We challenge the NLP community to participate in a large-scale, distributed effort to design and build resources for developing and evaluating solutions to new and existing NLP ta...
Lexical resources such as WordNet and the EDR electronic dictionary have been used in several NLP tasks. Probably partly due to the fact that the EDR is not freely available...
Lonneke van der Plas, Vincenzo Pallotta, Martin Ra...
Previous work demonstrated that web counts can be used to approximate bigram frequencies, and thus should be useful for a wide variety of NLP tasks. So far, only two generation ta...
Convolution kernels, such as sequence and tree kernels, are advantageous to both the design and accuracy of many natural language processing (NLP) tasks. Experiments have, howev...
Web count statistics gathered from search engines have been widely used as a resource in a variety of NLP tasks. For some tasks, however, the information they exploit is not fine-...
In recent years, language resources acquired from the Web have been released, and these data improve the performance of applications in several NLP tasks. Although the language resource...