A Scalable Hierarchical Distributed Language Model

Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely used n-gram language models. The main drawback of NPLMs is their extremely long training and testing times. Morin and Bengio have proposed a hierarchical language model built around a binary tree of words, which was two orders of magnitude faster than the non-hierarchical model it was based on. However, it performed considerably worse than its non-hierarchical counterpart, despite using a word tree built with expert knowledge. We introduce a fast hierarchical language model along with a simple feature-based algorithm for automatic construction of word trees from the data. We then show that the resulting models can outperform non-hierarchical neural models as well as the best n-gram models.
Andriy Mnih, Geoffrey E. Hinton
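
To make the abstract's speed claim concrete, the following is a minimal sketch of how a tree-structured output layer computes a word's probability. This is not the authors' implementation; the function tree_word_prob, its argument names, and the encoding of a path as node indices plus left/right signs are hypothetical choices made for illustration. The idea is that each word is a leaf of a binary tree over the vocabulary, and its probability is the product of binary (logistic) decisions taken at the internal nodes on its root-to-leaf path.

import numpy as np

def sigmoid(x):
    # Logistic function used for each binary left/right decision.
    return 1.0 / (1.0 + np.exp(-x))

def tree_word_prob(context_vec, path_nodes, path_signs, node_vectors):
    # Probability of one word under a binary-tree output layer.
    # Hypothetical interface, for illustration only:
    #   context_vec  - predicted feature vector for the context, shape (d,)
    #   path_nodes   - indices of the internal nodes on the word's root-to-leaf path
    #   path_signs   - +1.0 or -1.0 per node, encoding the branch taken at that node
    #   node_vectors - one learned vector per internal node, shape (n_nodes, d)
    prob = 1.0
    for node, sign in zip(path_nodes, path_signs):
        # One logistic decision per internal node; multiplying the decisions
        # along the path yields a distribution that sums to one over all
        # leaves (words) by construction.
        prob *= sigmoid(sign * np.dot(node_vectors[node], context_vec))
    return prob

Scoring one word therefore takes a number of dot products equal to the tree depth, about log2 of the vocabulary size for a balanced tree, instead of one per vocabulary word as in a flat softmax; for a vocabulary of 10,000 words that is roughly 14 dot products versus 10,000, in line with the two-orders-of-magnitude speedup the abstract reports.
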
Type: Conference
Year: 2008
Where: NIPS