ICASSP 2011, IEEE

Distributed training of large scale exponential language models

Shrinkage-based exponential language models, such as the recently introduced Model M, have provided significant gains over a range of tasks [1]. Training such models requires a large amount of computational resources in terms of both time and memory. In this paper, we present a distributed training algorithm for such models based on the idea of cluster expansion [2]. Cluster expansion allows us to efficiently calculate the normalization and expectation terms required for Model M training by minimizing the computation needed between consecutive n-grams. We also show how the algorithm can be implemented in a distributed environment, greatly reducing both the memory required per process and the training time.
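The expensive quantities the abstract refers to are the per-history normalizer Z(h) and the model's expected feature counts, which must be recomputed at every training iteration. The sketch below is a minimal, hypothetical illustration of how such quantities can be computed history by history and aggregated across worker processes; it is not the paper's cluster-expansion algorithm, and the toy vocabulary, feature weights, and the history_stats helper are assumptions made purely for illustration.

```python
# Hypothetical sketch (not the paper's method): normalizer Z(h) and expected
# feature counts for a toy exponential (maxent) n-gram LM, with the per-history
# work split across worker processes. All names and data here are illustrative.
import math
from multiprocessing import Pool

VOCAB = ["the", "cat", "sat", "mat", "</s>"]

# Toy feature weights: ("", w) are unigram features, (h, w) are bigram features.
LAMBDA = {
    ("", "the"): 0.5, ("", "cat"): 0.1,
    ("the", "cat"): 1.2, ("cat", "sat"): 0.9,
}

def score(history, word):
    """Sum of active feature weights for (history, word)."""
    return LAMBDA.get(("", word), 0.0) + LAMBDA.get((history, word), 0.0)

def history_stats(args):
    """For one history h: the normalizer Z(h) and the model expectations
    of the features that can fire with h, weighted by how often h occurs."""
    history, count = args
    exp_scores = {w: math.exp(score(history, w)) for w in VOCAB}
    z = sum(exp_scores.values())
    expectations = {}
    for w, s in exp_scores.items():
        p = s / z
        expectations[("", w)] = expectations.get(("", w), 0.0) + count * p
        expectations[(history, w)] = expectations.get((history, w), 0.0) + count * p
    return z, expectations

if __name__ == "__main__":
    # Histories observed in (toy) training data with their counts.
    histories = [("the", 3), ("cat", 2), ("", 1)]
    with Pool(2) as pool:
        results = pool.map(history_stats, histories)
    # Aggregate expectations across workers, as a reducer step might.
    total = {}
    for _, exp in results:
        for feat, val in exp.items():
            total[feat] = total.get(feat, 0.0) + val
    print(total)
```

In this naive version each history is normalized from scratch; the appeal of cluster expansion, as described in the abstract, is that consecutive n-gram histories share most of their active features, so much of this per-history computation can be reused rather than recomputed.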
Type Conference
Year 2011
Where ICASSP
Authors Abhinav Sethy, Stanley F. Chen, Bhuvana Ramabhadran