Stochastic optimization algorithms typically use learning rate schedules that behave asymptotically as $\mu(t) = \mu_0/t$. The ensemble dynamics (Leen and Moody, 1993) for such algorithms provides an easy path to results on mean squared weight error and asymptotic normality. We apply this approach to stochastic gradient algorithms with momentum. We show that at late times, learning is governed by an effective learning rate $\mu_{\mathrm{eff}} = \mu_0/(1-\beta)$, where $\beta$ is the momentum parameter. We describe the behavior of the asymptotic weight error and give conditions on $\mu_{\mathrm{eff}}$ that ensure optimal convergence speed. Finally, we use the results to develop an adaptive form of momentum that achieves optimal convergence speed independent of $\mu_0$.
Todd K. Leen, Genevieve B. Orr
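To make the effective-learning-rate relation concrete, the following is a minimal numerical sketch (not from the paper) that compares SGD with momentum $\beta$ under an annealed schedule $\mu(t) = \mu_0/t$ against plain SGD run at $\mu_{\mathrm{eff}} = \mu_0/(1-\beta)$ on a one-dimensional quadratic loss; the loss, noise model, and all constants are illustrative assumptions.

```python
# Illustrative sketch (assumptions: 1-D quadratic loss with minimum at w = 0,
# additive Gaussian gradient noise, arbitrary constants). At late times the
# momentum run should decay comparably to plain SGD at mu_eff = mu0 / (1 - beta).
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(w, curvature=1.0, noise_std=1.0):
    """Stochastic gradient of a 1-D quadratic loss with minimum at w = 0."""
    return curvature * w + noise_std * rng.standard_normal()

def sgd_momentum(mu0, beta, steps=20000, w0=1.0):
    """SGD with momentum beta and learning rate mu(t) = mu0 / t; returns |w| trace."""
    w, v = w0, 0.0
    trace = np.empty(steps)
    for t in range(1, steps + 1):
        v = beta * v - (mu0 / t) * noisy_grad(w)
        w = w + v
        trace[t - 1] = abs(w)
    return trace

def sgd_plain(mu_eff, steps=20000, w0=1.0):
    """Plain SGD with learning rate mu_eff / t, for comparison."""
    w = w0
    trace = np.empty(steps)
    for t in range(1, steps + 1):
        w = w - (mu_eff / t) * noisy_grad(w)
        trace[t - 1] = abs(w)
    return trace

if __name__ == "__main__":
    mu0, beta = 0.5, 0.75
    mu_eff = mu0 / (1.0 - beta)  # effective learning rate from the abstract
    m = sgd_momentum(mu0, beta)
    p = sgd_plain(mu_eff)
    print(f"mu_eff = {mu_eff:.2f}")
    print(f"late-time mean |w|, momentum : {m[-5000:].mean():.4f}")
    print(f"late-time mean |w|, plain SGD: {p[-5000:].mean():.4f}")
```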