Gradient Convergence in Gradient Methods with Errors

We consider the gradient method $x_{t+1} = x_t + \gamma_t (s_t + w_t)$, where $s_t$ is a descent direction of a function $f : \Re^n \to \Re$ and $w_t$ is a deterministic or stochastic error. We assume that $\nabla f$ is Lipschitz continuous, that the stepsize $\gamma_t$ diminishes to 0, and that $s_t$ and $w_t$ satisfy standard conditions. We show that either $f(x_t) \to -\infty$ or $f(x_t)$ converges to a finite value and $\nabla f(x_t) \to 0$ (with probability 1 in the stochastic case), and in doing so, we remove various boundedness conditions that are assumed in existing results, such as boundedness from below of $f$, boundedness of $\nabla f(x_t)$, or boundedness of $x_t$.

Key words. gradient methods, incremental gradient methods, stochastic approximation, gradient convergence

AMS subject classifications. 62L20, 90C30

PII. S1052623497331063
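For illustration only (not part of the paper's analysis), the sketch below runs the iteration $x_{t+1} = x_t + \gamma_t (s_t + w_t)$ on a simple quadratic, with $s_t = -\nabla f(x_t)$ as the descent direction, a zero-mean Gaussian error $w_t$, and a diminishing stepsize $\gamma_t = c/(t+1)$. The test function, the noise scale, and the constant $c$ are illustrative assumptions, not choices made in the paper.

```python
import numpy as np

# Illustrative sketch only, not the paper's analysis: run the iteration
#   x_{t+1} = x_t + gamma_t * (s_t + w_t)
# on the quadratic f(x) = 0.5 * ||x||^2 with s_t = -grad f(x_t), a zero-mean
# Gaussian error w_t, and a diminishing stepsize gamma_t = c / (t + 1)
# (so sum gamma_t = infinity while sum gamma_t^2 < infinity).
# The test function, noise scale, and constant c are assumptions for this example.

rng = np.random.default_rng(0)

def grad_f(x):
    # Gradient of f(x) = 0.5 * ||x||^2
    return x

x = np.array([5.0, -3.0])
c = 1.0  # stepsize constant (illustrative choice)

for t in range(10_000):
    gamma_t = c / (t + 1)                       # diminishing stepsize
    s_t = -grad_f(x)                            # descent direction
    w_t = rng.normal(scale=1.0, size=x.shape)   # stochastic error
    x = x + gamma_t * (s_t + w_t)

print("final x:", x)
print("||grad f(x)||:", np.linalg.norm(grad_f(x)))  # small, consistent with grad f(x_t) -> 0
```

With a constant stepsize instead of a diminishing one, the iterates would typically hover in a noise-dominated region rather than driving $\nabla f(x_t)$ toward 0, which is why the diminishing-stepsize condition appears in the result.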
Dimitri P. Bertsekas, John N. Tsitsiklis
Type: Journal
Year: 2000
Where: SIAMJO
Authors: Dimitri P. Bertsekas, John N. Tsitsiklis