Smooth Optimization with Approximate Gradient

We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is computed only up to a small, uniformly bounded error. When the method is applied to semidefinite programs, this means that in some instances only a few leading eigenvalues of the current iterate need to be computed instead of a full matrix exponential, which significantly reduces the method's computational cost. It also allows sparse problems to be solved efficiently using sparse maximum-eigenvalue packages.
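
To make the eigenvalue remark concrete, here is a minimal sketch, assuming the standard smooth surrogate from Nesterov's technique, f_mu(X) = mu * log Tr exp(X/mu), whose exact gradient exp(X/mu) / Tr exp(X/mu) requires a full matrix exponential. Truncating to a few leading eigenpairs yields an approximate gradient of the kind the abstract describes. The function name approx_softmax_gradient and the parameters mu and k are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def approx_softmax_gradient(X, mu, k=10):
    """Rank-k approximation of grad f_mu(X) for
    f_mu(X) = mu * log(Tr exp(X / mu)),
    i.e. exp(X / mu) / Tr exp(X / mu), built from the k leading
    eigenpairs of X only. `mu` and `k` are illustrative, not
    values from the paper."""
    # k leading (largest algebraic) eigenpairs; for small mu these
    # dominate exp(X / mu), so the truncation error stays small.
    w, V = eigsh(X, k=k, which='LA')
    # Shift by the top eigenvalue before exponentiating, for stability.
    e = np.exp((w - w.max()) / mu)
    e /= e.sum()  # softmax weights over the computed part of the spectrum
    # sum_j e_j * v_j v_j^T approximates exp(X/mu) / Tr exp(X/mu)
    return (V * e) @ V.T
```

Note that eigsh only needs matrix-vector products with X, which is why sparse problems can be handled efficiently by sparse maximum-eigenvalue routines, as the abstract points out.
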
Type: Journal
Year: 2008
Where: SIAM Journal on Optimization (SIAMJO)
Authors: Alexandre d'Aspremont