
ICASSP 2010, IEEE

Iterated smoothing for accelerated gradient convex minimization in signal processing

In this paper, we consider the problem of minimizing a non-smooth convex function using first-order methods. The number of iterations required to guarantee a given accuracy for such problems is often excessive, and several methods, e.g., restart methods, have been proposed to speed up convergence. In the restart method, a smoothness parameter is adjusted so that a sequence of smoother approximations of the original non-smooth problem is solved before the original one, with the previous estimate used as the starting point each time. Instead of adjusting the smoothness parameter only after each restart, we propose a method that modifies the smoothness parameter in each iteration. We prove convergence and provide simulation examples for two typical signal processing applications, namely total variation denoising and 1-norm minimization. The simulations demonstrate that the proposed method requires fewer iterations and has lower complexity than the restart method.
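The abstract does not specify the authors' exact update rule for the smoothness parameter, so the sketch below only illustrates the general idea: an accelerated (Nesterov-style) gradient method applied to a smoothed 1-norm regularized least-squares problem, where the smoothness parameter mu is tightened at every iteration rather than only at restarts. The Huber smoothing, the FISTA-style momentum, the geometric decay of mu, and the names iterated_smoothing_l1, huber_grad, mu0, mu_min, and decay are all illustrative assumptions, not the method from the paper.

```python
import numpy as np


def huber_grad(x, mu):
    """Gradient of the Huber smoothing of |x| with smoothness parameter mu."""
    return np.clip(x / mu, -1.0, 1.0)


def iterated_smoothing_l1(A, b, lam, mu0=1.0, mu_min=1e-4, n_iter=500, decay=0.99):
    """Accelerated gradient descent on a smoothed version of
    0.5*||Ax - b||^2 + lam*||x||_1, shrinking the smoothness parameter mu
    every iteration (illustrative sketch, not the authors' algorithm)."""
    n = A.shape[1]
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    L_data = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the data term
    mu = mu0
    for _ in range(n_iter):
        # Gradient of the smoothed objective at the momentum point y.
        grad = A.T @ (A @ y - b) + lam * huber_grad(y, mu)
        L = L_data + lam / mu               # Lipschitz constant of the smoothed objective
        x_new = y - grad / L                # gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
        mu = max(mu_min, decay * mu)        # tighten the smoothing each iteration
    return x


# Hypothetical usage on a small sparse-recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, size=8, replace=False)] = 1.0
b = A @ x_true
x_hat = iterated_smoothing_l1(A, b, lam=0.1)
```

In a restart scheme, mu would stay fixed for a whole run and only be reduced before the next warm-started run; here it decays smoothly inside the single loop, which is the contrast the abstract draws.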
Type: Conference
Year: 2010
Where: ICASSP
Authors: Tobias Lindstrøm Jensen, Jan Østergaard, Søren Holdt Jensen