We consider the gradient method x_{t+1} = x_t + γ_t(s_t + w_t), where s_t is a descent direction of a function f : ℝⁿ → ℝ and w_t is a deterministic or stochastic error. We assume that f is Lip...
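To make the iteration concrete, here is a minimal Python sketch of the scheme x_{t+1} = x_t + γ_t(s_t + w_t), assuming for illustration a diminishing stepsize γ_t = 1/(t+1), the steepest-descent choice s_t = -∇f(x_t), and small Gaussian noise w_t; these choices, the quadratic test function, and the helper name noisy_gradient_method are illustrative, not taken from the paper.

```python
import numpy as np

def noisy_gradient_method(grad, x0, iters=1000, noise_scale=0.01, seed=0):
    """Run x_{t+1} = x_t + gamma_t * (s_t + w_t) with s_t = -grad(x_t)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for t in range(iters):
        gamma_t = 1.0 / (t + 1)                           # diminishing stepsize
        s_t = -grad(x)                                    # descent direction
        w_t = noise_scale * rng.standard_normal(x.shape)  # stochastic error
        x = x + gamma_t * (s_t + w_t)
    return x

# Example: f(x) = 0.5 * ||x||^2, so grad f(x) = x and the minimizer is 0.
x_final = noisy_gradient_method(lambda x: x, x0=np.ones(3))
print(x_final)
```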
The asymptotic behavior of stochastic gradient algorithms is studied. Relying on some results of differential geometry (Lojasiewicz gradient inequality), the almost sure point conve...
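For reference, the Lojasiewicz gradient inequality invoked here states, in one standard form taken from the general literature rather than from this abstract, that for a real-analytic f and a critical point x* there exist constants C > 0, θ ∈ [1/2, 1), and a neighborhood of x* in which |f(x) − f(x*)|^θ ≤ C‖∇f(x)‖.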
Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured and large scale. When the data are noisy, it is of i...
The convergence rate is analyzed for the sparse reconstruction by separable approximation (SpaRSA) algorithm for minimizing a sum f(x) + ψ(x), where f is smooth and ψ is convex, ...
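For concreteness, below is a minimal proximal-gradient sketch for a composite objective f(x) + ψ(x), assuming ψ(x) = λ‖x‖₁ so that the proximal step is soft-thresholding. SpaRSA's Barzilai-Borwein stepsizes and acceptance test are omitted, so this is an illustrative simplification rather than the algorithm analyzed in the paper; all function names and the least-squares example are assumptions for the sketch.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def prox_gradient(grad_f, x0, lam, step, iters=500):
    """Minimize f(x) + lam*||x||_1 via x <- prox(x - step * grad_f(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = soft_threshold(x - step * grad_f(x), step * lam)
    return x

# Example: f(x) = 0.5*||A x - b||^2, so grad_f(x) = A.T @ (A x - b).
A = np.array([[1.0, 0.2], [0.1, 1.0], [0.3, 0.4]])
b = A @ np.array([1.0, 0.0])             # data generated from a sparse vector
grad_f = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of grad_f
x_hat = prox_gradient(grad_f, np.zeros(2), lam=0.05, step=step)
print(x_hat)
```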
In this paper, the complex-step method is applied in the setting of numerical optimisation problems involving dynamical systems modelled as nonlinear differential equations. The m...
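The core of the complex-step method is the first-derivative approximation f'(x) ≈ Im f(x + ih) / h. A minimal sketch on a scalar test function follows; the coupling with dynamical systems modelled as differential equations is not reproduced here, and the test function is an assumption for illustration.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """Approximate f'(x) via the complex-step formula Im f(x + i*h) / h.

    Unlike finite differences, there is no subtractive cancellation,
    so h can be taken extremely small without loss of accuracy.
    """
    return np.imag(f(x + 1j * h)) / h

# Example: f(x) = exp(x) * sin(x); exact derivative is exp(x) * (sin(x) + cos(x)).
f = lambda x: np.exp(x) * np.sin(x)
x0 = 0.7
print(complex_step_derivative(f, x0), np.exp(x0) * (np.sin(x0) + np.cos(x0)))
```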