Sciweavers

MP
2016
Parallel coordinate descent methods for big data optimization
In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable ...
Peter Richtárik, Martin Takáč
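A minimal sketch of the kind of randomized parallel (block) coordinate descent the paper studies, applied here to a least-squares objective. The quadratic objective, the uniform block sampling, and the conservative damping factor beta = tau are illustrative assumptions made for this sketch, not the authors' ESO-based step sizes.

```python
import numpy as np

def parallel_cd(A, b, tau=4, beta=None, iters=500, seed=0):
    """Randomized parallel coordinate descent for f(x) = 0.5*||Ax - b||^2.

    Each iteration updates a random subset of `tau` coordinates in parallel;
    `beta` damps the per-coordinate step so the simultaneous update stays safe
    (a conservative default of beta = tau is assumed here).
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    if beta is None:
        beta = tau                        # conservative, worst-case damping
    L = np.sum(A * A, axis=0)             # coordinate-wise Lipschitz constants
    x = np.zeros(n)
    r = A @ x - b                          # residual, kept up to date
    for _ in range(iters):
        S = rng.choice(n, size=tau, replace=False)
        g = A[:, S].T @ r                  # partial gradients of the chosen block
        step = -g / (beta * L[S])
        x[S] += step
        r += A[:, S] @ step                # cheap residual update
    return x

A = np.random.default_rng(1).standard_normal((60, 20))
b = A @ np.ones(20)
print(np.linalg.norm(A @ parallel_cd(A, b) - b))
```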
MP
2016
A semismooth Newton-CG based dual PPA for matrix spectral norm approximation problems
We consider a class of matrix spectral norm approximation problems for finding an affine combination of given matrices having the minimal spectral norm subject to some prescribed ...
Caihua Chen, Yong-Jin Liu, Defeng Sun, Kim-Chuan Toh
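For orientation only, a naive subgradient baseline for the problem class in question: minimizing the spectral norm of an affine combination of given matrices. The paper's semismooth Newton-CG based dual PPA is a different and far more efficient method; this sketch just illustrates the objective, with a made-up diminishing step size and no constraints.

```python
import numpy as np

def spectral_norm_affine(A0, As, iters=300, step=0.05):
    """Subgradient descent on g(y) = || A0 + sum_i y[i] * As[i] ||_2 (unconstrained)."""
    y = np.zeros(len(As))
    for k in range(iters):
        M = A0 + sum(yi * Ai for yi, Ai in zip(y, As))
        U, s, Vt = np.linalg.svd(M)
        u, v = U[:, 0], Vt[0, :]                       # top singular pair of M
        grad = np.array([u @ Ai @ v for Ai in As])     # subgradient of g at y
        y -= step / np.sqrt(k + 1) * grad
    M = A0 + sum(yi * Ai for yi, Ai in zip(y, As))
    return y, np.linalg.norm(M, 2)

rng = np.random.default_rng(0)
A0 = rng.standard_normal((5, 5))
As = [rng.standard_normal((5, 5)) for _ in range(3)]
y, val = spectral_norm_affine(A0, As)
print(y, round(val, 3))
```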
MP
2016
The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
The alternating direction method of multipliers (ADMM) is now widely used in many fields, and its convergence was proved when two blocks of variables are alternately updated. ...
Caihua Chen, Bingsheng He, Yinyu Ye, Xiaoming Yuan
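A sketch of the directly extended three-block ADMM scheme that the paper analyzes, written here for scalar blocks with simple quadratic terms (both assumptions made for brevity): the three blocks are updated in Gauss-Seidel fashion and then the multiplier is updated. The paper's point is precisely that this straightforward extension need not converge, so the sketch shows the scheme, not a recommended method.

```python
import numpy as np

def admm_3block(a1, a2, a3, b, c, beta=1.0, iters=100):
    """Direct 3-block ADMM for
        min  0.5*(x1-c1)^2 + 0.5*(x2-c2)^2 + 0.5*(x3-c3)^2
        s.t. a1*x1 + a2*x2 + a3*x3 = b  (a_i, b vectors; x_i scalars).
    """
    x = np.zeros(3)
    lam = np.zeros_like(b)                 # Lagrange multiplier
    A = [a1, a2, a3]
    for _ in range(iters):
        for i in range(3):                 # Gauss-Seidel sweep over the blocks
            rest = sum(A[j] * x[j] for j in range(3) if j != i) - b
            x[i] = (c[i] - lam @ A[i] - beta * A[i] @ rest) / (1.0 + beta * A[i] @ A[i])
        lam = lam + beta * (sum(A[j] * x[j] for j in range(3)) - b)
    return x

rng = np.random.default_rng(0)
a1, a2, a3 = rng.standard_normal((3, 4))   # three constraint columns in R^4
print(admm_3block(a1, a2, a3, b=np.zeros(4), c=np.array([1.0, -2.0, 0.5])))
```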
MP
2016
Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
We improve a recent guarantee of Bach and Moulines on the linear convergence of SGD for smooth and strongly convex objectives, reducing a quadratic dependence on the strong convexity ...
Deanna Needell, Nathan Srebro, Rachel Ward
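A sketch of the randomized Kaczmarz method with the classical row-norm-squared (importance-weighted) sampling that the paper connects to SGD. The test data are synthetic, and the weighting shown is the standard choice, not necessarily the exact sampling scheme analyzed in the paper.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b.

    Row i is sampled with probability proportional to ||a_i||^2, and the
    iterate is projected onto the hyperplane {x : a_i . x = b_i}.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A * A, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        ai = A[i]
        x += (b[i] - ai @ x) / row_norms[i] * ai   # projection step
    return x

A = np.random.default_rng(2).standard_normal((200, 20))
x_true = np.random.default_rng(3).standard_normal(20)
b = A @ x_true
print(np.linalg.norm(randomized_kaczmarz(A, b) - x_true))
```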
MP
2016
Error bounds for mixed integer linear optimization problems
We introduce computable a-priori and a-posteriori error bounds for optimality and feasibility of a point generated as the rounding of an optimal point of the LP relaxation of a mixed integer linear optimization problem ...
Oliver Stein
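A toy illustration of the rounding idea behind such bounds, using SciPy's linprog: solve the LP relaxation, round the LP-optimal point, and read off a feasibility violation and a naive LP-gap certificate. The LP data below are invented, and the quantities computed are much cruder than the a-priori and a-posteriori bounds developed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative MILP:  min c.x  s.t.  A_ub x <= b_ub,  x >= 0,  x integer.
c = np.array([-3.0, -2.0])
A_ub = np.array([[2.0, 1.0], [1.0, 3.0]])
b_ub = np.array([11.0, 16.0])

# 1. Solve the LP relaxation (integrality dropped).
lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")

# 2. Round the LP-optimal point to the nearest integer point.
x_round = np.round(lp.x)

# 3. A-posteriori quantities: feasibility violation of the rounded point and the
#    gap between its objective value and the LP lower bound.  Since the LP value
#    lower-bounds the MILP optimum, a feasible rounded point is at most `gap`
#    away from optimal.
violation = np.maximum(A_ub @ x_round - b_ub, 0.0).max()
gap = c @ x_round - lp.fun
print(f"rounded point {x_round}, feasibility violation {violation:.3f}, optimality gap <= {gap:.3f}")
```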
MP
2016
Solving variational inequalities with monotone operators on domains given by Linear Minimization Oracles
The standard algorithms for solving large-scale convex-concave saddle point problems, or, more generally, variational inequalities with monotone operators, are proximal type algorithms ...
Anatoli Juditsky, Arkadi Nemirovski
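To illustrate what a Linear Minimization Oracle (LMO) provides, here is the plain conditional gradient (Frank-Wolfe) method on the probability simplex, which touches the feasible set only through the LMO. This is a convex-minimization special case for intuition only; the paper's algorithms for general monotone variational inequalities are different, and the simplex domain and test problem are assumptions made for this sketch.

```python
import numpy as np

def lmo_simplex(g):
    """Linear minimization oracle for the probability simplex:
    returns argmin over the simplex of <g, x>, always a vertex."""
    e = np.zeros_like(g)
    e[np.argmin(g)] = 1.0
    return e

def conditional_gradient(grad, x0, iters=200):
    """Frank-Wolfe iteration using only the LMO to access the feasible set."""
    x = x0.copy()
    for k in range(iters):
        s = lmo_simplex(grad(x))
        gamma = 2.0 / (k + 2.0)            # standard open-loop step size
        x = (1 - gamma) * x + gamma * s
    return x

# Example: project a point y onto the simplex, i.e. minimize 0.5*||x - y||^2.
y = np.array([0.9, 0.4, -0.3, 0.1])
x = conditional_gradient(lambda x: x - y, np.full(4, 0.25))
print(x, x.sum())
```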
MP
2016
Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime ...
Shai Shalev-Shwartz, Tong Zhang
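A sketch of plain (non-accelerated, non-proximal) stochastic dual coordinate ascent for ridge regression, showing the closed-form dual coordinate updates that the accelerated proximal variant builds on. The squared loss and the resulting update formula are standard SDCA choices assumed here for illustration, not the paper's full method.

```python
import numpy as np

def sdca_ridge(X, y, lam=0.1, epochs=30, seed=0):
    """Stochastic dual coordinate ascent for ridge regression
        min_w  (1/n) * sum_i 0.5*(x_i . w - y_i)^2 + (lam/2)*||w||^2,
    maintaining the primal-dual link  w = (1/(lam*n)) * sum_i alpha_i * x_i.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    sq = np.sum(X * X, axis=1)             # ||x_i||^2
    for _ in range(epochs):
        for i in rng.permutation(n):
            # closed-form maximizer of the dual in coordinate i (squared loss)
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq[i] / (lam * n))
            alpha[i] += delta
            w += delta * X[i] / (lam * n)  # keep w consistent with alpha
    return w

X = np.random.default_rng(6).standard_normal((100, 5))
y = X @ np.arange(1.0, 6.0)
print(sdca_ridge(X, y))
```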
MP
2016
Accelerated gradient methods for nonconvex nonlinear and stochastic programming
In this paper, we generalize Nesterov’s well-known accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic ...
Saeed Ghadimi, Guanghui Lan
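For reference, a minimal sketch of the classical Nesterov AG iteration for smooth convex problems, i.e. the starting point that the paper generalizes. The step size 1/L, the FISTA-style momentum sequence, and the quadratic test problem are illustrative assumptions of this sketch.

```python
import numpy as np

def nesterov_ag(grad, x0, L, iters=200):
    """Classical Nesterov accelerated gradient for an L-smooth convex objective."""
    x = x0.copy()
    x_prev = x0.copy()
    y = x0.copy()
    t = 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L                          # gradient step at extrapolated point
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_next) * (x_new - x_prev)  # momentum / extrapolation
        x_prev, x, t = x_new, x_new, t_next
    return x

# Example: minimize the smooth convex quadratic 0.5*||Ax - b||^2.
A = np.random.default_rng(4).standard_normal((50, 10))
b = np.random.default_rng(5).standard_normal(50)
L = np.linalg.norm(A, 2) ** 2                            # smoothness (Lipschitz) constant
x = nesterov_ag(lambda x: A.T @ (A @ x - b), np.zeros(10), L)
print(np.linalg.norm(A.T @ (A @ x - b)))
```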