We consider a distributed multi-agent network system in which the goal is to minimize a sum of agent objective functions subject to a common set of constraints. For this problem, we propose a distributed subgradient algorithm in which each agent maintains an iterate sequence and, in each iteration, communicates its latest iterate to its neighbors. Each agent then averages the received iterates with its own iterate, adjusts the average using a subgradient of its own function (known only with stochastic errors), and projects the result onto the constraint set. The focus of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm for diminishing and non-diminishing stepsizes. Under the additional condition that the mean of the errors diminishes, we prove that with diminishing steps...
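The iteration described above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the ring network, the doubly stochastic weight matrix `W`, the quadratic objectives `targets`, the box constraint, and the noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

num_agents, dim = 4, 2
# Illustrative objectives: agent i minimizes f_i(x) = ||x - c_i||^2.
targets = rng.normal(size=(num_agents, dim))
lo, hi = -0.5, 0.5  # assumed common box constraint set X

# Doubly stochastic mixing weights on a ring: each agent averages
# its own iterate with those received from its two neighbors.
W = np.zeros((num_agents, num_agents))
for i in range(num_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % num_agents] = 0.25
    W[i, (i + 1) % num_agents] = 0.25

def project(x):
    # Euclidean projection onto the box X = [lo, hi]^dim.
    return np.clip(x, lo, hi)

x = rng.normal(size=(num_agents, dim))  # initial iterates, one row per agent
for k in range(1, 5001):
    alpha = 1.0 / k                            # diminishing stepsize
    v = W @ x                                  # average neighbors' iterates
    grad = 2.0 * (v - targets)                 # subgradient of f_i at v_i
    noise = 0.1 * rng.normal(size=grad.shape)  # zero-mean stochastic error
                                               # with bounded second moment
    x = project(v - alpha * (grad + noise))    # subgradient step + projection

# With zero-mean errors and diminishing steps, the agents' iterates
# approach consensus near the constrained minimizer of sum_i f_i.
consensus_gap = np.max(np.abs(x - x.mean(axis=0)))
optimum = project(targets.mean(axis=0))
```

With this zero-mean noise the iterates of all agents cluster near `optimum`; under biased errors or a constant stepsize, the abstract indicates only limiting performance bounds, not exact convergence.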
S. Sundhar Ram, Angelia Nedic, Venugopal V. Veeravalli