— In this paper we present a gradient method for iteratively updating the local controllers of a distributed linear system driven by stochastic disturbances. The control objective is to minimize the sum of the variances of the states and inputs at all nodes. We show that the gradients of this objective can be estimated in a distributed manner, using data from a forward simulation of the system model and a backward simulation of the adjoint equations. Iteratively updating the local controllers with these gradient estimates yields convergence towards a locally optimal distributed controller.
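As a rough, centralized illustration of the forward/backward idea summarized above (not the paper's distributed implementation), the sketch below assumes discrete-time dynamics x_{t+1} = A x_t + B u_t + w_t, a static feedback law u_t = -K x_t, and the finite-horizon cost J(K) = (1/T) E[sum_t x_t' Q x_t + u_t' R u_t]; the function name, arguments, and specific recursions are assumptions made for illustration only.

```python
import numpy as np

def adjoint_gradient(A, B, K, Q, R, x0, W_cov, T, rng):
    """Monte Carlo estimate of dJ/dK from one forward and one backward pass.

    Sketch only: J(K) = (1/T) E[ sum_t x_t' Q x_t + u_t' R u_t ] with u_t = -K x_t.
    """
    n, m = B.shape
    Acl = A - B @ K                       # closed-loop dynamics matrix

    # Forward simulation of the closed-loop system driven by noise.
    X = np.zeros((T + 1, n))
    X[0] = x0
    for t in range(T):
        w = rng.multivariate_normal(np.zeros(n), W_cov)
        X[t + 1] = Acl @ X[t] + w

    # Backward simulation of the adjoint (co-state) recursion, lam[T] = 0.
    lam = np.zeros((T + 1, n))
    for t in range(T - 1, -1, -1):
        lam[t] = Acl.T @ lam[t + 1] + (2.0 / T) * (Q + K.T @ R @ K) @ X[t]

    # Accumulate the gradient estimate from states and co-states.
    G = np.zeros_like(K)
    for t in range(T):
        G += (2.0 / T) * R @ K @ np.outer(X[t], X[t]) \
             - B.T @ np.outer(lam[t + 1], X[t])
    return G
```

A gradient step that respects a prescribed controller sparsity pattern could then be written as `K -= step * (G * mask)`, where `mask` is a hypothetical 0/1 matrix encoding which state measurements each local controller may use; in expectation the estimate above reduces to the standard closed-loop LQ gradient 2[(R + B'PB)K - B'PA]Σ.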