Product Distribution (PD) theory was recently developed as a framework for analyzing and optimizing distributed systems. In this paper we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MAS's), i.e., for distributed stochastic optimization using MAS's. A traditional way to perform such optimization is to have each agent run its own Reinforcement Learning (RL) algorithm. PD theory provides an alternative based on a variant of Newton's method that operates directly on the agents' probability distributions. We compare this alternative to RL-based search in three sets of computer experiments; the PD-theory-based approach outperforms the RL-based scheme in all three domains.
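To make the setting concrete, the following is a minimal, hypothetical sketch — not the paper's actual algorithm — of the general idea of optimizing over agents' probability distributions rather than their actions. It assumes a toy two-agent coordination cost and a damped move of each agent's distribution toward its Boltzmann distribution at a fixed temperature; the cost matrix, temperature, and step size are all illustrative choices, and the update is a simplification of the Newton-style distribution updates PD theory prescribes.

```python
import numpy as np

# Toy illustration (assumed setup, not the paper's method): two agents
# each maintain an independent categorical distribution over two moves.
# The joint cost G charges 1 when their moves mismatch, 0 when they match.
# Each agent repeatedly nudges its distribution toward the Boltzmann
# distribution over its own moves, given the other agent's current mix.

def expected_cost(q1, q2, G):
    # E_{x1~q1, x2~q2}[ G[x1, x2] ]
    return q1 @ G @ q2

def update(q_self, q_other, G_cond, T, step):
    # Expected cost of each of this agent's moves, averaging over the
    # other agent's current distribution.
    e = G_cond @ q_other
    # Boltzmann target distribution at temperature T.
    target = np.exp(-e / T)
    target /= target.sum()
    # Damped step toward the target, renormalized onto the simplex.
    q_new = (1 - step) * q_self + step * target
    return q_new / q_new.sum()

G = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # mismatch costs 1, matching costs 0

rng = np.random.default_rng(0)
q1 = rng.dirichlet(np.ones(2))
q2 = rng.dirichlet(np.ones(2))
for _ in range(200):
    q1 = update(q1, q2, G, T=0.1, step=0.5)
    q2 = update(q2, q1, G.T, T=0.1, step=0.5)
# After the loop, the two distributions concentrate on a matching
# pair of moves, driving the expected cost toward zero.
```

The optimization variables here are the distributions themselves, which is what distinguishes this family of methods from RL schemes that sample actions and update value estimates.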