In ergodic MDPs we consider stationary distributions of policies that coincide in all but n states, in each of which one of two possible actions is chosen. We give conditions and formulas for the linear dependence of the stationary distributions of n+2 such policies, and show some results about combinations and mixtures of policies.

Key words: Markov decision process; Markov chain; stationary distribution

1991 MSC: Primary: 90C40, 60J10; Secondary: 60J20
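As an illustrative sketch of the simplest case (n = 1), the following NumPy snippet builds three policies of a small ergodic MDP that coincide in all but one state, where two actions are available, and checks numerically that their stationary distributions are linearly dependent. It relies on the classical fact that, for policies differing in a single state, the stationary distribution of any mixture lies on the segment between the two extreme stationary distributions; all transition probabilities below are invented for illustration only.

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi of an ergodic chain: pi @ P = pi, sum(pi) = 1."""
    n = P.shape[0]
    # Stack the balance equations with the normalization constraint.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Hypothetical 4-state MDP: rows for states 1..3 are shared by all policies;
# only state 0 offers a choice between two actions.
shared = np.array([[0.20, 0.20, 0.30, 0.30],
                   [0.30, 0.30, 0.20, 0.20],
                   [0.25, 0.25, 0.25, 0.25]])
row_a = np.array([0.1, 0.3, 0.3, 0.3])   # action a in state 0
row_b = np.array([0.6, 0.2, 0.1, 0.1])   # action b in state 0
row_mix = 0.5 * row_a + 0.5 * row_b      # 50/50 mixture in state 0

# Stationary distributions of the three policies (n + 2 = 3 for n = 1),
# stacked as rows of a 3 x 4 matrix.
pis = np.array([stationary(np.vstack([row, shared]))
                for row in (row_a, row_b, row_mix)])

# The three distributions are linearly dependent: the matrix has rank 2.
print(np.linalg.matrix_rank(pis, tol=1e-9))  # -> 2
```

Note that the mixture's stationary distribution is a convex combination of the other two with a weight that generally differs from the mixing probability 0.5, which is why the paper's formulas for the dependence coefficients are of interest.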