Decentralized Markov Decision Processes are a powerful, general model of decentralized, cooperative multi-agent problem solving. The high complexity of the general problem leads to a focus on restricted models. While the worst-case hardness of such restricted problems is often lower, less is known about the actual difficulty of given instances. We show tight connections between the structure of agent interactions and the essential dimensionality of various problems. Bounds are placed on problem difficulty, given restrictions on the type and number of interactions between agents. These bounds arise from a bilinear programming formulation of the problem; from such a formulation, a more compact reduced form can be generated automatically, and the original problem rewritten to take advantage of the reduction.
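For concreteness, the bilinear programming formulation referred to above can be sketched in the standard form of a separable bilinear program; the symbols $r_1$, $r_2$, $C$, $A_1$, $A_2$, $b_1$, $b_2$ are generic placeholders for this sketch, not the specific reduction developed in the paper:
\begin{align*}
\max_{x,\, y} \quad & r_1^{\top} x \;+\; x^{\top} C\, y \;+\; r_2^{\top} y \\
\text{s.t.} \quad & A_1 x = b_1, \quad x \ge 0, \\
& A_2 y = b_2, \quad y \ge 0,
\end{align*}
where $x$ and $y$ collect each agent's decision variables and the matrix $C$ couples their rewards; intuitively, the structure of $C$ encodes the agent interactions whose type and number drive the essential dimensionality discussed in the abstract.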