In this paper, we focus on coordination issues in a multiagent setting. Two coordination algorithms based on reinforcement learning are presented and theoretically analyzed. Our Fuzzy Subjective Task Structure (FSTS) model is described and extended so that the information essential to agent coordination is explicitly and effectively modeled and incorporated into a general reinforcement learning framework. Compared with other learning-based coordination approaches, we argue that, because it explicitly models and exploits the interdependencies among agents, our approach is more efficient and effective, and therefore more widely applicable.
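To make the setting concrete, the following is a minimal, purely illustrative sketch of the coordination problem in a general reinforcement learning structure: two independent Q-learners in a repeated coordination game, each updating its own value table from a shared reward without any explicit model of the other agent. It is not the FSTS-based algorithms of this paper; the game, the payoff, and all parameter values are hypothetical and chosen only for illustration.

```python
# Hypothetical sketch: two independent Q-learners in a repeated coordination
# matrix game. Each agent ignores the other's existence, which is the kind of
# unmodeled interdependency that explicit coordination mechanisms address.
import random

ACTIONS = [0, 1]                        # two actions per agent (hypothetical game)
ALPHA, GAMMA, EPSILON = 0.1, 0.0, 0.1   # stateless repeated game, so gamma = 0


def joint_reward(a0, a1):
    # Agents are rewarded only when they choose the same action.
    return 1.0 if a0 == a1 else 0.0


def select(q):
    # Epsilon-greedy action selection over a single-state Q-table.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])


q_tables = [{a: 0.0 for a in ACTIONS} for _ in range(2)]

for episode in range(5000):
    acts = [select(q) for q in q_tables]
    r = joint_reward(*acts)
    for q, a in zip(q_tables, acts):
        # Standard Q-learning update; with gamma = 0 it reduces to a
        # running average of the reward for the chosen action.
        q[a] += ALPHA * (r - q[a])

print(q_tables)  # the agents usually, but not always, settle on the same action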