We describe a system that successfully transfers value-function knowledge across multiple subdomains of real-time strategy games in the context of multiagent reinforcement learning. First, we implement an assignment-based decomposition architecture, which decomposes the problem of coordinating multiple agents into the two levels of task assignment and task execution. Second, a hybrid model-based approach allows us to use simple deterministic action models while relying on sampling for the opponents' actions. Third, value functions based on parameterized relational templates enable transfer across subdomains with different numbers of agents.

Keywords: reinforcement learning; Markov decision processes; assignment problem; coordination; transfer learning
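To make the assignment-based decomposition concrete, the sketch below illustrates the assignment level in isolation: agents are matched to tasks by solving an assignment problem over estimated agent-task values, while task execution is left to a separate lower level. This is only a minimal illustration under assumed names; `estimated_value`, `assign_agents_to_tasks`, and the distance-based value stand-in are hypothetical and are not the paper's actual value function or implementation.

```python
# Minimal sketch of the assignment level of an assignment-based decomposition.
# Hypothetical example: the learned value function is replaced here by a simple
# negative-distance heuristic purely for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment


def estimated_value(agent, task):
    # Placeholder for a learned estimate of the value of assigning this agent
    # to this task (e.g., from a relational value-function template).
    return -np.hypot(agent[0] - task[0], agent[1] - task[1])


def assign_agents_to_tasks(agents, tasks):
    # Build the agent-by-task value matrix and solve the assignment problem.
    values = np.array([[estimated_value(a, t) for t in tasks] for a in agents])
    # linear_sum_assignment minimizes cost, so negate values to maximize them.
    rows, cols = linear_sum_assignment(-values)
    return list(zip(rows, cols))


if __name__ == "__main__":
    agents = [(0.0, 0.0), (5.0, 5.0)]
    tasks = [(4.0, 4.0), (1.0, 1.0)]
    # Each pair (agent_index, task_index) would then be handled by the
    # task-execution level, which this sketch omits.
    print(assign_agents_to_tasks(agents, tasks))
```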