In typical multiagent teamwork settings, the teammates are either programmed together or are provided with standard communication languages and coordination protocols. In contrast, this paper presents an ad hoc team setting in which the teammates are not pre-coordinated, yet must still work together to achieve their common goal(s). We represent a specific instance of this scenario, in which a teammate has limited action capabilities and a fixed, known behavior, as a finite-horizon, cooperative k-armed bandit. In addition to motivating and studying this novel ad hoc teamwork scenario, the paper contributes to the k-armed bandits literature by characterizing the conditions under which certain actions are potentially optimal, and by presenting a polynomial dynamic programming algorithm that solves for the optimal action when the arm payoffs come from a discrete distribution.

Categories and Subject Descriptors

I.2.11 [Artificial Intelligence]: Distributed Artificial ...
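As a rough illustration of the kind of finite-horizon, backward-induction computation the abstract alludes to, the sketch below computes the optimal expected joint payoff in a toy two-agent bandit with discrete payoffs. It is not the paper's algorithm: the teammate model (greedily pulling the reachable arm with the highest observed empirical mean), the arm distributions, the horizon, and the observation rules are all illustrative assumptions. It only shows why discrete payoff distributions help: they keep the set of reachable observation states finite, so the value function can be memoized.

from functools import lru_cache

# Toy problem data (hypothetical): each arm has a discrete payoff distribution
# given as a list of (value, probability) pairs.
ARMS = {
    "a0": [(0, 0.5), (3, 0.5)],   # high-mean arm only the fully capable agent can pull
    "a1": [(0, 0.5), (2, 0.5)],   # arms the limited teammate can also pull
    "a2": [(1, 1.0)],
}
TEAMMATE_ARMS = ("a1", "a2")      # the teammate's limited action set
HORIZON = 4                       # number of remaining rounds

def teammate_choice(obs):
    """Assumed fixed teammate behavior: greedily pull the reachable arm with the
    highest empirical mean of the payoffs observed so far (ties -> lower index).
    `obs` maps arm -> (count, total)."""
    def mean(arm):
        c, s = obs[arm]
        return s / c if c else float("inf")   # optimistic for unseen arms
    return max(TEAMMATE_ARMS, key=lambda a: (mean(a), -TEAMMATE_ARMS.index(a)))

@lru_cache(maxsize=None)
def value(rounds_left, obs_key):
    """Backward induction: maximal expected joint payoff over the remaining
    rounds, given the teammate's observation state. Discrete payoffs keep the
    set of reachable observation states finite, so memoization applies."""
    if rounds_left == 0:
        return 0.0
    obs = dict(obs_key)
    mate_arm = teammate_choice(obs)
    best = float("-inf")
    for my_arm in ARMS:                               # our agent's action choice
        exp = 0.0
        for x, px in ARMS[my_arm]:                    # our payoff draw
            for y, py in ARMS[mate_arm]:              # teammate's payoff draw
                nxt = dict(obs)
                for arm, payoff in ((my_arm, x), (mate_arm, y)):
                    if arm in nxt:                    # teammate observes reachable-arm pulls
                        c, s = nxt[arm]
                        nxt[arm] = (c + 1, s + payoff)
                nxt_key = tuple(sorted(nxt.items()))
                exp += px * py * (x + y + value(rounds_left - 1, nxt_key))
        best = max(best, exp)
    return best

if __name__ == "__main__":
    start = tuple(sorted({a: (0, 0) for a in TEAMMATE_ARMS}.items()))
    print(f"Optimal expected joint payoff over {HORIZON} rounds:",
          round(value(HORIZON, start), 3))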