Coordinating agents in a complex environment is a hard problem, and it becomes even harder when certain characteristics of the tasks, such as the required number of agents, are unknown. In these settings, agents must not only coordinate with one another on the different tasks, but also learn how many agents each task requires. To address this problem, we present in this paper a selective perception reinforcement learning algorithm that enables agents to learn how many agents should coordinate their efforts on a given task. Even though the task description contains continuous variables, agents using our algorithm are able to learn their expected reward as a function of the task description and the number of agents. The results, obtained in the RoboCupRescue simulation environment, show an improvement in the agents' overall performance.
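To make the core idea concrete, the sketch below shows one plausible way an agent could estimate expected reward as a function of a continuous task description and a group size, by discretizing the continuous features into coarse bins as a stand-in for the paper's selective perception mechanism. This is a minimal illustration under stated assumptions, not the paper's actual algorithm; all class names, parameters, and the example task features are hypothetical.

```python
from collections import defaultdict

class ExpectedRewardLearner:
    """Illustrative sketch only: estimates the expected reward of assigning
    n agents to a task whose description contains continuous variables.
    Continuous features are discretized into coarse bins, a simplified
    stand-in for the selective perception mechanism described in the paper.
    All names and parameters here are hypothetical."""

    def __init__(self, bins_per_feature=4, learning_rate=0.1):
        self.bins = bins_per_feature
        self.alpha = learning_rate
        # Maps (discretized task description, number of agents) -> reward estimate.
        self.q = defaultdict(float)

    def _discretize(self, features):
        # Assumes each continuous feature has been normalized to [0, 1].
        return tuple(min(int(f * self.bins), self.bins - 1) for f in features)

    def update(self, features, n_agents, reward):
        # Incremental running estimate of the expected reward for this
        # (task description, group size) pair.
        key = (self._discretize(features), n_agents)
        self.q[key] += self.alpha * (reward - self.q[key])

    def best_group_size(self, features, candidate_sizes):
        # Choose the number of agents with the highest learned reward estimate.
        state = self._discretize(features)
        return max(candidate_sizes, key=lambda n: self.q[(state, n)])

# Hypothetical usage: a rescue task described by two normalized continuous
# features (e.g., fire intensity and building area).
learner = ExpectedRewardLearner()
task = (0.7, 0.2)
learner.update(task, n_agents=3, reward=5.0)
print(learner.best_group_size(task, candidate_sizes=range(1, 6)))
```

Under these assumptions, each agent could consult such an estimator to decide how many agents a task is worth, which is the decision the abstract attributes to the proposed algorithm.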