This paper presents a method for learning rhythmic-walking parameters to generate purposive humanoid motions. The controller consists of two layers. The lower layer realizes rhythmic walking by adjusting the phase speed along the desired trajectory according to sensory information. The upper layer learns (1) the feasible parameter sets that enable stable walking, (2) the causal relationship between the walking parameters given to the lower-layer controller and the resulting changes in sensory information, and (3) which feasible rhythmic-walking parameters to select, through reinforcement learning, so that the robot can reach the goal based on visual information. Experimental results show that a real humanoid learns to approach the ball and shoot it into the goal in the context of the RoboCupSoccer competition; remaining issues are then discussed.
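The upper layer's reinforcement-learning step (item 3) can be pictured as a tabular Q-learning loop in which discretized visual states index a choice among feasible walking-parameter sets. The following is a minimal sketch under assumed details: the state names (coarse ball positions), the action names (standing in for walking-parameter sets), the toy transition model, and the learning constants are all illustrative, not taken from the paper.

```python
import random

# Hypothetical discretized visual states: coarse ball position in the image.
STATES = ["ball-left", "ball-center", "ball-right"]
# Hypothetical feasible walking-parameter sets passed to the lower-layer
# controller; the names are illustrative only.
ACTIONS = ["turn-left", "step-forward", "turn-right"]

ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2  # assumed learning constants

def init_q():
    # One Q-value per (visual state, walking-parameter set) pair.
    return {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(q, state, rng):
    # Epsilon-greedy selection over the feasible parameter sets.
    if rng.random() < EPS:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(q, s, a, reward, s_next):
    # Standard one-step Q-learning update.
    best_next = max(q[(s_next, a2)] for a2 in ACTIONS)
    q[(s, a)] += ALPHA * (reward + GAMMA * best_next - q[(s, a)])

def simulated_step(state, action):
    # Toy stand-in for the learned causal model (item 2): turning toward
    # the ball centers it; walking forward while centered is rewarded.
    if state == "ball-left":
        return ("ball-center", 0.0) if action == "turn-left" else ("ball-left", 0.0)
    if state == "ball-right":
        return ("ball-center", 0.0) if action == "turn-right" else ("ball-right", 0.0)
    if action == "step-forward":
        return "ball-center", 1.0
    return ("ball-left", 0.0) if action == "turn-left" else ("ball-right", 0.0)

rng = random.Random(0)
q = init_q()
for _ in range(300):            # episodes
    s = rng.choice(STATES)
    for _ in range(20):         # steps per episode
        a = choose_action(q, s, rng)
        s2, r = simulated_step(s, a)
        update(q, s, a, r, s2)
        s = s2

# Greedy policy after learning: which parameter set each visual state selects.
greedy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(greedy)
```

In this toy setting the learned greedy policy turns toward the ball until it is centered and then walks forward; on the real robot the reward would instead come from visual evidence of reaching the ball or scoring.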