Q-learning, one of the most widely used reinforcement learning methods, normally requires well-defined quantized state and action spaces to converge. This makes it difficult to apply to real robot tasks, both because the learned behavior performs poorly and because it raises the further problem of state space construction. We previously proposed Continuous Valued Q-learning for real robot applications, which calculates contribution values to estimate a continuous action value so that motion becomes smooth and effective [1]. This paper proposes an improvement on that work which achieves better performance of the desired behavior than the previous method, even with roughly quantized state and action spaces. To show the validity of the method, we apply it to a vision-guided mobile robot whose task is to chase a ball.
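
As a rough illustration (a minimal sketch under our own assumptions, not necessarily the exact formulation in [1]), the contribution values can be viewed as normalized interpolation weights over the quantized states and actions, so that a continuous-valued Q-function and a continuous action can be obtained from a discrete Q-table as

\[
Q(x,u) \approx \sum_{i}\sum_{j} w_i(x)\, w_j(u)\, Q(s_i, a_j),
\qquad \sum_i w_i(x) = 1, \quad \sum_j w_j(u) = 1,
\]
\[
u(x) = \sum_i w_i(x)\, a^{*}_i,
\qquad a^{*}_i = \arg\max_{a_j} Q(s_i, a_j),
\]

where \(s_i\) and \(a_j\) are the representative (quantized) states and actions, and \(w_i(x)\) and \(w_j(u)\) are the contribution values of the continuous state \(x\) and continuous action \(u\) with respect to their neighboring representatives. The specific weighting scheme and update rule here are assumptions for illustration only.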