Abstract— This paper shows that the distributed representation learned by Learning Vector Quantization (LVQ) enables reinforcement learning methods to cope with a large decision search space, defined in terms of equivalence classes of input patterns such as those found in the game of Go. In particular, this paper describes S[arsa]LVQ, a novel reinforcement learning algorithm, and shows its feasibility for pattern-based inference in Go. Because the distributed LVQ representation corresponds to a (quantized) codebook of compressed, generalized pattern templates, the state-space requirements of online reinforcement learning methods are significantly reduced, decreasing the complexity of the decision space and consequently improving playing performance from pattern-based inference alone. A novel exploration strategy for reinforcement learning, based on tabu search, is also introduced. Experimental results are reported against Minimax and Wally.

Keywords– game playing, reinforcement learning
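To make the two components named in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of tabular Sarsa operating over LVQ codebook indices, with a tabu-list exploration strategy in place of epsilon-greedy. All identifiers (`LVQCodebook`, `TabuSarsa`), the feature dimensionality, and all hyperparameters are illustrative assumptions; a real agent would extract pattern features from Go board positions.

```python
import numpy as np
from collections import deque

class LVQCodebook:
    """Maps raw pattern vectors to the index of the nearest prototype,
    so each prototype acts as a compressed, generalized pattern template."""
    def __init__(self, n_prototypes, dim, lr=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.prototypes = self.rng.normal(size=(n_prototypes, dim))
        self.lr = lr

    def quantize(self, x):
        # The nearest-prototype index serves as the (quantized) state id,
        # collapsing equivalent input patterns into one state.
        d = np.linalg.norm(self.prototypes - x, axis=1)
        return int(np.argmin(d))

    def adapt(self, x, reward):
        # LVQ-style update: attract the winning prototype toward patterns
        # that led to positive reward, repel it otherwise (an assumption
        # about how reward could drive codebook refinement).
        k = self.quantize(x)
        sign = 1.0 if reward > 0 else -1.0
        self.prototypes[k] += sign * self.lr * (x - self.prototypes[k])

class TabuSarsa:
    """Tabular Sarsa whose states are LVQ codebook indices; exploration
    skips recently taken (state, action) pairs held in a tabu list."""
    def __init__(self, codebook, n_actions, alpha=0.1, gamma=0.95, tabu_len=20):
        self.q = np.zeros((codebook.prototypes.shape[0], n_actions))
        self.alpha, self.gamma = alpha, gamma
        self.tabu = deque(maxlen=tabu_len)

    def select_action(self, s):
        # Greedy over Q, but tabu actions are skipped; if every action is
        # tabu, fall back to the plain greedy choice (aspiration criterion).
        order = np.argsort(self.q[s])[::-1]
        for a in order:
            if (s, int(a)) not in self.tabu:
                self.tabu.append((s, int(a)))
                return int(a)
        return int(order[0])

    def update(self, s, a, r, s2, a2):
        # Standard Sarsa backup on the quantized state indices.
        td = r + self.gamma * self.q[s2, a2] - self.q[s, a]
        self.q[s, a] += self.alpha * td

# Toy usage: random vectors stand in for board-pattern features, and a
# dummy reward replaces the game outcome.
cb = LVQCodebook(n_prototypes=64, dim=9)
agent = TabuSarsa(cb, n_actions=5)
rng = np.random.default_rng(1)
x = rng.normal(size=9)
s = cb.quantize(x)
a = agent.select_action(s)
for _ in range(100):
    r = float(rng.normal())          # dummy reward signal
    x2 = rng.normal(size=9)
    s2 = cb.quantize(x2)
    a2 = agent.select_action(s2)
    agent.update(s, a, r, s2, a2)
    cb.adapt(x, r)                   # refine the codebook online
    s, a, x = s2, a2, x2
```

The design point the sketch illustrates is the one the abstract claims: the Q-table is indexed by codebook entries rather than raw board configurations, so its size is fixed by the number of prototypes instead of growing with the pattern space.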