In this article, a neural network architecture is presented that builds a soft segmentation of a two-dimensional input. The architecture is applied to position evaluation in the game of Go. It is trained by self-play and temporal difference learning, combined with a rich two-dimensional reinforcement signal. Two experiments are performed: one uses the raw board position as input, the other applies simple preprocessing to the board. The second network achieves a playing strength comparable to that of a 13-kyu Go program.