We consider a two-player zero-sum game given by a Markov chain over a finite set of states K and a family of zero-sum matrix games (G_k)_{k∈K}. The sequence of states follows the Markov chain. At the beginning of each stage, only player 1 is informed of the current state k; then the game G_k is played, the actions played are observed by both players, and the play proceeds to the next stage. We call such a game a Markov chain game with lack of information on one side. This model generalizes the model, introduced by Aumann and Maschler in the sixties, of zero-sum repeated games with lack of information on one side (which corresponds to the case of a constant Markov chain). We generalize the proof of Aumann and Maschler and, from the definition and the study of appropriate "non-revealing" auxiliary games with infinitely many stages, show the existence of the uniform value. An important difference with Aumann and Maschler's model is that here, the notions for player 1 of using the information...