The generalization of policies in reinforcement learning is a central issue, both for the theoretical models and for their applicability in practice. However, generalizing from a set of examples, or searching for regularities, is a problem that has already been studied intensively in machine learning. Our work uses techniques in which generalizations are constrained by a language bias in order to group together similar states; such generalizations rely principally on the properties of concept lattices. To guide the possible groupings of similar states of the environment, we propose a general algebraic framework that treats the generalization of policies as a set partition of the states and uses a language bias as a priori knowledge. We illustrate the approach by proposing, and experimenting with, a bottom-up algorithm.
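To make the core idea of generalizing a policy through a set partition of the states concrete, here is a minimal sketch, not the paper's framework: tabular Q-learning on a toy 4x4 grid world in which a hand-chosen partition function stands in for a language bias, mapping raw states to blocks so that all states in a block share Q-values and thus a common policy. The grid world, the `partition` function, and all parameter values are hypothetical illustrations.

```python
# Minimal sketch (assumption: a toy 4x4 grid world, goal at (3, 3)).
# The partition function plays the role of a language bias: Q-values are
# keyed by (block, action) rather than (state, action), so the learned
# policy generalizes across all states grouped into the same block.
import random
from collections import defaultdict

def partition(state):
    """Hypothetical bias: group grid cells by the 2x2 quadrant they lie in."""
    x, y = state
    return (x // 2, y // 2)

def q_learning_over_partition(episodes=500, alpha=0.1, gamma=0.9, eps=0.1):
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    Q = defaultdict(float)  # keyed by (block, action): shared across a block
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(50):
            block = partition(state)
            # epsilon-greedy action selection over the block's Q-values
            if random.random() < eps:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(block, act)])
            nxt = (min(max(state[0] + a[0], 0), 3),
                   min(max(state[1] + a[1], 0), 3))
            reward = 1.0 if nxt == (3, 3) else 0.0
            next_block = partition(nxt)
            best_next = max(Q[(next_block, b)] for b in actions)
            # standard Q-learning update, applied to the whole block at once
            Q[(block, a)] += alpha * (reward + gamma * best_next - Q[(block, a)])
            state = nxt
            if reward:
                break
    return Q
```

The design point this sketch isolates is that coarsening the partition trades off representation size against the risk of grouping states that require different actions; choosing a good partition is exactly what a language bias over state descriptions is meant to guide.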