A central issue in relational learning is the choice of an appropriate bias for limiting first-order induction. The purpose of this study is to circumvent this issue within a uniform framework inspired by the windowing paradigm. A bias window is a restricted subclass of the relational space, determined by a set of parameters. The idea is first to learn a theory in a small window, and then to adjust the window iteratively in order to find the optimal bias from which to choose the final theory. To this end, our model integrates a logical notion of window-based induction, a learning algorithm that implements this mechanism, and a windowing technique that monitors the learning process using a metric-based criterion. Experiments on the Mutagenesis dataset show that, after a period of underfitting, windowing converges on hypotheses that are stable and highly effective.
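To make the iterative window-adjustment loop concrete, the following is a minimal Python sketch of the mechanism the abstract describes. All names here (`window_based_induction`, `induce_theory`, `score`, `expand_window`, `patience`) are hypothetical illustrations introduced for this sketch, not the paper's actual algorithm or API; the induction and scoring steps are supplied as callables, since in practice they would be realized by an ILP learner and the metric-based criterion.

```python
# Illustrative sketch only: learn in a small bias window first, then widen
# the window until the metric-based criterion stops improving. Function
# names are hypothetical placeholders, not the paper's implementation.

from typing import Any, Callable, Optional


def window_based_induction(
    initial_window: Any,
    induce_theory: Callable[[Any], Any],            # learn a theory inside the current window
    score: Callable[[Any], float],                  # metric-based quality criterion
    expand_window: Callable[[Any], Optional[Any]],  # enlarge the window; None when exhausted
    patience: int = 2,                              # tolerated non-improving expansions
) -> Any:
    """Return the best theory found while iteratively widening the bias window."""
    window = initial_window
    best_theory = induce_theory(window)
    best_score = score(best_theory)
    stalls = 0
    while stalls < patience:
        window = expand_window(window)
        if window is None:                # no larger window available in the bias space
            break
        theory = induce_theory(window)
        current = score(theory)
        if current > best_score:          # wider bias helped: small windows were underfitting
            best_theory, best_score = theory, current
            stalls = 0
        else:                             # no improvement: evidence of convergence
            stalls += 1
    return best_theory


# Toy usage: the "window" is an integer depth bound and the score peaks at 3,
# mimicking underfitting at small windows followed by convergence.
if __name__ == "__main__":
    best = window_based_induction(
        initial_window=1,
        induce_theory=lambda w: w,
        score=lambda t: -(t - 3) ** 2,
        expand_window=lambda w: w + 1 if w < 10 else None,
    )
    print(best)  # -> 3
```

Under these assumptions, the loop makes the underfitting-then-convergence behaviour reported in the experiments visible: scores improve while the window is still too restrictive, then plateau once further enlargement no longer pays off.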