Machine Learning systems are often distinguished by the kind of representation they use, which is typically either propositional or first-order logic. The framework that adopts first-order logic as the representation language for both the learned theories and the observations is known as Inductive Logic Programming (ILP). It has been widely shown in the literature that ILP systems have limitations in dealing with large amounts of numerical information, which is, however, a characteristic of most real-world application domains. In this work we present a strategy for handling such information in an incremental relational learning setting, along with its integration with classical symbolic approaches to theory revision. Experiments were carried out on a real-world domain, and a comparison with a state-of-the-art system is reported.