This paper addresses the inductive learning of recursive logical theories from a set of examples. This is a complex task in which the learning of one predicate definition must be interleaved with the learning of the others in order to discover predicate dependencies. To cope with this problem, we propose a variant of the separate-and-conquer strategy based on the parallel learning of different predicate definitions. To improve its efficiency, we investigate several optimization techniques and describe the solutions adopted. In particular, two caching strategies have been implemented and tested on document processing datasets. Experimental results are discussed and conclusions are drawn.
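To fix ideas, the following is a minimal Python sketch of the kind of parallel separate-and-conquer loop summarized above, with a memoized coverage test standing in for the caching strategies mentioned in the abstract. It is an illustration only, not the authors' implementation: all names (learn_one_clause, covers, the toy clause representation, and the example predicates even/odd) are assumptions introduced here.

```python
# Illustrative sketch (not the authors' system) of a parallel
# separate-and-conquer loop that interleaves the learning of several
# predicate definitions and memoizes coverage tests.
from typing import Dict, List, Set, Tuple

Example = Tuple[str, ...]          # e.g. ("even", "s(s(0))")
Clause = str                       # e.g. "even(s(s(X))) :- even(X)."

# Cache of coverage tests: coverage checking dominates clause evaluation,
# so results are stored and reused across candidates and learning rounds.
_coverage_cache: Dict[Tuple[Clause, Example], bool] = {}

def covers(clause: Clause, example: Example) -> bool:
    """Toy coverage test with memoization (a real system would call a
    coverage engine / theorem prover here)."""
    key = (clause, example)
    if key not in _coverage_cache:
        # Placeholder semantics: a ground fact covers exactly its own example.
        _coverage_cache[key] = clause == f"{example[0]}({','.join(example[1:])})."
    return _coverage_cache[key]

def learn_one_clause(pred: str, pos: Set[Example], neg: Set[Example],
                     theory: Dict[str, List[Clause]]) -> Clause:
    """Placeholder clause search: return a ground fact for one uncovered
    positive example (a real learner would search a refinement lattice and
    could reuse the partial definitions already present in `theory`)."""
    example = next(iter(pos))
    return f"{example[0]}({','.join(example[1:])})."

def learn_parallel(targets: List[str],
                   pos: Dict[str, Set[Example]],
                   neg: Dict[str, Set[Example]]) -> Dict[str, List[Clause]]:
    """Interleave separate-and-conquer rounds over all target predicates,
    so each new clause can depend on the others' partial definitions."""
    theory: Dict[str, List[Clause]] = {p: [] for p in targets}
    uncovered = {p: set(pos[p]) for p in targets}
    while any(uncovered.values()):
        for p in targets:
            if not uncovered[p]:
                continue
            clause = learn_one_clause(p, uncovered[p], neg[p], theory)
            theory[p].append(clause)
            uncovered[p] = {e for e in uncovered[p] if not covers(clause, e)}
    return theory

if __name__ == "__main__":
    pos = {"even": {("even", "0"), ("even", "s(s(0))")},
           "odd":  {("odd", "s(0)")}}
    neg = {"even": {("even", "s(0)")}, "odd": {("odd", "0")}}
    print(learn_parallel(["even", "odd"], pos, neg))
```

The point of the sketch is the control structure: one clause per predicate per round, so that partial definitions of mutually dependent predicates become available to each other, while the shared cache avoids repeating coverage tests across rounds.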