Acting in a dynamic environment is a complex task that requires several issues to be addressed in order to control the associated search complexity. In this paper, a life-cycle for implementing adaptive capabilities in intelligent agents is proposed, which integrates planning and learning within a hierarchical framework. The integration between planning and learning is promoted by an agent architecture explicitly designed to support abstraction. Planning is performed by adopting a hierarchical interleaved planning and execution approach. Learning is performed by applying a chunking technique to successful plans. A suitable feedforward neural network selects the relevant chunks used to identify new abstract operators. Because abstract operators depend on previously solved planning problems, each agent is able to develop its own abstract layer, thus promoting individual adaptation to the given environment.
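The chunking step described above can be illustrated with a minimal sketch. The representation below is an assumption (the paper's actual operator encoding is not given here): operators are modeled as STRIPS-like triples with precondition and add-effect sets, delete lists omitted for brevity, and a successful plan is collapsed into one abstract (macro) operator whose preconditions are the conditions no earlier step produced and whose effects are the conditions the plan adds.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    """Simplified STRIPS-like operator (illustrative; delete lists omitted)."""
    name: str
    preconditions: frozenset
    effects: frozenset

def chunk(plan):
    """Collapse a successful plan into a single abstract operator.

    Preconditions: conditions some step requires that no earlier step
    produced. Effects: the union of conditions the plan establishes.
    """
    produced, required = set(), set()
    for op in plan:
        required |= op.preconditions - produced
        produced |= op.effects
    return Operator(
        name="+".join(op.name for op in plan),
        preconditions=frozenset(required),
        effects=frozenset(produced),
    )

# Hypothetical blocks-world fragment used only for illustration.
pick = Operator("pick", frozenset({"clear(a)", "handempty"}),
                frozenset({"holding(a)"}))
place = Operator("place", frozenset({"holding(a)"}),
                 frozenset({"on(a,b)", "handempty"}))
macro = chunk([pick, place])
# macro.preconditions == {"clear(a)", "handempty"}: "holding(a)" is
# internal to the chunk, so it does not surface as a precondition.
```

In the full life-cycle, a neural network would then score candidate chunks, and only the selected ones would be promoted to abstract operators at the agent's abstract layer.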