In our research we study rational agents that learn how to choose the best conditional partial plan in any situation. The agent uses an incomplete symbolic inference engine, employing Active Logic, to reason about the consequences of performing actions, including information-providing ones. A simple planner creates conditional partial plans, which do not necessarily lead all the way to the goal. Finally, a learning module based on Inductive Logic Programming mechanisms supplies knowledge about which of these plans should be executed. We present results of learning to distinguish “bad” plans early in the reasoning process, before too many resources are wasted on considering them. We show that non-trivial transformations of the agent’s knowledge are needed before learning can succeed, but argue that learning can greatly improve the agent’s performance.

KEY WORDS
Intelligent Agents, Machine Learning, Logic Programming, Planning under Uncertain...