Abstract— Traditional approaches to integrating knowledge into neural networks have focused mainly on supervised learning. This paper shows how a family of self-organizing neural models, known as Fusion Architecture for Learning, COgnition and Navigation (FALCON), can incorporate a priori knowledge and perform knowledge refinement and expansion through reinforcement learning. Symbolic rules are formulated based on pre-existing know-how and inserted into FALCON as a priori knowledge. The availability of such knowledge enables FALCON to start performing earlier in the initial learning trials. Through a temporal-difference (TD) learning method, the inserted rules are refined and expanded according to the evaluative feedback signals received from the environment. Our experimental results on a minefield navigation task show that FALCON learns much faster and attains a higher level of performance earlier when inserted with appropriate a priori knowledge.
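For reference, a standard TD update of the kind alluded to above takes the following form; this is a generic Q-learning-style rule given only as an illustration, and the exact variant employed by FALCON may differ:

$$Q(s, a) \leftarrow Q(s, a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]$$

where $Q(s, a)$ is the estimated value of taking action $a$ in state $s$, $r$ is the immediate reward received from the environment, $s'$ is the resulting state, $\alpha$ is the learning rate, and $\gamma$ is the discount factor. Under such a rule, the value estimates attached to inserted rules are adjusted toward the evaluative feedback, allowing the a priori knowledge to be refined over successive trials.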