TD-FALCON (Temporal Difference - Fusion Architecture for Learning, COgnition, and Navigation) is a class of self-organizing neural networks that incorporates Temporal Difference (TD) methods for real-time reinforcement learning. In this paper, we present two strategies, namely policy sharing and a neighboring-agent mechanism, to further improve the learning efficiency of TD-FALCON in complex multi-agent domains. Through experiments on a traffic control domain and a herding task, we demonstrate that these strategies enable TD-FALCON to remain functional and adaptable in complex multi-agent settings.
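For concreteness, below is a minimal sketch of the kind of bounded TD (Q-learning) value update that TD-FALCON-style agents build on. The tabular Q dictionary stands in for the FALCON network's learned value codes, and the state encoding, action set, and parameter values are illustrative assumptions, not the paper's actual settings.

    # Minimal sketch of a bounded Q-learning TD update (illustrative; a
    # Python dict stands in for the FALCON network's learned value codes).

    ALPHA, GAMMA = 0.5, 0.9          # learning rate and discount factor (assumed values)
    ACTIONS = ["up", "down", "left", "right"]
    Q = {}                           # (state, action) -> estimated value in [0, 1]

    def q(state, action):
        # Unseen state-action pairs start at a neutral value of 0.5.
        return Q.get((state, action), 0.5)

    def td_update(state, action, reward, next_state):
        # Standard TD error: reward plus discounted best next value,
        # minus the current estimate.
        td_err = reward + GAMMA * max(q(next_state, a) for a in ACTIONS) - q(state, action)
        # Bounded update: scaling by (1 - Q) shrinks the step as Q nears 1,
        # keeping value estimates inside [0, 1].
        Q[(state, action)] = q(state, action) + ALPHA * td_err * (1 - q(state, action))

    # Example: one learning step after taking "right" in state (0, 0),
    # receiving reward 1.0, and arriving in state (0, 1).
    td_update((0, 0), "right", 1.0, (0, 1))
    print(q((0, 0), "right"))  # value moves toward the reward, stays within [0, 1]

The bounded form of the update is what allows the learned values to remain within the unit interval that self-organizing networks of this kind operate on.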