This paper presents a self-organizing cognitive architecture, known as TD-FALCON, that learns to function through its interaction with the environment. TD-FALCON learns value functions over the state-action space using a temporal difference (TD) method. The learned value functions are then used to select optimal actions according to an action selection policy. We present a specific instance of TD-FALCON based on an ε-greedy action policy and a Q-learning value estimation formula. Experiments on a minefield navigation task and a minefield pursuit task show that TD-FALCON systems are able to adapt and perform well in a multi-agent environment without an explicit mechanism for collaboration.
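To make the value estimation and action selection concrete, the sketch below shows the standard ε-greedy Q-learning rule in generic form; the symbols α (learning rate), γ (discount factor), r (immediate reward), and s' (next state) are the usual conventions, and the exact formulation used within TD-FALCON may include additional terms beyond this basic rule.

\[
Q(s,a) \;\leftarrow\; Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]
\]

Under the ε-greedy policy, the agent selects the action with the highest Q-value with probability 1 - ε and a random action with probability ε, balancing exploitation of learned values against exploration of the state-action space.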