Albert L. Rothenstein, John K. Tsotsos

We present a biologically plausible computational model for solving the visual feature binding problem. The binding problem appears to arise from the distributed nature of visual processing in the primate brain and the gradual loss of spatial information along the processing hierarchy. The model relies on the reentrant connections so ubiquitous in the primate brain to recover spatial information, and thus allows features represented in different parts of the brain to be integrated into a unitary conscious percept. We demonstrate the ability of the Selective Tuning model of visual attention [1] to recover spatial information, and on this basis we propose a general solution to the feature binding problem. The solution is used to simulate the results of a recent neurophysiological study on the binding of motion and color, and the example demonstrates how the method handles the difficult case of transparency.