Abstract. We present a large-scale neuromorphic model based on integrate-and-fire (IF) neurons that analyses objects and their depth within a moving visual scene. A feature-based algorithm constructs a luminosity receptor field that serves as an artificial retina, in which the IF neurons act both as photoreceptors and as processing units. We show that the IF neurons can trace an object's path and depth using an adaptive time-window and Temporally Asymmetric Hebbian (TAH) training.
Zhijun Yang, Alan F. Murray
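The abstract combines two mechanisms: integrate-and-fire dynamics and a temporally asymmetric Hebbian (TAH, i.e. STDP-like) weight update. The sketch below is a minimal illustration of that combination, not the authors' model: a single leaky IF neuron driven by weighted input spikes, with potentiation when a presynaptic spike precedes the postsynaptic spike and depression when it follows. All parameter names and values (n_inputs, tau_m, tau_plus, a_plus, etc.) are illustrative assumptions; the paper's adaptive time-window and retinal receptor field are not reproduced here.

```python
import numpy as np

# Minimal sketch (assumed parameters, not the paper's implementation):
# a leaky integrate-and-fire neuron with a TAH / STDP-style weight update.

rng = np.random.default_rng(0)

n_inputs = 16          # assumed number of presynaptic "photoreceptor" inputs
dt = 1.0               # ms, simulation step
tau_m = 20.0           # ms, membrane time constant
v_thresh = 1.0         # firing threshold (arbitrary units)
v_reset = 0.0
tau_plus, tau_minus = 20.0, 20.0   # ms, widths of the asymmetric learning windows
a_plus, a_minus = 0.01, 0.012      # potentiation / depression amplitudes

w = rng.uniform(0.0, 0.1, n_inputs)        # synaptic weights
v = 0.0                                    # membrane potential
last_pre = np.full(n_inputs, -np.inf)      # last presynaptic spike times
last_post = -np.inf                        # last postsynaptic spike time

for step in range(1000):
    t = step * dt
    pre_spikes = rng.random(n_inputs) < 0.02          # Poisson-like input spikes
    last_pre[pre_spikes] = t

    # Leaky integration of the weighted input spikes.
    v += dt * (-v / tau_m) + w @ pre_spikes

    # Depression: presynaptic spike arriving after the last postsynaptic spike.
    w[pre_spikes] -= a_minus * np.exp(-(t - last_post) / tau_minus)

    if v >= v_thresh:                                  # postsynaptic spike
        v = v_reset
        last_post = t
        # Potentiation: presynaptic spikes that preceded this postsynaptic spike.
        w += a_plus * np.exp(-(t - last_pre) / tau_plus)

    np.clip(w, 0.0, 1.0, out=w)                        # keep weights bounded

print("final weights:", np.round(w, 3))
```

The exponential decay of the update with the pre/post spike interval is the temporally asymmetric element; how the paper adapts this window over time is described in the body of the text, not in this sketch.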