This paper proposes a novel hierarchical self-organizing associative memory architecture for machine learning. The architecture is characterized by sparse, local interconnections, self-organizing processing elements (PEs), and probabilistic synaptic transmission. Each PE in the network dynamically estimates its output value from the observed input data distribution and remembers the statistical correlations among its inputs. Both feedforward and feedback signal propagation are used to transfer signals and form associations: feedforward processing discovers relationships in the input patterns, while feedback processing makes associations and predicts missing signal values. Classification and image-recovery applications demonstrate the effectiveness of the proposed memory for both heteroassociative and autoassociative learning.
Janusz A. Starzyk, Haibo He, Yue Li
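To make the abstract's description of a processing element more concrete, the following is a minimal, hypothetical Python sketch, not the authors' implementation: a single PE accumulates co-occurrence statistics of its two binary inputs during a feedforward pass and then uses those remembered correlations in a feedback pass to predict a missing input value. All names (ProcessingElement, train, predict_missing) and the use of a simple joint count table are illustrative assumptions.

```python
import numpy as np

class ProcessingElement:
    """Toy PE: remembers correlations between two binary inputs (assumed design)."""

    def __init__(self):
        # Laplace-smoothed joint count table over the two binary inputs (a, b).
        self.counts = np.ones((2, 2))

    def train(self, a, b):
        """Feedforward pass: accumulate statistics of the observed input pair."""
        self.counts[a, b] += 1

    def output(self, a, b):
        """Feedforward output: estimated probability of the observed input pair."""
        return self.counts[a, b] / self.counts.sum()

    def predict_missing(self, a=None, b=None):
        """Feedback pass: associate a missing input from the remembered correlations."""
        if a is None and b is not None:
            return int(np.argmax(self.counts[:, b]))   # most likely a given b
        if b is None and a is not None:
            return int(np.argmax(self.counts[a, :]))   # most likely b given a
        raise ValueError("exactly one input must be missing")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pe = ProcessingElement()
    # Strongly correlated inputs: b equals a 90% of the time.
    for _ in range(1000):
        a = int(rng.integers(0, 2))
        b = a if rng.random() < 0.9 else 1 - a
        pe.train(a, b)
    print(pe.predict_missing(a=1))   # -> 1, recovered via the feedback association
```

In the paper's architecture such elements are arranged hierarchically with sparse, local connections and probabilistic transmission; the sketch only illustrates the feedforward-learning / feedback-association behavior of one element.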