We investigate a form of modular neural network for classification in which (a) pre-separated input vectors enter the specialist (expert) networks, (b) the specialist networks are self-organized (of radial-basis-function or self-targeted feedforward type), and (c) a single-layer network fuses (integrates) the specialists' outputs. When the modular architecture is applied to spatiotemporal sequences, the Specialist Networks are recurrent; specifically, we use the Input Recurrent type. The Specialist Networks (SNs) learn to divide their input space into a number of equivalence classes defined by self-organized clustering and learning that exploit the statistical properties of the input domain. Once the specialists have settled in their training, the Fusion Network is trained by any supervised method to map to the semantic classes. We discuss how this architecture and its training differ from both the hierarchical mixture of experts (HME) and stacked generalization. Be...
Sylvian R. Ray, William H. Hsu
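
As a rough sketch of the two-stage training described in the abstract (self-organized specialists first, then a supervised single-layer fusion network), the following Python fragment places RBF centers by k-means over pre-separated input slices and then trains a softmax fusion layer on the frozen specialists' activations. This is a minimal illustration under assumed details, not the authors' exact architecture: the function names, the toy data, the choice of k-means, and the fixed RBF width are all illustrative, and the paper's Input Recurrent specialists for sequences are omitted.

    # Minimal sketch (illustrative, not the paper's implementation):
    # two self-organized RBF "specialists", each seeing a pre-separated
    # slice of the input, fused by one supervised softmax layer.
    import numpy as np

    rng = np.random.default_rng(0)

    def kmeans(X, k, iters=50):
        """Self-organized clustering: plain k-means to place RBF centers."""
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            labels = d2.argmin(1)
            for j in range(k):
                if (labels == j).any():
                    centers[j] = X[labels == j].mean(0)
        return centers

    def rbf_activations(X, centers, width):
        """Specialist output: Gaussian activation of each equivalence class."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    # Toy data: 2 classes; each input vector is pre-separated into two
    # slices, one per specialist (columns 0-1 and 2-3, purely for illustration).
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 2] > 0).astype(int)

    slices = [slice(0, 2), slice(2, 4)]               # pre-separated inputs
    centers = [kmeans(X[:, s], k=4) for s in slices]  # unsupervised stage

    # Stage 2: train the single-layer fusion net (softmax regression) on the
    # concatenated specialist activations, with the specialists frozen.
    feats = [rbf_activations(X[:, s], c, width=1.0) for s, c in zip(slices, centers)]
    H = np.hstack(feats + [np.ones((len(X), 1))])     # concat + bias column
    W = np.zeros((H.shape[1], 2))
    for _ in range(500):                              # simple gradient descent
        logits = H @ W
        p = np.exp(logits - logits.max(1, keepdims=True))
        p /= p.sum(1, keepdims=True)
        W -= 0.5 * (H.T @ (p - np.eye(2)[y]) / len(y))

    print("train accuracy:", ((H @ W).argmax(1) == y).mean())

The point of the sketch is the division of labor the abstract emphasizes: the clustering stage never sees class labels, and the supervised stage never adjusts the specialists, which is what distinguishes this scheme from jointly trained HME-style gating.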