— In this paper, we propose a new supervised learning method in which information in an intermediate layer is controlled by an associated cost, while errors between targets and outputs are minimized in an output layer. In the intermediate layer, competition is realized by maximizing mutual information between input patterns and competitive units with Gaussian functions. The process of information maximization is controlled by changing the cost associated with information. Thus, we can flexibly control information maximization and obtain internal representations appropriate to given problems. The new method can be considered a hybrid model similar to the counter-propagation model, in which a competitive layer is combined with an output layer. In addition, it can be considered a new approach to radial-basis function networks in which the centers of classes are determined by information maximization. We applied our method to an artificial data problem, the...
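To make the two stages concrete, the sketch below is a minimal illustration under assumed forms, not the paper's actual procedure: the competitive probabilities p(j|x) are taken as normalized Gaussian activations, the intermediate layer maximizes mutual information between inputs and competitive units minus a cost term (here a probability-weighted quadratic distance, one plausible choice) scaled by an assumed control parameter beta, and the output layer is fitted by least squares, as in a radial-basis-function network whose centers come from the information-maximizing layer. The symbols sigma, beta, the learning rate, and the numerical gradient-ascent update are illustrative assumptions.

```python
import numpy as np

def gaussian_outputs(X, centers, sigma):
    # Gaussian competitive units: v_j(x) = exp(-||x - w_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def competitive_probabilities(X, centers, sigma):
    # p(j | x^s): normalized Gaussian activations over the competitive units
    v = gaussian_outputs(X, centers, sigma)
    return v / v.sum(axis=1, keepdims=True)

def mutual_information(p_j_given_s):
    # I = (1/S) sum_s sum_j p(j|s) log( p(j|s) / p(j) ), with uniform p(s)
    S = p_j_given_s.shape[0]
    p_j = p_j_given_s.mean(axis=0)
    ratio = np.clip(p_j_given_s / p_j[None, :], 1e-12, None)
    return (p_j_given_s * np.log(ratio)).sum() / S

def cost(X, centers, p_j_given_s):
    # assumed cost: average squared distance between inputs and centers,
    # weighted by the competitive probabilities
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return (p_j_given_s * d2).sum() / X.shape[0]

def objective(X, centers, sigma, beta):
    # information controlled by its associated cost: maximize I - beta * C
    p = competitive_probabilities(X, centers, sigma)
    return mutual_information(p) - beta * cost(X, centers, p)

def fit_output_layer(X, T, centers, sigma):
    # output layer: least-squares weights mapping competitive activities to targets
    H = competitive_probabilities(X, centers, sigma)
    H = np.hstack([H, np.ones((H.shape[0], 1))])  # bias column
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W

# toy usage (assumed setup): adapt the centers by numerical gradient ascent
# on the cost-controlled information objective, then fit the output layer
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.3, (50, 2)), rng.normal(1.0, 0.3, (50, 2))])
T = np.vstack([np.zeros((50, 1)), np.ones((50, 1))])
centers = rng.normal(0.0, 0.5, (2, 2))
sigma, beta, lr = 0.5, 0.1, 0.05  # illustrative values
for _ in range(200):
    grad = np.zeros_like(centers)
    for idx in np.ndindex(centers.shape):
        eps = np.zeros_like(centers)
        eps[idx] = 1e-5
        grad[idx] = (objective(X, centers + eps, sigma, beta)
                     - objective(X, centers - eps, sigma, beta)) / 2e-5
    centers += lr * grad
W = fit_output_layer(X, T, centers, sigma)
```

Changing beta in this sketch plays the role described above: a small cost weight lets mutual information dominate and sharpens the competition, while a larger weight pulls the centers toward the data and yields smoother internal representations.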