Abstract— This paper proposes a dynamic programming algorithm for training functional networks. The algorithm treats each node as a state, and the training problem is formulated as finding the sequence of states that minimizes the sum of squared approximation errors. Each node is optimized with respect to its neuron functions and their estimated parameters. The dynamic programming algorithm searches for the best path from the final-layer nodes to the input layer that minimizes the optimization criterion. Finally, in a pruning stage, unused nodes are deleted. The output layer can be taken as a summation node built from a linearly independent family of functions, such as polynomial, exponential, or Fourier families. The algorithm is demonstrated on two examples and compared with other algorithms commonly used in the computer science and statistics communities.
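As a minimal illustration of one step described above, the sketch below fits a single summation node by least squares over several candidate linearly independent families and keeps the one with the smallest sum of squared errors. This is a simplified assumption-laden sketch, not the paper's full dynamic programming algorithm; all function and variable names are illustrative.

```python
# Sketch: choose the linearly independent family (polynomial, exponential, ...)
# whose least-squares fit at a summation node gives the smallest sum of
# squared errors (SSE). Names and families here are hypothetical examples.
import numpy as np

def fit_family(X_basis, y):
    """Least-squares fit of the basis matrix; returns coefficients and SSE."""
    coef, *_ = np.linalg.lstsq(X_basis, y, rcond=None)
    resid = y - X_basis @ coef
    return coef, float(resid @ resid)

def select_best_family(x, y, families):
    """Evaluate each candidate basis family; return (name, coef, sse) with minimal SSE."""
    best = None
    for name, basis in families.items():
        X = np.column_stack([f(x) for f in basis])
        coef, sse = fit_family(X, y)
        if best is None or sse < best[2]:
            best = (name, coef, sse)
    return best

# Toy data: the target is exactly quadratic, so the polynomial family should win.
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x + 0.5 * x**2
families = {
    "polynomial":  [lambda t: np.ones_like(t), lambda t: t, lambda t: t**2],
    "exponential": [lambda t: np.ones_like(t), lambda t: np.exp(t)],
}
name, coef, sse = select_best_family(x, y, families)
```

In the full algorithm this per-node selection would be embedded in the dynamic programming search over layers, with the pruning stage removing nodes that the best path never uses.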
Emad A. El-Sebakhy, Salahadin Mohammed, Moustafa E