In this paper, a new approach to approximation problems involving only a few input and output parameters is presented and compared to traditional Backpropagation Neural Networks (BPNs). The basic model is a Tensor Product Bernstein Polynomial (TPBP), for which suitable control points need to be found. It is shown that a TPBP can also be interpreted as a special class of feed-forward neural network in which control point coordinates are represented by input weights. Although the optimal control points of a TPBP, i.e. those leading to the smallest possible approximation error on the training data, can be determined by the Method of Least Squares (MLS), this approach generalizes poorly. Instead, the usage of a (
Günther R. Raidl, Gabriele Kodydek
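For reference, the TPBP model follows the standard definition of a tensor product Bernstein polynomial; the symbols below (m inputs, degrees n_j, and control point coordinates b) are not fixed by the abstract and are used here only for illustration:

p(x_1, \dots, x_m) = \sum_{i_1=0}^{n_1} \cdots \sum_{i_m=0}^{n_m} b_{i_1 \dots i_m} \prod_{j=1}^{m} B_{i_j}^{n_j}(x_j),
\qquad
B_i^n(x) = \binom{n}{i} \, x^i (1 - x)^{n - i}, \qquad x_j \in [0, 1].

Since the TPBP is linear in its control point coordinates, the MLS fit mentioned above reduces to an ordinary linear least-squares problem. The following minimal Python sketch illustrates this for m = 2; the function names, degrees, and toy data are hypothetical and not taken from the paper:

import numpy as np
from math import comb

def bernstein(n, i, x):
    # Bernstein basis polynomial B_i^n(x) on [0, 1]
    return comb(n, i) * x**i * (1.0 - x)**(n - i)

def design_matrix(X, n1, n2):
    # One column per product of 1D Bernstein bases, one row per sample
    return np.column_stack([
        bernstein(n1, i, X[:, 0]) * bernstein(n2, j, X[:, 1])
        for i in range(n1 + 1) for j in range(n2 + 1)
    ])

# Toy data: noisy samples of an unknown function on [0, 1]^2
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = np.sin(3.0 * X[:, 0]) * X[:, 1] + 0.01 * rng.standard_normal(200)

# MLS solution: control point coordinates b minimizing ||A b - y||_2
A = design_matrix(X, n1=3, n2=3)
b, *_ = np.linalg.lstsq(A, y, rcond=None)
print(b.reshape(4, 4))  # (n1+1) x (n2+1) grid of fitted control points

Because the fit minimizes the error only on the given samples, such a solution can interpolate noise, which is consistent with the poor generalization attributed to the MLS approach above.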