Abstract. Sigmoidal or radial transfer functions guarantee neither the best generalization nor fast learning of neural networks. Families of parameterized transfer functions provide flexible decision borders, and networks based on such transfer functions can be small and accurate. Several ways of using transfer functions of different types in neural models are discussed, including enhancement of input features, selection of functions from a fixed pool, optimization of the parameters of general-type functions, regularization of large networks with heterogeneous nodes, and constructive approaches. A new taxonomy of transfer functions is proposed, allowing known and new functions to be derived by additive or multiplicative combinations of activation and output functions.
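As a brief illustration of the taxonomy mentioned above, consider a transfer function written as the composition of an output function $o$ with an activation $a(\mathbf{x})$. The sketch below is a minimal example under assumed notation: the symbols $I$, $D$, $A_{+}$, $A_{\times}$, $\alpha$, $\beta$ are illustrative choices, not necessarily the paper's exact definitions.

% A minimal sketch: a transfer function T is the composition of an
% output function o with an activation a(x). The symbols I, D, A_+,
% A_x, alpha, beta are illustrative assumptions for this example.
\begin{align}
  I(\mathbf{x};\mathbf{w}) &= \mathbf{w}\cdot\mathbf{x} + \theta
      && \text{(inner-product activation)}\\
  D(\mathbf{x};\mathbf{t}) &= \lVert \mathbf{x}-\mathbf{t}\rVert
      && \text{(distance activation)}\\
  A_{+}(\mathbf{x}) &= \alpha\, I(\mathbf{x};\mathbf{w})
      + \beta\, D(\mathbf{x};\mathbf{t})
      && \text{(additive combination)}\\
  A_{\times}(\mathbf{x}) &= I(\mathbf{x};\mathbf{w})\,
      D(\mathbf{x};\mathbf{t})
      && \text{(multiplicative combination)}\\
  T(\mathbf{x}) &= o\bigl(a(\mathbf{x})\bigr),\qquad
      a \in \{I, D, A_{+}, A_{\times}\}
      && \text{(transfer function)}
\end{align}

For instance, taking $o = \sigma$ (a sigmoid) with $a = I$ recovers the standard sigmoidal neuron, while $o(u) = e^{-u^{2}}$ with $a = D$ recovers a Gaussian radial basis unit; combined activations such as $A_{+}$ or $A_{\times}$ then generate new transfer functions with more flexible decision borders.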