Max and min operations have interesting properties that facilitate the exchange of information between the symbolic and real-valued domains. As such, neural networks that employ max-min activation functions have been a subject of interest in recent years. Since max-min functions are not strictly differentiable, we propose a mathematically sound learning method that uses Fourier convergence analysis of side-derivatives to derive a gradient descent technique for max-min error functions. This method is applied to a "typical" fuzzy-neural network model employing max-min activation functions. We show how this network can be trained to perform function approximation; its performance was found to be better than that of a conventional feedforward neural network.
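
To make the side-derivative idea concrete, the following is a minimal, hypothetical sketch (not the paper's exact model or algorithm): a single max-min unit y = max_j min(w_j, x_j), trained by gradient descent in which the non-differentiable max and min are handled through one-sided derivatives (a subgradient at ties). All names and the toy network form here are illustrative assumptions.

    import numpy as np

    # Hypothetical sketch: a single max-min unit y = max_j min(w_j, x_j),
    # trained on squared error using one-sided derivatives of max/min.
    # The unit and training loop are assumptions for illustration only.

    def forward(w, x):
        """Max-min composition: y = max_j min(w_j, x_j)."""
        return np.max(np.minimum(w, x))

    def side_grad(w, x):
        """One-sided derivative of y w.r.t. w (a subgradient at ties)."""
        m = np.minimum(w, x)
        g = np.zeros_like(w)
        j = np.argmax(m)          # term that attains the outer max
        if w[j] <= x[j]:          # inner min attained by w_j, so dy/dw_j = 1
            g[j] = 1.0
        return g

    def train(X, t, lr=0.05, epochs=200, seed=0):
        """Gradient descent on 0.5 * (y - t)^2 using the side-derivative."""
        rng = np.random.default_rng(seed)
        w = rng.uniform(0.0, 1.0, size=X.shape[1])
        for _ in range(epochs):
            for x, target in zip(X, t):
                e = forward(w, x) - target
                w = np.clip(w - lr * e * side_grad(w, x), 0.0, 1.0)
        return w

    # Toy usage: recover the weights of a fixed max-min target function.
    X = np.random.default_rng(1).uniform(0.0, 1.0, size=(200, 4))
    w_true = np.array([0.9, 0.2, 0.6, 0.4])
    t = np.array([forward(w_true, x) for x in X])
    print("learned weights:", np.round(train(X, t), 2))

At a tie (w_j = x_j), the update above commits to one side-derivative; which side is chosen is a design decision, and the paper's convergence analysis of side-derivatives is what justifies such a choice mathematically.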