Abstract. We consider the problem of efficient approximate learning by multilayered feedforward circuits subject to two objective functions. First, we consider the objective of maximizing the ratio of correctly classified points to the size of the training set (e.g., see [3, 5]). We show that for single hidden layer threshold circuits with n hidden nodes and varying input dimension, approximating this ratio within a relative error c/n^3, for some positive constant c, is NP-hard even if the number of examples is limited with respect to n. For architectures with two hidden nodes (e.g., as in [6]), approximating the objective within some fixed factor is NP-hard even if any sigmoid-like activation function is used in the hidden layer and ε-separation of the output [19] is required, or if the semilinear activation function is substituted for the threshold function. Next, we consider the objective of minimizing the failure ratio [2]. We show that it is NP-hard to approximate the failure ratio within e...
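
For concreteness, the two objective functions can be sketched as follows (a sketch in our own notation; S, m, and opt(S) are not defined in the abstract, and the failure ratio is written here following the definition attributed to [2], namely the number of misclassifications of the hypothesis relative to the minimum achievable). Given a training set S = {(x_i, y_i) : i = 1, ..., m} and a circuit computing f,

\[
\mathrm{success}(f, S) \;=\; \frac{\bigl|\{\, i : f(x_i) = y_i \,\}\bigr|}{m},
\qquad
\mathrm{failure}(f, S) \;=\; \frac{\bigl|\{\, i : f(x_i) \neq y_i \,\}\bigr|}{\mathrm{opt}(S)},
\]

where opt(S) denotes the minimum number of misclassifications on S achievable by any circuit of the architecture under consideration. The first objective is to be maximized and the second minimized; the hardness results above concern approximating these quantities.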