The Bayesian framework for learning from positive, noise-free examples derived by Muggleton [12] is extended to learning functional hypotheses from positive examples whose outputs contain normally distributed noise. The method subsumes a type of distance-based learning as a special case. We also present an effective method of outlier identification, which may significantly improve the predictive accuracy of the final multi-clause hypothesis when it is constructed by a clause-by-clause covering algorithm, as in Progol or Aleph. Our method is implemented in Aleph and tested in two experiments: one concerns numeric functions, while the other treats non-numeric discrete data, where the normal distribution is taken as an approximation of the discrete distribution of the noise.
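Under the stated noise model, the extension can be read as replacing the noise-free likelihood in Muggleton's posterior with a normal density over the observed outputs. The following is a minimal sketch of that reading, not the paper's actual derivation; the symbols $f_H$, $\sigma$, and the i.i.d. assumption are illustrative choices of ours:
\[
P(H \mid E) \;\propto\; P(H)\,\prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{\bigl(y_i - f_H(x_i)\bigr)^2}{2\sigma^2}\right),
\]
where $E = \{(x_i, y_i)\}_{i=1}^{n}$ are the positive examples, $f_H$ is the function defined by hypothesis $H$, and $\sigma^2$ is the variance of the output noise. Under this reading, examples whose residuals $y_i - f_H(x_i)$ are improbably large under the Gaussian term are natural candidates for outlier identification.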