Learning Functions from Imperfect Positive Data

The Bayesian framework for learning from positive, noise-free examples derived by Muggleton [12] is extended to learning functional hypotheses from positive examples whose outputs contain normally distributed noise. The method subsumes a type of distance-based learning as a special case. We also present an effective method of outlier identification, which may significantly improve the predictive accuracy of the final multi-clause hypothesis when it is constructed by a clause-by-clause covering algorithm, as e.g. in Progol or Aleph. Our method is implemented in Aleph and tested in two experiments: one concerns numeric functions, while the other treats non-numeric discrete data, where the normal distribution is taken as an approximation of the discrete distribution of noise.
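The paper's Bayesian scoring is not reproduced here, but as a rough illustration of the noise model, the sketch below flags positive examples whose observed output deviates improbably from a candidate hypothesis under assumed N(0, sigma^2) output noise. The names flag_outliers, sigma and z_threshold are hypothetical, and the actual method evaluates hypotheses probabilistically rather than with a fixed cut-off.

def flag_outliers(examples, hypothesis, sigma, z_threshold=2.5):
    """Flag (input, output) pairs whose residual against the hypothesis
    exceeds z_threshold standard deviations of the assumed Gaussian
    output noise. Illustrative only; not the paper's Bayesian scoring."""
    outliers = []
    for x, y in examples:
        residual = y - hypothesis(x)  # output noise assumed ~ N(0, sigma^2)
        if abs(residual) / sigma > z_threshold:
            outliers.append((x, y))
    return outliers

# Toy run: target f(x) = 2x with small output noise and one gross outlier.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 20.0)]
print(flag_outliers(data, lambda x: 2 * x, sigma=0.2))  # [(4, 20.0)]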
Filip Železný
Type Conference
Year 2001
Where ILP (Springer)
Authors Filip Železný