In this paper we initiate an investigation of generalizations of the Probably Approximately Correct (PAC) learning model that attempt to significantly weaken the target function assumptions. The ultimate goal in this direction is informally termed agnostic learning, in which we make virtually no assumptions on the target function. The name derives from the fact that, as designers of learning algorithms, we give up the belief that Nature (as represented by the target function) has a simple or succinct explanation. We give a number of positive and negative results that provide an initial outline of the possibilities for agnostic learning. Our results include hardness results for the most obvious generalization of the PAC model to an agnostic setting, an efficient and general agnostic learning method based on dynamic programming, relationships between loss functions for agnostic learning, and an algorithm for a learning problem that involves hidden variables.
Michael J. Kearns, Robert E. Schapire, Linda Sellie