Online Rule Learning via Weighted Model Counting

Online multiplicative weight-update learning algorithms, such as Winnow, have proven remarkably effective at learning simple disjunctions with few relevant attributes. The aim of this paper is to extend the Winnow algorithm to more expressive concepts characterized by DNF formulas with few relevant rules. For such problems, the convergence of Winnow remains fast, since the number of mistakes grows only linearly with the number of attributes. However, the learner faces a major computational barrier: at each prediction, it must evaluate a weighted sum over an exponential number of rules. To circumvent this issue, we convert the prediction problem into a Weighted Model Counting problem. The resulting algorithm, SharpNow, is an exact simulation of Winnow equipped with backtracking, caching, and decomposition techniques. Experiments on static and drifting problems demonstrate the performance of the algorithm in terms of accuracy and speed.
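For reference, the multiplicative weight-update scheme the abstract builds on is the classic Winnow algorithm. Below is a minimal sketch of Winnow for learning monotone disjunctions over Boolean attributes; the parameter choices (promotion factor alpha = 2, threshold = n) follow the standard textbook presentation and are assumptions, not taken from the paper, and this is not the paper's SharpNow algorithm.

```python
# Minimal sketch of the classic Winnow algorithm for monotone disjunctions.
# Assumption: alpha = 2 and threshold = n are the conventional choices,
# not necessarily the exact setup used in the paper.

def winnow(examples, n, alpha=2.0):
    """Run Winnow online over n Boolean attributes.

    examples: iterable of (x, y) pairs, where x is a list of n bits (0/1)
              and y is the target label (0/1).
    Returns the final weight vector and the number of mistakes made.
    """
    w = [1.0] * n          # one weight per attribute, initialised to 1
    threshold = float(n)   # standard threshold choice
    mistakes = 0

    for x, y in examples:
        # Prediction: weighted vote of the attributes that are active in x.
        score = sum(w[i] for i in range(n) if x[i])
        y_hat = 1 if score >= threshold else 0

        if y_hat != y:
            mistakes += 1
            if y == 1:
                # Promotion: a positive example was missed,
                # so boost the weights of its active attributes.
                for i in range(n):
                    if x[i]:
                        w[i] *= alpha
            else:
                # Demotion: a false positive,
                # so shrink the weights of its active attributes.
                for i in range(n):
                    if x[i]:
                        w[i] /= alpha

    return w, mistakes
```

The computational barrier described in the abstract arises when this same update is applied with one "attribute" per candidate rule (conjunction): the weighted vote then ranges over exponentially many terms, which is what the paper's reformulation as Weighted Model Counting is designed to avoid.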
Frédéric Koriche
Type: Conference
Year: 2008
Where: ECAI
Authors: Frédéric Koriche