POPL 2016, ACM

Learning invariants using decision trees and implication counterexamples

Inductive invariants can be robustly synthesized using a learning model in which the teacher is a program verifier that instructs the learner through concrete program configurations, classified as positive, negative, or implication counterexamples. We propose the first learning algorithms in this model with implication counterexamples that are based on scalable machine-learning techniques. In particular, we extend decision-tree learning algorithms, developing new scalable and heuristic ways to construct small decision trees using statistical measures that account for implication counterexamples. We implement the learners and an appropriate teacher, and show that they are scalable, efficient, and convergent in synthesizing adequate inductive invariants on a suite of more than 50 programs, and superior in performance and convergence to existing constraint-solver-based learners.
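To make the core idea concrete, the following is a minimal Python sketch of one way a decision-tree split measure can account for implication counterexamples: ordinary information gain is computed over the labeled (positive/negative) points, and splits that separate the two endpoints of an implication pair are penalized, since such pairs cannot yet be classified consistently. This is an illustrative approximation under stated assumptions, not the authors' exact heuristic; the names ice_gain and entropy, and the 0.1 penalty weight, are hypothetical.

```python
import math

def entropy(n_pos, n_neg):
    """Shannon entropy of a two-class count; 0.0 for an empty or pure sample."""
    total = n_pos + n_neg
    if total == 0 or n_pos == 0 or n_neg == 0:
        return 0.0
    p = n_pos / total
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def ice_gain(points, labels, implications, attr, threshold):
    """Gain of the split points[i][attr] <= threshold on an ICE sample.

    points       -- list of numeric feature tuples (program configurations)
    labels       -- dict: index -> True (positive) / False (negative);
                    implication endpoints may be absent, i.e. unlabeled
    implications -- list of index pairs (p, q): if p is in the invariant,
                    q must be too
    Gain is computed on labeled points only, minus a penalty (hypothetical
    weight 0.1) per implication pair the split separates.
    """
    left = {i for i, x in enumerate(points) if x[attr] <= threshold}

    def counts(in_left):
        pos = sum(1 for i, lab in labels.items() if lab and (i in left) == in_left)
        neg = sum(1 for i, lab in labels.items() if not lab and (i in left) == in_left)
        return pos, neg

    lp, ln = counts(True)
    rp, rn = counts(False)
    n = lp + ln + rp + rn
    if n == 0:
        return 0.0
    base = entropy(lp + rp, ln + rn)
    split = ((lp + ln) / n) * entropy(lp, ln) + ((rp + rn) / n) * entropy(rp, rn)
    # Penalize splits that put the endpoints of an implication on different sides.
    cut = sum(1 for (p, q) in implications if (p in left) != (q in left))
    return base - split - 0.1 * cut

# Tiny usage example: points are (x, y) program configurations; index 2 is an
# unlabeled implication endpoint, as implication counterexamples typically are.
points = [(0, 0), (1, 2), (5, 1), (6, 3)]
labels = {0: True, 1: True, 3: False}
implications = [(2, 3)]
print(ice_gain(points, labels, implications, attr=0, threshold=3))
```

A learner in this style would evaluate ice_gain over candidate (attribute, threshold) pairs at each node and recurse on the best split, exactly as in standard decision-tree induction, with the penalty term steering the tree toward hypotheses consistent with the implications.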
Added 09 Apr 2016
Updated 09 Apr 2016
Type Conference
Year 2016
Where POPL
Authors Pranav Garg 0001, Daniel Neider, P. Madhusudan, Dan Roth