A recurring question in the design of intelligent agents is how to assign degrees of belief, or subjective probabilities, to events in a relational environment. In the standard knowledge representation approach, these probabilities are evaluated against a knowledge base, such as a logic program or a Bayesian network. However, even for very restricted representation languages, the problem of evaluating probabilities from a knowledge base is computationally prohibitive. By contrast, this study adopts the learning to reason (L2R) framework, which aims to elicit degrees of belief inductively. The agent is viewed as an anytime reasoner that iteratively improves its performance in light of the knowledge induced from its mistakes. By coupling exponentiated gradient strategies from online learning with weighted model counting techniques from reasoning, the L2R framework is shown to provide efficient solutions to relational probabilistic reasoning problems that are pr...
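As a minimal sketch of the exponentiated gradient strategy invoked above, assuming the reasoner maintains a weight vector over candidate models and receives a per-model loss at each round (all names and parameters here are illustrative, not the paper's implementation):

```python
import numpy as np

def exponentiated_gradient_step(weights, losses, eta=0.1):
    """One exponentiated gradient (multiplicative-weights) update:
    scale each weight by exp(-eta * loss) and renormalize so the
    weights stay on the probability simplex."""
    updated = weights * np.exp(-eta * losses)
    return updated / updated.sum()

# Hypothetical usage: uniform prior over four candidate models,
# with a per-model loss observed after one reasoning mistake.
w = np.full(4, 0.25)
losses = np.array([0.9, 0.1, 0.4, 0.6])
w = exponentiated_gradient_step(w, losses)
print(w)  # probability mass shifts toward the low-loss models
```

The multiplicative form of the update is what makes the reasoner anytime in spirit: each mistake reweights the candidate models without recomputing anything from scratch.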