Loop Calculus, introduced by Chertkov and Chernyak, is a new technique for incrementally improving the approximations computed by Loopy Belief Propagation (LBP), to the point of eventually making them exact. In this extended abstract, we give a brief overview of the technique and show its relevance to the AI community. We consider the problem of Boolean Satisfiability (SAT) and use LBP with Loop Calculus corrections to perform probabilistic inference on SAT instances. In this preliminary work, we focus on identifying the main issues that arise when applying Loop Calculus, and we include initial empirical results in the SAT domain.
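For orientation, the correction underlying Loop Calculus can be sketched in its standard loop-series form (the notation below, with partition function Z, Bethe/LBP estimate Z_{BP}, and loop terms r_C, follows the usual convention and is not fixed by this abstract):

% Loop-series sketch, assuming the standard Chertkov--Chernyak setting:
% the exact partition function Z equals the LBP (Bethe) estimate Z_{BP}
% times a finite series over generalized loops C of the factor graph.
\begin{equation}
  Z \;=\; Z_{BP}\,\Bigl(1 + \sum_{C \in \mathcal{C}} r_C\Bigr),
\end{equation}
% where \mathcal{C} is the set of generalized loops (subgraphs with no
% degree-one vertices) and each term r_C is computed from the converged
% LBP beliefs; truncating the series gives the incremental corrections
% referred to above.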