While Bayesian networks (BNs) can achieve accurate predictions even with erroneous or incomplete evidence, explaining their inferences remains a challenge. Existing approaches fall short because they do not exploit variable interactions and cannot account for compensation during inference. This paper proposes the Explaining BN Inferences (EBI) procedure for explaining how variables interact to reach conclusions. EBI explains the value of a target node in terms of the influential nodes in the target's Markov blanket under specific contexts, where the blanket nodes comprise the target's parents, children, and the children's other parents. Working back from the target node, EBI shows the derivation of each intermediate variable and finally explains how missing and erroneous evidence values are compensated. We validated EBI on a variety of problem domains, including mushroom classification, water purification, and web page recommendation. The experiments show that EBI generates...
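As a concrete illustration of the Markov blanket that EBI draws its explaining nodes from, the minimal sketch below collects a target's parents, children, and the children's other parents from a DAG stored as parent sets. It is not the authors' implementation; the function, DAG encoding, and node names (loosely echoing the mushroom domain) are hypothetical and used only to make the definition explicit.

```python
# Minimal sketch, assuming the DAG is given as parent sets:
# parents[node] = set of that node's parents.

def markov_blanket(parents, target):
    """Return the Markov blanket of `target`: its parents, its children,
    and the children's other parents (co-parents)."""
    blanket = set(parents.get(target, set()))              # target's parents
    children = {n for n, ps in parents.items() if target in ps}
    blanket |= children                                     # target's children
    for child in children:
        blanket |= parents[child] - {target}                # co-parents
    return blanket

# Hypothetical example DAG: edibility depends on odor, spore colour, and bruising.
parents = {
    "odor": set(),
    "spore_colour": set(),
    "bruising": set(),
    "edibility": {"odor", "spore_colour", "bruising"},
}
print(markov_blanket(parents, "odor"))
# {'edibility', 'spore_colour', 'bruising'}
```

For a root node such as "odor", the blanket consists of its child and that child's other parents, which is why explanations of a target can involve nodes that are not directly connected to it.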