Combining probability and first-order logic has been the subject of intensive research over the last ten years. The best-known formalisms combining probability with some subset of first-order logic are probabilistic relational models (PRMs), Bayesian logic programs (BLPs), and Markov logic networks (MLNs). Of these three, MLNs are currently the most actively researched. While the subset of first-order logic used by Markov logic networks is more expressive than that of the other two formalisms, their probabilistic semantics is given by weights assigned to formulas, which limits the comprehensibility of MLNs. Based on a knowledge representation formalism developed for propositional probabilistic models, we propose an alternative way to specify Markov logic networks, which allows probabilities to be specified directly for the formulas of an MLN. This results in better comprehensibility, and might open the way for using background knowledge when learning MLNs or even for the use of ...
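The weight-based semantics alluded to above can be illustrated with a minimal sketch (not from the paper; the domain, formula, and weight are assumptions for illustration). In the standard MLN semantics, a world's probability is proportional to exp of the weighted count of true formula groundings, so a weight only shifts probability mass toward satisfying worlds rather than stating a probability directly:

```python
import math
from itertools import product

# Toy MLN over two ground atoms, Smokes and Cancer (hypothetical domain),
# with one weighted formula: Smokes => Cancer, weight 1.5 (an assumption).
# Standard MLN semantics: P(world) = exp(sum_i w_i * n_i(world)) / Z,
# where n_i counts true groundings of formula i in the world.
atoms = ["Smokes", "Cancer"]
w = 1.5

def n_true(world):
    # Number of true groundings of (Smokes => Cancer); here 0 or 1.
    return 1 if (not world["Smokes"]) or world["Cancer"] else 0

worlds = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=2)]
scores = [math.exp(w * n_true(wld)) for wld in worlds]
Z = sum(scores)  # partition function
probs = [s / Z for s in scores]

for wld, p in zip(worlds, probs):
    print(wld, round(p, 3))
```

Note that the probability of any particular world depends on the weights of all formulas and on the partition function Z, which is why a weight cannot be read off as "the probability of this formula" — the gap in comprehensibility the abstract points to.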