Many computer vision problems can be formulated in
a Bayesian framework with Markov Random Field (MRF)
or Conditional Random Field (CRF) priors. Usually, the
model assumes that full Maximum A Posteriori (MAP) estimation
will be performed for inference, which can be very
slow in practice. In this paper, we argue that an
MRF/CRF model can be trained to perform very well
with a suboptimal inference algorithm. The
model is trained together with a fast inference algorithm
by optimizing a loss function on a training set
containing pairs of input images and desired outputs. A
validation set can be used in this approach to estimate the
generalization performance of the trained system. We apply
the proposed method to an image denoising application,
training a Fields of Experts MRF together with a 1-4 iteration
gradient descent inference algorithm. Experimental
validation on unseen data shows that the proposed training
approach obtains...
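
To make the idea concrete, the following is a minimal, hypothetical sketch, not the paper's actual Fields of Experts implementation: a denoiser that runs a fixed, small number of gradient descent steps on a simple MRF-style energy, with its scalar parameters tuned to minimize the reconstruction loss produced by that same truncated inference. The quadratic potentials, the fixed Laplacian "expert" filter, the finite-difference tuning, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def denoise(y, filters, lam, step, n_iters=4):
    """Run n_iters (1-4) gradient descent steps on the toy energy
    E(x) = lam/2 * ||x - y||^2 + sum_f 0.5 * ||f * x||^2, starting from y."""
    x = y.copy()
    for _ in range(n_iters):
        grad = lam * (x - y)
        for f in filters:
            resp = convolve(x, f, mode='reflect')
            # gradient of 0.5*||f*x||^2 w.r.t. x: convolve the response with the flipped filter
            grad += convolve(resp, f[::-1, ::-1], mode='reflect')
        x -= step * grad
    return x

def training_loss(params, pairs, filters, n_iters=4):
    """Mean squared error of the truncated inference on (noisy, clean) pairs."""
    lam, step = params
    return np.mean([np.mean((denoise(y, filters, lam, step, n_iters) - t) ** 2)
                    for y, t in pairs])

# Toy training loop: tune (lam, step) by finite differences, so the model is
# optimized for the few-iteration inference it will actually use at test time.
rng = np.random.default_rng(0)
clean = [rng.random((32, 32)) for _ in range(4)]
pairs = [(c + 0.1 * rng.standard_normal(c.shape), c) for c in clean]
filters = [np.array([[0., -1., 0.], [-1., 4., -1.], [0., -1., 0.]])]  # fixed Laplacian "expert"

params = np.array([1.0, 0.1])  # (lam, step)
for _ in range(50):
    for i in range(len(params)):
        eps = 1e-3
        p_hi, p_lo = params.copy(), params.copy()
        p_hi[i] += eps
        p_lo[i] -= eps
        g = (training_loss(p_hi, pairs, filters) - training_loss(p_lo, pairs, filters)) / (2 * eps)
        params[i] -= 0.05 * g
print("trained (lam, step):", params, "loss:", training_loss(params, pairs, filters))
```

In the paper's setting the learned parameters would be the Fields of Experts filters and potentials themselves, and the same principle applies: the loss is measured on the output of the 1-4 iteration inference, and a held-out validation set estimates how the trained system generalizes.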