We have recently proposed an EM-style algorithm to optimize log-linear models with hidden variables. In this paper, we use this algorithm to optimize a hidden conditional random field, i.e., a conditional random field with hidden variables. In the examples considered, the hidden variables are the alignments, as in hidden Markov models. Here, EM-style algorithms are iterative optimization algorithms that are guaranteed to improve the training criterion in each iteration, without the need for tuning step sizes, sophisticated update schemes, or numerical line search of hardly predictable cost. This is a rather strong property that conventional gradient-based optimization algorithms do not have. We present experimental results for a grapheme-to-phoneme conversion task and compare the convergence behavior of the EM-style algorithm with that of L-BFGS and Rprop.
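The monotone-improvement property of EM-style algorithms can be illustrated on a toy problem. The sketch below is not the paper's HCRF algorithm; it runs classical EM on a one-dimensional two-component Gaussian mixture with unit variances (the component assignments playing the role of hidden variables, analogous to the alignments above) and checks that the log-likelihood never decreases from one iteration to the next, with no step size to tune. All names and the data-generation setup are illustrative assumptions.

```python
import math
import random

def normal_pdf(x, mu):
    # Unit-variance Gaussian density.
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def log_likelihood(data, w, mu1, mu2):
    # Training criterion: log-likelihood of the mixture.
    return sum(math.log(w * normal_pdf(x, mu1) + (1 - w) * normal_pdf(x, mu2))
               for x in data)

def em_step(data, w, mu1, mu2):
    # E-step: posterior probability of component 1 for each point
    # (the "hidden variable" distribution).
    gammas = []
    for x in data:
        p1 = w * normal_pdf(x, mu1)
        p2 = (1 - w) * normal_pdf(x, mu2)
        gammas.append(p1 / (p1 + p2))
    # M-step: closed-form parameter updates -- no step size needed.
    n1 = sum(gammas)
    n2 = len(data) - n1
    w = n1 / len(data)
    mu1 = sum(g * x for g, x in zip(gammas, data)) / n1
    mu2 = sum((1 - g) * x for g, x in zip(gammas, data)) / n2
    return w, mu1, mu2

random.seed(0)
data = ([random.gauss(-2.0, 1.0) for _ in range(100)]
        + [random.gauss(2.0, 1.0) for _ in range(100)])

w, mu1, mu2 = 0.5, -1.0, 1.0
lls = [log_likelihood(data, w, mu1, mu2)]
for _ in range(20):
    w, mu1, mu2 = em_step(data, w, mu1, mu2)
    lls.append(log_likelihood(data, w, mu1, mu2))

# Guaranteed improvement: each iteration's criterion is at least
# as good as the previous one (up to floating-point tolerance).
assert all(b >= a - 1e-9 for a, b in zip(lls, lls[1:]))
print(f"initial LL = {lls[0]:.2f}, final LL = {lls[-1]:.2f}")
```

A gradient-based optimizer applied to the same criterion would need a step-size rule or a line search to obtain a comparable guarantee; here the guarantee is built into the update itself.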