We present a quantitative evaluation of a well-known word alignment algorithm, along with an analysis of its frequent errors in terms of the model's underlying assumptions. Despite error rates ranging from 22% to 32%, we argue that this technology can be put to good use in certain automated aids for human translators. We support this contention by pointing to successful applications, and we outline ways in which text alignments below the sentence level would allow us to improve the performance of other translation support tools.