Handwritten text is generally captured through one of two modalities: off-line (scanned images) and on-line (pen trajectories). Handwritten text recognition (HTR) approaches can take advantage of both modalities when they are available. This is the case, for instance, in computer-assisted transcription of text images, where on-line text can be used to interactively correct errors made by a main off-line HTR system. We present baseline results on the biMod-IAM-PRHLT corpus, which was recently compiled for experimentation with techniques aimed at this multi-modal HTR problem and is being used in one of the official ICPR-2010 contests.