Peter Lorenzen, Brad Davis, Sarang C. Joshi

This paper presents a Bayesian framework for generating inverse-consistent inter-subject large deformation transformations between two multi-modal image sets of the brain. In this framework, the transformations are estimated using the maximal amount of information about the underlying neuroanatomy present in each of the different modalities. This modality-independent registration is achieved within the Bayesian paradigm by jointly estimating the posterior densities associated with the multi-modal image sets and the high-dimensional registration transformation mapping the two subjects. To make maximal use of the information present in all the modalities, the registration is estimated by minimizing the Kullback-Leibler divergence between the estimated posteriors. Registration results for two synthetic image sets of human neuroanatomy are presented.

Key Words: Multi-modal image registration, inverse-consistent registration, information theory, medical image analysis, computational anatomy
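For intuition, the objective sketched above can be illustrated as a symmetric Kullback-Leibler divergence between the two subjects' estimated posterior class densities, coupled through the forward transformation and its inverse. The notation below ($p_1$, $p_2$, the tissue class label $c$, the image domain $\Omega$, and the transformation $h$ with inverse $h^{-1}$) is introduced here purely for illustration and is not taken from the paper:

% Illustrative sketch, not the paper's exact formulation: a symmetric KL
% cost between posterior class densities p_1 and p_2 of the two subjects,
% where h maps subject 1's domain onto subject 2's and h^{-1} is its inverse.
\[
C(h) = \int_{\Omega} \sum_{c} p_1(c \mid x)
       \log \frac{p_1(c \mid x)}{p_2(c \mid h(x))} \, dx
     + \int_{\Omega} \sum_{c} p_2(c \mid x)
       \log \frac{p_2(c \mid x)}{p_1(c \mid h^{-1}(x))} \, dx .
\]

Minimizing a cost of this symmetric form over $h$ penalizes disagreement between the posteriors under both the forward and inverse mappings, which is one way to read the inverse-consistency property stated in the abstract.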