We present Med-LIFE (Medical application of Learning, Image Fusion, and Exploration), a system currently under development for medical image analysis. The pipeline comprises three processing stages that enable multi-modality image fusion, learning-based segmentation, and exploration of the results. The fusion stage combines multi-modal medical images into a single color image while preserving the information present in the original single-modality images. The learning stage lets experts define the pattern recognition task by interactively training the system to recognize objects of interest. The exploration stage embeds the results of the previous stages within a 3D model of the patient's skull to provide spatial context, using gesture recognition as a natural means of interaction.
Joshua R. New, Erion Hasanbelliu, Mario Aguilar
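To make the fusion idea concrete, the following is a minimal sketch of channel-based fusion of two co-registered modality images. This is a hypothetical illustration, not the Med-LIFE fusion algorithm: it simply normalizes each modality and maps them onto separate color channels so that information from both sources remains visible in one color image. The function name `fuse_to_rgb` and the channel assignments are assumptions for illustration.

```python
import numpy as np

def fuse_to_rgb(mri, ct):
    """Fuse two co-registered modality images into one RGB image.

    Illustrative only (not the Med-LIFE method): MRI intensity drives
    the red channel, CT drives green, and their mean drives blue,
    after per-image min-max normalization to [0, 1].
    """
    def norm(img):
        img = img.astype(np.float64)
        rng = img.max() - img.min()
        # Avoid division by zero for constant images.
        return (img - img.min()) / rng if rng > 0 else np.zeros_like(img)

    r, g = norm(mri), norm(ct)
    b = (r + g) / 2.0
    return np.stack([r, g, b], axis=-1)  # shape: H x W x 3, values in [0, 1]

# Synthetic 4x4 stand-ins for co-registered modality images.
mri = np.arange(16, dtype=float).reshape(4, 4)
ct = np.eye(4)
fused = fuse_to_rgb(mri, ct)
print(fused.shape)  # (4, 4, 3)
```

In practice, fusion schemes of this family must first register the modalities to a common coordinate frame; the channel mapping shown here is the simplest way to keep both sources distinguishable in a single color image.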