Inspired by recent improvements in domain adaptation and session variability compensation techniques used for speech and speaker processing, we study their effect on emotion prediction. More specifically, we investigate the use of publicly available out-of-domain data with emotion annotations to improve the performance of an in-domain model trained on 911 emergency-hotline calls. Following the emotion detection literature, we use prosodic features (pitch, energy, and speaking rate) as inputs to a discriminative classifier, and we perform segment-level n-fold cross-validation emotion prediction experiments. Our results indicate that exploiting out-of-domain data yields a significant improvement in emotion prediction performance.
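The sketch below illustrates the kind of evaluation protocol described above: segment-level n-fold cross-validation on in-domain data, with each fold's training set optionally augmented by labelled out-of-domain segments. It is a minimal sketch under stated assumptions, not the paper's exact setup: the features are synthetic stand-ins for real prosodic statistics, the classifier is an assumed scikit-learn SVM, and the corpus sizes and fold count are hypothetical.

```python
"""Minimal sketch of segment-level n-fold cross-validation with optional
out-of-domain training-data augmentation. All data are synthetic stand-ins
for prosodic features (pitch, energy, speaking-rate statistics)."""
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Hypothetical corpus sizes: in-domain (e.g., emergency-call segments)
# and out-of-domain (publicly available emotion-annotated segments).
n_in, n_out, n_feat = 300, 1000, 6
X_in = rng.normal(size=(n_in, n_feat))
y_in = rng.integers(0, 2, n_in)
X_out = rng.normal(size=(n_out, n_feat))
y_out = rng.integers(0, 2, n_out)


def cross_validate(use_out_of_domain, n_folds=5):
    """n-fold CV on in-domain segments; optionally pool labelled
    out-of-domain segments into each fold's training set."""
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(X_in, y_in):
        X_tr, y_tr = X_in[train_idx], y_in[train_idx]
        if use_out_of_domain:
            X_tr = np.vstack([X_tr, X_out])
            y_tr = np.concatenate([y_tr, y_out])
        # Discriminative classifier (assumed RBF-kernel SVM) on
        # standardized prosodic feature vectors.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X_tr, y_tr)
        y_pred = clf.predict(X_in[test_idx])
        # Unweighted average recall, a common emotion-recognition metric.
        scores.append(recall_score(y_in[test_idx], y_pred, average="macro"))
    return float(np.mean(scores))


print("in-domain only      :", cross_validate(False))
print("+ out-of-domain data:", cross_validate(True))
```

Because the features here are random, the two printed scores will not differ meaningfully; the point of the sketch is only the evaluation structure, where out-of-domain data are added to training folds but never to the held-out in-domain test segments.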