We propose a simple two-level hierarchical probability model for unsupervised word segmentation. Treating words as strings of morphemes/phonemes, which are in turn strings of characters/phones, we first use EM to identify the salient morphemes/phonemes in a corpus, and then apply a second level of EM to identify words given the lower-level morpheme/phoneme segmentation. To improve on this basic method, we employ a mutual information criterion that eliminates long word agglomerations, reduces the size of the inferred lexicon, and moves EM out of poor local maxima. Experiments on the Brown corpus show that our method recovers hidden word boundaries accurately with less training data than current MDL-based approaches, even though it is trained only on raw, unannotated text.
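The abstract does not state the exact form of the mutual information criterion; a minimal sketch of one common variant, pointwise mutual information (PMI) between adjacent candidate units estimated from corpus counts, might look like the following. The function names (`pmi`, `keep_merged`) and the zero threshold are illustrative assumptions, not the paper's notation.

```python
from math import log

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information between adjacent units x and y.

    Probabilities are estimated from raw corpus counts:
    PMI(x, y) = log( p(xy) / (p(x) * p(y)) ).
    """
    p_xy = count_xy / total
    p_x = count_x / total
    p_y = count_y / total
    return log(p_xy / (p_x * p_y))

def keep_merged(count_xy, count_x, count_y, total, threshold=0.0):
    # Keep the concatenation xy in the lexicon only if its parts
    # co-occur more often than chance; otherwise treat it as an
    # agglomeration and split it, shrinking the inferred lexicon.
    return pmi(count_xy, count_x, count_y, total) > threshold

# The pair co-occurs far more often than chance, so it is kept merged;
# a pair that almost never co-occurs relative to its parts is split.
print(keep_merged(50, 60, 70, 1000))   # frequent pair -> kept
print(keep_merged(1, 500, 500, 1000))  # chance-level pair -> split
```

Under this criterion, low-PMI concatenations are removed from the lexicon between EM runs, which is one way such a filter can push EM out of a poor local maximum.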