We consider the problem of predicting the surface pronunciations of a word in conversational speech, using a model of pronunciation variation based on articulatory features. We build context-dependent decision trees for both phone-based and feature-based models, and compare their perplexities on conversational data from the Switchboard Transcription Project. We find that a fully-factored model, with a separate decision tree for each articulatory feature, does not perform well, but a feature-based model using a smaller number of "feature bundles" outperforms both the fully-factored model and a phone-based model. The articulatory feature-based decision trees are also much more robust to reductions in training data. We also analyze the usefulness of various context variables.
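To make the evaluation concrete, the following is a minimal sketch (not the paper's implementation) of how a fully-factored articulatory-feature model assigns probabilities and how perplexity is computed from them: each feature has its own context-conditioned distribution (here stubbed as lookup tables standing in for learned decision trees, with hypothetical feature names and context variables), the surface probability factors as a product over features, and perplexity is 2 to the negative mean log2 probability over held-out samples.

```python
# Illustrative sketch of a fully-factored feature model, assuming a surface
# realization's probability factors over articulatory features:
#   p(surface | context) = prod_f p_f(value_f | context)
# The tables below are hypothetical stand-ins for learned decision trees.
import math

voicing_tree = {
    ("canonical:+voice", "stressed"):   {"+voice": 0.95, "-voice": 0.05},
    ("canonical:+voice", "unstressed"): {"+voice": 0.80, "-voice": 0.20},
}
manner_tree = {
    ("canonical:stop", "stressed"):   {"stop": 0.90, "fricative": 0.10},
    ("canonical:stop", "unstressed"): {"stop": 0.60, "fricative": 0.40},
}

def factored_prob(context, surface):
    """Probability of a surface feature bundle under the factored model."""
    p_voicing = voicing_tree[(context["voicing"], context["stress"])]
    p_manner = manner_tree[(context["manner"], context["stress"])]
    return p_voicing[surface["voicing"]] * p_manner[surface["manner"]]

def perplexity(samples):
    """2 ** (-average log2 probability) over (context, surface) pairs."""
    total = sum(math.log2(factored_prob(c, s)) for c, s in samples)
    return 2.0 ** (-total / len(samples))

# Toy held-out data: an unstressed voiced stop realized non-canonically,
# and a stressed one realized canonically.
held_out = [
    ({"voicing": "canonical:+voice", "manner": "canonical:stop",
      "stress": "unstressed"},
     {"voicing": "+voice", "manner": "fricative"}),
    ({"voicing": "canonical:+voice", "manner": "canonical:stop",
      "stress": "stressed"},
     {"voicing": "stop" == "stop" and "+voice", "manner": "stop"}),
]
print(f"perplexity = {perplexity(held_out):.2f}")  # ~1.91 on this toy data
```

The factored form is what makes the fully-factored model attractive (few parameters per tree) but also what can hurt it: features are predicted independently given the context, so correlated feature changes are not modeled jointly, which is one motivation for grouping features into a smaller number of bundles.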