We address the problem of pronunciation variation in conversational speech with a context-dependent, articulatory feature-based model. The model extends previous work using dynamic Bayesian networks, which allow a state to be factored into multiple variables representing the articulatory features. We build context-dependent decision trees for the articulatory feature distributions, incorporate them into the dynamic Bayesian networks, and experiment with different sets of context variables. We evaluate our models on a lexical access task using a phonetically transcribed subset of the Switchboard corpus, and find that they outperform a context-dependent phonetic baseline.
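To illustrate the idea of factoring a phone's hidden state into separate articulatory feature variables, here is a minimal sketch. The feature names, values, and phone inventory are illustrative assumptions, not the model's actual inventory, and a simple per-feature agreement score stands in for the learned feature distributions in the dynamic Bayesian network:

```python
# Hypothetical toy inventory: canonical articulatory feature targets for
# two phones. In a factored DBN, each feature is its own variable stream.
PHONE_FEATURES = {
    "n": {"place": "alveolar", "velum": "open", "voicing": "voiced"},
    "d": {"place": "alveolar", "velum": "closed", "voicing": "voiced"},
}

def feature_match_score(observed, phone):
    """Score observed features against a phone's canonical targets.

    With independent feature streams, the joint score factors across
    features; here a 0/1 agreement per feature stands in for learned
    per-feature (context-dependent) distributions.
    """
    targets = PHONE_FEATURES[phone]
    return sum(observed.get(f) == v for f, v in targets.items()) / len(targets)

# A nasalized realization of /d/ (velum open) partly resembles /n/,
# which a factored model can capture one feature at a time:
observed = {"place": "alveolar", "velum": "open", "voicing": "voiced"}
```

The point of the factorization is exactly this kind of partial match: a surface pronunciation can deviate from the canonical form in one feature stream while the others stay on target, rather than being forced into a wholesale phone substitution.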