Maximum entropy models are a common modeling technique, but are prone to overfitting. We show that using an exponential distribution as a prior leads to a bounded form of absolute discounting: each feature's observed count is discounted by a constant, with the discount bounded. We show that this prior is better motivated by the data than previous techniques such as a Gaussian prior, and that it often produces lower error rates. Exponential priors also lead to a simpler learning algorithm and to easier-to-understand behavior. Furthermore, exponential priors help explain the success of some previous smoothing techniques, and suggest simple variations that work better.
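A minimal sketch of the discounting claim, in notation we introduce here (the features $f_i$, nonnegative weights $\lambda_i$, and per-feature prior rates $\alpha_i$ are our assumptions, not taken from the abstract): with the exponential prior $p(\lambda_i) = \alpha_i e^{-\alpha_i \lambda_i}$, $\lambda_i \ge 0$, the MAP objective over training pairs $(x_j, y_j)$ is

$$\max_{\lambda \ge 0} \; \sum_j \log P_\lambda(y_j \mid x_j) \;-\; \sum_i \alpha_i \lambda_i .$$

Setting the gradient for $\lambda_i$ to zero at any optimum with $\lambda_i > 0$ gives

$$\sum_j \sum_y P_\lambda(y \mid x_j)\, f_i(x_j, y) \;=\; \sum_j f_i(x_j, y_j) \;-\; \alpha_i ,$$

i.e. the model matches each observed feature count minus a constant discount $\alpha_i$; when that equality would force $\lambda_i < 0$, the constraint clamps $\lambda_i = 0$, which is what bounds the discount.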