Current models for the learning of feature detectors operate on two time scales: on a fast time scale, the internal neurons' activations adapt to the current stimulus; on a slow time scale, the weights adapt to the statistics of the set of stimuli. Here we explore the adaptation of a neuron's intrinsic excitability, termed intrinsic plasticity, which occurs on a separate time scale. Through intrinsic plasticity, a neuron maintains homeostasis by keeping its firing rate exponentially distributed in a dynamic environment. We exploit this in the context of a generative model to impose sparse coding. With natural image input, localized edge detectors emerge as models of V1 simple cells. An intermediate time scale for the intrinsic plasticity parameters allows us to model aftereffects. In the tilt aftereffect, after a viewer adapts to a grid of a certain orientation, grids of a nearby orientation are perceived as tilted away from the adapted orientation. Our results show that adapting the neurons' gain- but n...
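The homeostatic mechanism described above can be illustrated with a minimal sketch of a Triesch-style intrinsic plasticity gradient rule. This is an assumed, generic formulation, not necessarily the paper's exact model: a logistic neuron y = σ(a·x + b) adapts its gain a and bias b so that its output distribution approaches an exponential with target mean μ, which enforces sparse firing.

```python
import numpy as np

def ip_step(x, y, a, mu, eta):
    """One step of an intrinsic plasticity gradient rule (illustrative sketch).

    Moves the output distribution of y = sigmoid(a*x + b) toward an
    exponential distribution with mean mu by updating gain a and bias b.
    """
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + y * y / mu)  # bias update
    da = eta / a + db * x                                  # gain update
    return da, db

rng = np.random.default_rng(0)
eta, mu = 0.01, 0.1   # learning rate and target mean firing rate (assumed values)
a, b = 1.0, 0.0       # initial gain and bias

rates = []
for t in range(60_000):
    x = rng.normal()                          # synthetic input current
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))   # logistic activation
    da, db = ip_step(x, y, a, mu, eta)
    a, b = a + da, b + db
    if t >= 50_000:                           # measure after convergence
        rates.append(y)

mean_rate = float(np.mean(rates))
print(f"gain={a:.2f}, bias={b:.2f}, mean rate={mean_rate:.3f}")
```

After adaptation, the neuron's mean firing rate settles near the homeostatic target μ regardless of the input statistics; running the two plasticity processes (synaptic and intrinsic) on separate time scales is what lets slow drift of a and b model perceptual aftereffects.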