After an initial peak, the number of synapses in the mammalian cerebral cortex decreases during later development and throughout adult life. However, if synapse counts are taken to reflect circuit complexity, this pruning is difficult to reconcile with the increasingly complex representations acquired at successive stages of development. Taking these two conflicting requirements as an architectural constraint, we show here that a simple topographic self-organization process can learn increasingly complex representations while some of its synapses are progressively pruned. By analyzing the learning-theoretic properties of this growth in complexity, the model indicates how pruning can be computationally advantageous. This suggests a novel interpretation of the interplay between biological and acquired patterns of neuronal activation in determining topographic organization in the cortex.
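The abstract does not specify the model, so the following is a purely illustrative sketch of what "topographic self-organization with progressive pruning" could look like, assuming a Kohonen-style self-organizing map in which the smallest-magnitude surviving weights are periodically masked out to mimic synapse loss. The function name, parameters, and pruning schedule are hypothetical, not the authors' method.

```python
import numpy as np

def som_with_pruning(data, n_units=10, epochs=20, lr=0.3, sigma=2.0,
                     prune_every=5, prune_frac=0.1, seed=0):
    """Train a 1-D Kohonen map, periodically zeroing the smallest-magnitude
    surviving weights (a crude stand-in for progressive synaptic pruning)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((n_units, dim))
    mask = np.ones_like(weights)          # 1 = synapse present, 0 = pruned
    coords = np.arange(n_units)           # positions on the map lattice
    for epoch in range(1, epochs + 1):
        for x in rng.permutation(data):
            # Winner: unit whose effective (masked) weights are closest to x.
            win = np.argmin(np.linalg.norm(weights * mask - x, axis=1))
            # Gaussian neighborhood around the winner enforces topography.
            h = np.exp(-((coords - win) ** 2) / (2 * sigma ** 2))
            # Standard Kohonen update, restricted to surviving synapses.
            weights += lr * h[:, None] * (x - weights) * mask
        if epoch % prune_every == 0:
            # Prune a fraction of the smallest-magnitude surviving weights.
            alive = np.flatnonzero(mask)
            k = max(1, int(prune_frac * alive.size))
            smallest = alive[np.argsort(np.abs(weights.ravel()[alive]))[:k]]
            mask.ravel()[smallest] = 0.0
    return weights * mask, mask
```

With this scheme, the map continues to refine its representation of the input distribution even as its connectivity becomes sparser, which is the qualitative behavior the abstract claims for its (unspecified) model.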