Because more output data must be created than is available from the input, magnification is an ill-posed problem. Traditional magnification relies on resampling an interpolation model at the appropriate rate; unfortunately, this simple solution is blind to the analog filter that was implicitly applied when the samples of the function to be magnified were acquired. Consistent resampling has been introduced to take this filter into account, but it turns out that this solution is still under-constrained. In this paper, we propose regularization as a way to devise a deterministic magnification method that fully satisfies the consistency constraints in the absence of noise and that, at the same time, produces an output that best fulfills a wide class of regularity criteria. Unlike many other methods, ours has been designed without ever leaving the continuous domain. We conduct experiments that demonstrate the benefit of our approach.
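To make the consistency requirement concrete, the following is a minimal one-dimensional formalization; the notation ($s$, $\varphi$, $f[k]$, $\mathrm{L}$) is illustrative and not necessarily that of the paper. Suppose the available samples were obtained by measuring an underlying continuous function $s$ through an analog acquisition filter $\varphi$,
\[
f[k] = (\varphi * s)(k), \qquad k \in \mathbb{Z}.
\]
A magnified reconstruction is consistent when re-measuring it with the same acquisition model at the original rate returns the measured samples exactly. Since many functions satisfy this under-constrained condition, a regularization functional can be used to select the most regular one among them, for instance
\[
\tilde{s} = \arg\min_{g} \, \lVert \mathrm{L}\, g \rVert_{L_2}^2
\quad \text{subject to} \quad
(\varphi * g)(k) = f[k] \;\; \forall k \in \mathbb{Z},
\]
where $\mathrm{L}$ is a regularization operator (e.g., a derivative) standing in for the wide class of regularity criteria mentioned above.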