Learning-based superresolution (SR) algorithms are popular SR techniques that use application-dependent priors to infer the missing details in low-resolution images (LRIs). However, their performance still deteriorates quickly when the magnification factor is moderately large. This leads us to an important problem: "Do limits of learning-based SR algorithms exist?" In this paper, we attempt to shed some light on this problem when the SR algorithms are designed for general natural images (GNIs). We first define an expected risk for SR algorithms based on the root mean squared error (RMSE) between the superresolved images and the ground truth images. Then, utilizing the statistics of GNIs, we derive a closed-form estimate of the lower bound of this expected risk. The lower bound can be computed by sampling real images. By computing the curve of the lower bound with respect to the magnification factor, we can estimate the limits of learning-based SR algorithms, namely the magnification factors at which the lower bound of the expected risk exceeds a prescribed threshold.
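As a concrete reading of the risk described above, the expected risk of an SR algorithm $A$ at magnification factor $m$ can be written as follows; the notation here is chosen for illustration and may differ in detail from the paper's own definition:

$$ R(A, m) \;=\; \mathbb{E}_{I_H}\!\left[\,\sqrt{\tfrac{1}{|I_H|}\,\bigl\|A(I_L) - I_H\bigr\|_2^2}\,\right], $$

where $I_H$ is a ground truth high-resolution image drawn from the distribution of GNIs, $I_L$ is the corresponding LRI obtained by downsampling $I_H$ by factor $m$, and $|I_H|$ is the number of pixels in $I_H$.

Since the closed form of the lower bound is not reproduced here, the following is only a minimal sketch of the sampling side of the procedure: estimating the empirical RMSE risk of a given SR algorithm over a sample of real images, which can then be swept over magnification factors to trace a risk curve. The function names, the grayscale conversion, and the bicubic baseline are illustrative assumptions, not the paper's method.

```python
# Illustrative Monte Carlo estimate of the expected RMSE risk of an SR
# algorithm at a fixed magnification factor. All names (empirical_risk,
# bicubic_sr, the grayscale pipeline) are hypothetical stand-ins.
import numpy as np
from PIL import Image


def rmse(a: np.ndarray, b: np.ndarray) -> float:
    """Root mean squared error between two images of equal shape."""
    diff = a.astype(np.float64) - b.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))


def empirical_risk(image_paths, magnification: int, sr_algorithm) -> float:
    """Average RMSE between superresolved images and ground truth over a
    sample of natural images; approximates the expected risk at one
    magnification factor."""
    errors = []
    for path in image_paths:
        hi = Image.open(path).convert("L")
        # Crop so the dimensions divide evenly by the magnification factor.
        w = (hi.width // magnification) * magnification
        h = (hi.height // magnification) * magnification
        hi = hi.crop((0, 0, w, h))
        # Simulate the low-resolution input by downsampling the ground truth.
        lo = hi.resize((w // magnification, h // magnification), Image.BICUBIC)
        sr = sr_algorithm(lo, magnification)
        errors.append(rmse(np.asarray(sr), np.asarray(hi)))
    return float(np.mean(errors))


def bicubic_sr(lo: Image.Image, magnification: int) -> Image.Image:
    """Trivial baseline 'SR algorithm': bicubic upsampling."""
    return lo.resize((lo.width * magnification, lo.height * magnification),
                     Image.BICUBIC)
```

Sweeping `magnification` over, say, 2 through 8 and plotting `empirical_risk` against it yields an empirical analogue of the lower-bound curve described above.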