Recent algorithms for exemplar-based single-image super-resolution have shown impressive results, mainly due to well-chosen priors and, more recently, to more accurate blur kernels. Some methods exploit clustering of patches, local gradients, or contextual information. However, to the best of our knowledge, no prior work has studied the benefits of using semantic information at the image level. By semantic information we mean image segments with corresponding categorical labels. In this paper we investigate the use of semantic information in conjunction with A+, a state-of-the-art super-resolution method. We conduct experiments on large standard datasets of natural images with semantic annotations, and discuss the benefits and drawbacks of using semantic information. Experimental results show that our semantic-driven super-resolution significantly improves over the original A+ setting.