Sparse representation has found applications in numerous domains, and recent developments have focused on convex relaxations of the ℓ0-norm minimization underlying sparse coding (i.e., ℓ1-norm minimization). Nevertheless, the time and space complexity of these algorithms remains high for large-scale problems. Since the signals in most problems can be modeled by a small set of prototypes, we propose an algorithm that exploits this property and show that the ℓ1-norm minimization problem can be reduced to a much smaller one, yielding significant speed-ups with far lower memory requirements. Experimental results demonstrate that our algorithm achieves double-digit speed-ups over state-of-the-art algorithms while requiring substantially less memory.
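To make the core idea concrete, the following is a minimal sketch, not the paper's exact algorithm: it assumes the reduction amounts to summarizing the signal set by a small number of prototypes (here via k-means) and then solving the ℓ1-regularized sparse-coding problem against that much smaller prototype dictionary. All names (e.g., k_prototypes) and the choice of scikit-learn's Lasso solver are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch (assumed reduction, not the paper's algorithm):
# 1) compress many signals to a few prototypes, 2) sparse-code against them.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic data: 10,000 signals in R^64 that lie near 50 underlying prototypes.
n_signals, dim, k_prototypes = 10_000, 64, 50
centers = rng.normal(size=(k_prototypes, dim))
signals = (centers[rng.integers(k_prototypes, size=n_signals)]
           + 0.01 * rng.normal(size=(n_signals, dim)))

# Step 1: reduce the signal set to a small set of prototypes.
prototypes = KMeans(n_clusters=k_prototypes, n_init=10,
                    random_state=0).fit(signals).cluster_centers_

# Step 2: solve the (much smaller) l1-regularized sparse-coding problem
#   min_x (1 / (2 * dim)) * ||b - D x||_2^2 + alpha * ||x||_1
# where D (dim x k) stacks the prototypes as columns; this is scikit-learn's
# Lasso objective with the query b as the target.
query = signals[0]
lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=10_000)
lasso.fit(prototypes.T, query)   # columns of prototypes.T are dictionary atoms
code = lasso.coef_               # sparse coefficients over k prototypes

print(f"nonzero coefficients: {np.count_nonzero(code)} of {k_prototypes}")
```

Because the ℓ1 problem is now solved over k prototypes instead of the full signal set, both the solver's running time and its memory footprint scale with k rather than with the number of signals, which is the source of the claimed savings.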