We consider a framework for semi-supervised learning using spectral decomposition-based unsupervised kernel design. We relate this approach to previously proposed semi-supervised learning methods on graphs, and we examine various theoretical properties of such methods. In particular, we present learning bounds and derive the optimal kernel representation by minimizing the bound. Based on this theoretical analysis, we demonstrate why spectral kernel design-based methods can improve predictive performance. Empirical examples are included to illustrate the main consequences of our analysis.
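To make the setting concrete, the sketch below illustrates one plausible instance of spectral decomposition-based kernel design: a base Gram matrix over labeled and unlabeled points is eigendecomposed, its spectrum is reshaped, and the resulting kernel is used for supervised learning on the labeled subset. This is not taken from the paper; the function names, the eigenvalue transform, and the toy data are all illustrative assumptions.

```python
import numpy as np


def spectral_kernel(K, transform, top=None):
    """Rebuild a kernel by reshaping the eigenvalues of a base Gram matrix.

    K         : (n, n) symmetric PSD Gram matrix over labeled + unlabeled points
    transform : function applied to each eigenvalue (e.g. lambda s: s ** 2)
    top       : optionally keep only the `top` largest eigenvalues (assumed choice)
    """
    # Eigendecompose the symmetric base kernel.
    eigvals, eigvecs = np.linalg.eigh(K)
    order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    if top is not None:
        eigvals, eigvecs = eigvals[:top], eigvecs[:, :top]
    # New kernel: same eigenvectors, reshaped spectrum.
    return (eigvecs * transform(eigvals)) @ eigvecs.T


def kernel_ridge_predict(K_new, labeled_idx, y_labeled, reg=1e-2):
    """Kernel ridge regression using only the labeled block of the designed kernel."""
    K_ll = K_new[np.ix_(labeled_idx, labeled_idx)]
    alpha = np.linalg.solve(K_ll + reg * np.eye(len(labeled_idx)), y_labeled)
    # Predict on all points (labeled + unlabeled) via the cross-kernel block.
    return K_new[:, labeled_idx] @ alpha


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 2))
    # Base kernel: an RBF Gram matrix computed on all (labeled + unlabeled) points.
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists)
    y = np.sign(X[:, 0])                       # toy labels
    labeled_idx = np.arange(10)                # only the first 10 points carry labels

    # Reshape the spectrum to emphasize the leading eigenvectors (one possible design).
    K_new = spectral_kernel(K, transform=lambda s: s ** 2, top=20)
    preds = kernel_ridge_predict(K_new, labeled_idx, y[labeled_idx])
    acc = (np.sign(preds[10:]) == y[10:]).mean()
    print(f"accuracy on unlabeled points: {acc:.2f}")
```

The key design choice in this sketch is the eigenvalue transform: because it is computed from the full data set without using labels, it is an unsupervised step, while only the small labeled block of the redesigned kernel enters the final predictor.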