We propose an efficient forward regression algorithm based on greedy optimization of the marginal likelihood. It can be understood as a forward selection procedure that, at each step, adds the basis vector yielding the largest increase in the marginal likelihood. The computational cost of our algorithm is linear in the number n of training examples and quadratic in the number k of selected basis vectors, i.e. O(nk²). Moreover, our approach needs to store only a small fraction of the columns of the full design matrix. We compare our algorithm with the well-known Relevance Vector Machine (RVM), which also optimizes the marginal likelihood iteratively. The results show that our algorithm achieves comparable prediction accuracy with significantly better scaling in both computational cost and memory requirements.
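To make the selection criterion concrete, the following is a minimal sketch (not the authors' implementation) of greedy forward selection by marginal-likelihood increment for a Gaussian linear model. It assumes fixed, hand-chosen values for the noise variance sigma2 and the weight-prior precision alpha, whereas a full method would also optimize these; the names log_marginal and greedy_select and all parameter values are illustrative only. The naive loop below re-evaluates each candidate from scratch, while the proposed algorithm would use incremental updates to reach the stated O(nk²) cost.

```python
# Hypothetical sketch of greedy basis selection by marginal likelihood;
# not the paper's algorithm, which uses incremental updates instead of
# recomputing the objective for every candidate.
import numpy as np

def log_marginal(Phi_S, y, alpha=1.0, sigma2=0.1):
    """Log marginal likelihood log p(y | Phi_S) of a Gaussian linear model,
    computed in weight space so each evaluation costs O(n k^2) for the
    k currently selected columns."""
    n, k = Phi_S.shape
    A = alpha * np.eye(k)                      # prior precision on weights
    H = A + Phi_S.T @ Phi_S / sigma2           # posterior precision (k x k)
    L = np.linalg.cholesky(H)
    b = np.linalg.solve(L, Phi_S.T @ y / sigma2)
    # log|C| and y^T C^{-1} y via the determinant lemma and Woodbury identity,
    # where C = sigma2*I + Phi_S A^{-1} Phi_S^T is the marginal covariance.
    logdet_C = n * np.log(sigma2) - k * np.log(alpha) + 2 * np.sum(np.log(np.diag(L)))
    quad = y @ y / sigma2 - b @ b
    return -0.5 * (n * np.log(2 * np.pi) + logdet_C + quad)

def greedy_select(Phi, y, k_max):
    """Add, at each step, the basis vector giving the largest increase in
    log marginal likelihood; stop after k_max steps or when no candidate
    improves the objective."""
    selected, current = [], -np.inf
    for _ in range(k_max):
        best_j, best_val = None, current
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            val = log_marginal(Phi[:, selected + [j]], y)
            if val > best_val:
                best_j, best_val = j, val
        if best_j is None:                     # no increment possible
            break
        selected.append(best_j)
        current = best_val
    return selected

# Toy usage: recover 3 informative columns out of 50 random basis vectors.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((200, 50))
y = Phi[:, [3, 17, 41]] @ np.array([1.5, -2.0, 0.8]) + 0.3 * rng.standard_normal(200)
print(greedy_select(Phi, y, k_max=5))
```

Note that only the selected columns of Phi ever enter the k-by-k posterior computation, which is what allows an implementation to hold just a small subset of the design matrix in memory.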