Kernel machines rely on an implicit mapping of the data such that non-linear classification in the original space corresponds to linear classification in the new space. Because kernel machines are difficult to scale to large training sets, it has been proposed to perform an explicit mapping of the data and to learn linear classifiers directly in the new space. In this paper, we consider the problem of learning image categorizers on large image sets (e.g. > 100k images) using bag-of-visual-words (BOV) image representations and Support Vector Machine classifiers. We experiment with three approaches to BOV embedding: 1) kernel PCA (kPCA) [15], 2) a modified kPCA we propose for additive kernels and 3) random projections for shift-invariant kernels [14]. We report experiments on three datasets: Caltech101, VOC07 and ImageNet. An important conclusion is that simply square-rooting BOV vectors
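As a minimal sketch (not the authors' implementation), the two explicit embeddings named above can be illustrated as follows: element-wise square-rooting of L1-normalized BOV histograms, whose linear kernel equals the Bhattacharyya kernel between the original histograms, and random Fourier features approximating a shift-invariant (RBF) kernel in the style of [14]. Function names, dimensions, and the gamma parameter are illustrative assumptions.

```python
import numpy as np

def hellinger_embed(bov):
    """Square-root (Hellinger) embedding of BOV histograms.

    After L1-normalizing each histogram and taking the element-wise
    square root, the plain dot product between two embedded vectors
    equals the Bhattacharyya kernel sum_i sqrt(p_i * q_i).
    """
    bov = np.asarray(bov, dtype=float)
    bov = bov / np.maximum(bov.sum(axis=1, keepdims=True), 1e-12)
    return np.sqrt(bov)

def random_fourier_features(x, n_features=1024, gamma=1.0, seed=0):
    """Random projections for the shift-invariant RBF kernel.

    Produces z(x) such that z(x) @ z(y) approximates
    exp(-gamma * ||x - y||^2), so a linear classifier on z(x)
    approximates an RBF-kernel classifier on x.
    """
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Frequencies drawn from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ W + b)
```

Either embedding can then be fed to an off-the-shelf linear SVM solver, which scales far better with training-set size than a kernelized one.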