This paper introduces a novel kernel-based method for template tracking in video sequences. The method is derived for a general warping transformation, and its application to affine motion tracking is explored in detail. Our approach is based on maximizing the multi-kernel Bhattacharyya coefficient with respect to the warp parameters. We explicitly compute the gradient of the similarity functional and use a quasi-Newton procedure for the optimization. We also consider a simple extension of the method that incorporates an illumination-correction model, allowing tracking under varying lighting conditions. The resulting tracking procedure is evaluated on a number of examples, including large templates that track non-rigidly moving textured areas.
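
For concreteness, the sketch below illustrates the core idea under simplifying assumptions: the Bhattacharyya coefficient between a kernel-weighted candidate histogram p(theta) and the template histogram q is maximized over affine warp parameters theta with a quasi-Newton (BFGS) optimizer. All function names, the soft-binning histogram, and the Epanechnikov weights are illustrative choices, not the paper's implementation; in particular, the paper derives the gradient of the similarity functional analytically, whereas SciPy's BFGS here falls back to finite differences.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize

def affine_warp(pts, theta):
    """Warp template coordinates: x' = A x + t, theta = (a11, a12, a21, a22, tx, ty)."""
    A = theta[:4].reshape(2, 2)
    return pts @ A.T + theta[4:]

def kernel_histogram(img, pts, weights, bins=16):
    """Kernel-weighted intensity histogram with bilinear sampling and soft
    (linear) bin assignment, so the similarity varies smoothly with the warp."""
    h, w = img.shape
    x = np.clip(pts[:, 0], 0, w - 1.001)
    y = np.clip(pts[:, 1], 0, h - 1.001)
    x0, y0 = x.astype(int), y.astype(int)
    fx, fy = x - x0, y - y0
    vals = (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy)
            + img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)
    b = np.clip(vals * bins - 0.5, 0.0, bins - 1.0)   # fractional bin position
    b0 = np.minimum(b.astype(int), bins - 2)
    fb = b - b0
    hist = (np.bincount(b0, weights=weights * (1 - fb), minlength=bins)
            + np.bincount(b0 + 1, weights=weights * fb, minlength=bins))
    return hist / hist.sum()

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms."""
    return np.sum(np.sqrt(p * q))

# Synthetic textured frame and a 17x17 template grid in template coordinates.
rng = np.random.default_rng(0)
frame = gaussian_filter(rng.random((80, 80)), 2.0)
gx, gy = np.meshgrid(np.arange(-8, 9), np.arange(-8, 9))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)
weights = np.maximum(0.0, 1.0 - (grid ** 2).sum(1) / 8.0 ** 2)  # Epanechnikov kernel

# Template histogram q from the "true" warp (identity A, translation (40, 40)).
theta_true = np.array([1.0, 0.0, 0.0, 1.0, 40.0, 40.0])
q = kernel_histogram(frame, affine_warp(grid, theta_true), weights)

# Start from a perturbed warp and maximize the similarity (minimize its negative)
# with a quasi-Newton method; gradients here are numerical, whereas the paper
# computes the gradient of the similarity functional explicitly.
theta0 = theta_true + np.array([0.03, 0.0, 0.0, 0.03, 1.5, -1.0])
res = minimize(
    lambda th: -bhattacharyya(kernel_histogram(frame, affine_warp(grid, th), weights), q),
    theta0, method="BFGS")
print("recovered warp parameters:", np.round(res.x, 3))
print("similarity at solution:", -res.fun)
```

The soft bin assignment is the one design point worth noting: with hard binning the histogram is piecewise constant in theta and a gradient-based optimizer stalls, so each sample's kernel weight is split linearly between the two nearest bins to keep the objective differentiable almost everywhere.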