Past empirical work has shown that learning multiple related tasks simultaneously from data can yield better predictive performance than learning each task independently. In this paper we present an approach to multi-task learning based on minimizing regularization functionals similar to existing ones, such as that of Support Vector Machines (SVMs), which have been used successfully in the past for single-task learning. Our approach models the relation between tasks through a novel kernel function that uses a task-coupling parameter. We implement an instance of the proposed approach, similar to SVMs, and test it empirically on simulated as well as real data. The experimental results show that the proposed method performs better than existing multi-task learning methods and largely outperforms single-task learning using SVMs.

Categories and Subject Descriptors: I.2.6 [Artificial Intelligence]: Learning

General Terms: Algorithms, Theory

...
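To make the idea of a task-coupling kernel concrete, here is a minimal sketch of one kernel of this flavor. The specific form shown, K((x, s), (z, t)) = (1/mu + [s = t]) x·z, and the parameter name `mu` are illustrative assumptions; the abstract does not give the paper's exact formula.

```python
import numpy as np

def multitask_kernel(x, s, z, t, mu=1.0):
    """Sketch of a task-coupling kernel (illustrative form, not the
    paper's exact definition). A shared term 1/mu couples all tasks,
    while a task-specific term is active only when both inputs come
    from the same task. Small mu pools the tasks strongly; large mu
    makes the tasks behave almost independently."""
    base = float(np.dot(x, z))  # any valid single-task kernel could be used here
    return (1.0 / mu + (1.0 if s == t else 0.0)) * base

x = np.array([1.0, 2.0])
z = np.array([0.5, -1.0])
same_task = multitask_kernel(x, 0, z, 0, mu=2.0)  # inputs share a task
diff_task = multitask_kernel(x, 0, z, 1, mu=2.0)  # inputs from different tasks
```

Such a kernel can be plugged into a standard single-task learner (e.g. an SVM with a precomputed kernel matrix) over the pooled examples of all tasks, which is one way a multi-task method can reuse single-task machinery.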