Abstract. We give dimension-free and data-dependent bounds for linear multi-task learning, where a common linear operator is chosen to preprocess data for a vector of task-specific linear-thresholding classifiers. The complexity penalty of multi-task learning is bounded by a simple expression involving the margins of the task-specific classifiers, the Hilbert-Schmidt norm of the selected preprocessor and the Hilbert-Schmidt norm of the covariance operator for the total mixture of all task distributions, or, alternatively, the Frobenius norm of the total Gramian matrix for the data-dependent version. The results can be compared to state-of-the-art results on linear single-task learning.