The numerical solution of linear systems with certain tensor product structures is considered. Such structures arise, for example, from the finite element discretization of a linear PDE on a d-dimensional hypercube. Linear systems with tensor product structure can be regarded as linear matrix equations for d = 2, and for d > 2 they constitute the most natural extension of such matrix equations. A standard Krylov subspace method applied to such a linear system suffers from the curse of dimensionality: its computational cost grows exponentially with d. The key to breaking the curse is the observation that the solution can often be very well approximated by a vector of low tensor rank. We propose and analyse a new class of methods, so-called tensor Krylov subspace methods, which exploit this fact and attain a computational cost that grows linearly with d.
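For concreteness, a typical instance of the tensor product structure alluded to above (the notation $A_\mu$, $n_\mu$ below is illustrative and not taken from the abstract) is a system matrix given as a Kronecker sum of small coefficient matrices, one per coordinate direction:
$$
\mathcal{A} \;=\; \sum_{\mu=1}^{d} I_{n_d} \otimes \cdots \otimes I_{n_{\mu+1}} \otimes A_{\mu} \otimes I_{n_{\mu-1}} \otimes \cdots \otimes I_{n_1},
\qquad A_{\mu} \in \mathbb{R}^{n_{\mu} \times n_{\mu}}.
$$
The full matrix $\mathcal{A}$ has size $n_1 n_2 \cdots n_d$, which grows exponentially with $d$, whereas storing the factors $A_{\mu}$ costs only $\sum_{\mu} n_{\mu}^{2}$. For $d = 2$, the system $\mathcal{A}x = b$ is equivalent to the Sylvester-type matrix equation $A_1 X + X A_2^{\top} = B$ with $x = \operatorname{vec}(X)$ and $b = \operatorname{vec}(B)$, which is the sense in which such systems generalize linear matrix equations.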