To approximate convolutions which occur in evolution equations with memory terms, a variable-step-size algorithm is presented for which advancing N steps requires only O(N log N) operations and O(log N) active memory, in place of O(N²) operations and O(N) memory for a direct implementation. A basic feature of the fast algorithm is the reduction, via contour integral representations, to differential equations which are solved numerically with adaptive step sizes. Rather than the kernel itself, its Laplace transform is used in the algorithm. The algorithm is illustrated on three examples: a blow-up example originating from a Schrödinger equation with concentrated nonlinearity, chemical reactions with inhibited diffusion, and viscoelasticity with a fractional order constitutive law.
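As a minimal sketch of the reduction mentioned above (the notation $f$, $F$, $g$, $\Gamma$ is introduced here only for illustration and is not taken from the abstract): if the memory term is a convolution $\int_0^t f(t-\tau)\,g(\tau)\,d\tau$ and $F=\mathcal{L}f$ denotes the Laplace transform of the kernel, then inserting the inverse Laplace transform of $F$ along a contour $\Gamma$ in its domain of analyticity gives

\[
\int_0^t f(t-\tau)\,g(\tau)\,d\tau
  \;=\; \frac{1}{2\pi i}\int_{\Gamma} F(\lambda)\, y(t,\lambda)\, d\lambda,
\qquad\text{where}\quad
\partial_t y(t,\lambda) = \lambda\, y(t,\lambda) + g(t),\quad y(0,\lambda)=0 .
\]

Discretizing $\Gamma$ by a quadrature rule with a modest number of nodes $\lambda_k$ replaces the memory term by a weighted sum of solutions of scalar linear differential equations, which can then be advanced with adaptive step sizes; this is one way to read the "reduction to differential equations" described above.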