Caches have become invaluable in higher-end architectures, hiding, in part, the widening gap between processor speed and memory access times. While the effect of caches on the timing predictability of single real-time tasks has been the focus of much research, bounding the overhead of cache warm-up after preemptions remains a challenging problem, particularly for data caches. This paper makes multiple contributions. First, we bound the penalty of cache interference for real-time tasks by providing accurate predictions of data cache behavior across preemptions, including instruction cache and pipeline effects. For every task, we derive data cache reference patterns for all scalar and non-scalar references. We show that, when cache-related preemption delay is considered, the critical instant does not occur upon simultaneous release of all tasks. Second, we develop analysis methods to calculate tight upper bounds on the number of possible preemption points for each job of a task and consider the wo...
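
For context, a minimal sketch of the classical, coarse bound on preemption counts that analyses of this kind set out to tighten: under fixed-priority scheduling with implicit deadlines, each release of a higher-priority task within a job's response time can preempt it at most once. This is not the paper's tightened analysis, only the standard baseline it improves upon, and the task-set values below are illustrative assumptions, not data from the paper.

    import math

    def response_time(i, tasks):
        """Iterative response-time analysis for task i.
        tasks[j] = (WCET, period), sorted highest priority first.
        Returns None if task i misses its (implicit) deadline."""
        C, T = tasks[i]
        R = C
        while True:
            # Interference from each higher-priority task j: one full
            # execution per release of j inside the response window R.
            interference = sum(math.ceil(R / tasks[j][1]) * tasks[j][0]
                               for j in range(i))
            R_next = C + interference
            if R_next == R:
                return R
            if R_next > T:  # exceeds the implicit deadline
                return None
            R = R_next

    def max_preemptions(i, tasks):
        """Coarse upper bound on preemptions of one job of task i:
        count higher-priority releases within its response time."""
        R = response_time(i, tasks)
        if R is None:
            return None
        return sum(math.ceil(R / tasks[j][1]) for j in range(i))

    # Hypothetical task set: (WCET, period), highest priority first.
    tasks = [(1, 4), (2, 8), (3, 20)]
    print(max_preemptions(2, tasks))  # -> 3 for the lowest-priority task

Bounds of this form are pessimistic because not every higher-priority release actually preempts the job, and not every preemption point incurs the worst-case cache reload cost; tightening both aspects is what the analysis summarized above targets.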