Large-scale data analysis poses both statistical and computational problems which need to be addressed simultaneously. A solution is often straightforward if the data are homogeneous: one can use classical ideas of subsampling and mean aggregation to get a computationally efficient solution with acceptable statistical accuracy, where the aggregation step simply averages the results obtained on distinct subsets of the data. However, if the data exhibit inhomogeneities (and typically they do), the same approach will be inadequate, as it will be unduly influenced by effects that are not persistent across all the data due to, for example, outliers or time-varying effects. We show that a tweak to the aggregation step can produce an estimator of effects which are common to all data, and which are hence interesting for interpretation and often lead to better prediction than pooled effects.
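The subsample-and-mean-aggregate scheme described above can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the function name, the use of per-subset least squares, and the toy data are all assumptions made for the example.

```python
import numpy as np

def subsample_mean_aggregate(X, y, n_groups, seed=0):
    """Fit a least-squares estimate on each disjoint subset of the rows
    and average the resulting coefficient vectors (mean aggregation)."""
    n = X.shape[0]
    # Partition the rows into roughly equal, disjoint subsets.
    idx = np.array_split(np.random.default_rng(seed).permutation(n), n_groups)
    coefs = [np.linalg.lstsq(X[i], y[i], rcond=None)[0] for i in idx]
    # The aggregation step: a simple average of the per-subset estimates.
    return np.mean(coefs, axis=0)

# Toy *homogeneous* data, where mean aggregation works well:
rng = np.random.default_rng(1)
X = rng.normal(size=(600, 3))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.1 * rng.normal(size=600)

beta_hat = subsample_mean_aggregate(X, y, n_groups=6)
```

Each subset can be processed independently (and in parallel), which is the source of the computational savings; the failure mode discussed in the text arises when the subsets do not all share the same underlying coefficients.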