Although very widely used in unsupervised data mining, most clustering methods suffer from instability of the resulting clusters w.r.t. the initialization of the algorithm (as, e.g., in k-means). Here we show that this problem can be elegantly and efficiently tackled by meta-clustering the clusters produced in several different runs of the algorithm, especially if “soft” clustering algorithms (such as Nonnegative Matrix Factorization) are used both at the object and the meta level. The essential difference w.r.t. other meta-clustering approaches is that our algorithm detects frequently occurring sub-clusters (rather than complete clusters) in the various runs, which allows it to outperform existing algorithms. Additionally, we show how to perform two-way meta-clustering, i.e., take both the object and sample dimensions of clusters into account simultaneously, a feature that is essential, e.g., for biclustering gene expression data but has not been considered before.
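
A minimal sketch of the meta-clustering idea described above, assuming scikit-learn's NMF as the “soft” clustering algorithm at both the object and the meta level. The data matrix X, the number of clusters k, the number of runs, and the way meta-cluster labels are read off are illustrative assumptions; in particular, the paper's sub-cluster detection step is not reproduced here.

    # Sketch only: repeated NMF runs, followed by NMF on the collected
    # cluster membership profiles (not the paper's exact procedure).
    import numpy as np
    from sklearn.decomposition import NMF

    def meta_cluster(X, k=5, n_runs=10, seed=0):
        """Run NMF several times, then meta-cluster the resulting soft clusters."""
        rng = np.random.RandomState(seed)
        membership_profiles = []
        for _ in range(n_runs):
            # Object-level soft clustering: each column of W is one cluster's
            # degree-of-membership profile over the objects for this run.
            model = NMF(n_components=k, init='random',
                        random_state=rng.randint(1_000_000), max_iter=500)
            W = model.fit_transform(X)
            membership_profiles.extend(W[:, j] for j in range(k))

        # Meta level: stack the membership profiles from all runs
        # (objects x collected clusters) and factor them again with NMF,
        # so that recurring cluster patterns load on the same meta-cluster.
        C = np.column_stack(membership_profiles)
        meta = NMF(n_components=k, init='random', random_state=seed, max_iter=500)
        M = meta.fit_transform(C)          # objects x meta-clusters (soft)
        return M.argmax(axis=1)            # hard labels from soft memberships

    if __name__ == "__main__":
        X = np.abs(np.random.RandomState(0).randn(100, 20))  # toy nonnegative data
        print(meta_cluster(X)[:10])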