We propose novel algorithms for organizing large image and video datasets using both the visual content and the associated side information, such as time, location, and authorship. Earlier research has used side information as a pre-filter before visual analysis is performed, whereas we design a machine learning algorithm that models the joint statistics of the content and the side information. Our algorithm, Diverse-Density Contextual Clustering (D2C2), starts by finding unique patterns for each sub-collection sharing the same side information, e.g., scenes from winter. It then finds the common patterns that are shared among all subsets, e.g., persistent scenes across all seasons. These unique and common prototypes are found with Multiple Instance Learning and subsequent clustering steps. We evaluate D2C2 on two web photo collections from Flickr and one news video collection from TRECVID. Results show that not only are the visual patterns found by D2C2 intuitively salient across different s...
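To make the Multiple Instance Learning step concrete, the sketch below scores candidate prototype points with a diverse-density-style objective: instances from one sub-collection (e.g., winter photos) form the positive bags, the remaining sub-collections form the negative bags, and the highest-scoring point serves as a unique prototype for that sub-collection. This is a minimal, self-contained illustration in Python/NumPy under assumed feature representations and bag definitions, not the authors' exact D2C2 implementation; the names diverse_density and unique_prototype are illustrative.

```python
import numpy as np

def diverse_density(t, pos_bags, neg_bags, scale=1.0):
    """Diverse-density score of a candidate prototype t (noisy-or model).

    pos_bags / neg_bags are lists of (n_i, d) arrays of instance features.
    A high score means t is close to at least one instance in every
    positive bag and far from all instances in every negative bag.
    """
    def instance_prob(bag):
        # Pr(instance matches t) via a Gaussian-like kernel on squared distance.
        d2 = np.sum((bag - t) ** 2, axis=1)
        return np.exp(-scale * d2)

    log_dd = 0.0
    for bag in pos_bags:
        # Noisy-or: a positive bag should contain at least one matching instance.
        p = 1.0 - np.prod(1.0 - instance_prob(bag))
        log_dd += np.log(p + 1e-12)
    for bag in neg_bags:
        # A negative bag should contain no matching instance.
        p = np.prod(1.0 - instance_prob(bag))
        log_dd += np.log(p + 1e-12)
    return log_dd

def unique_prototype(target_bags, other_bags, scale=1.0):
    """Return the instance from the target sub-collection that maximizes
    diverse density, i.e., a pattern unique to that sub-collection."""
    candidates = np.vstack(target_bags)
    scores = [diverse_density(t, target_bags, other_bags, scale) for t in candidates]
    return candidates[int(np.argmax(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: "winter" bags share a pattern near +2, "summer" bags near -2.
    winter = [rng.normal(+2.0, 0.3, size=(5, 2)) for _ in range(4)]
    summer = [rng.normal(-2.0, 0.3, size=(5, 2)) for _ in range(4)]
    print("winter prototype:", unique_prototype(winter, summer))
```

A symmetric call that treats every sub-collection as a positive bag would, analogously, surface patterns common to all subsets, which the abstract describes as the input to the subsequent clustering steps.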