We introduce the Tsallis divergence as an error measure for pLSA matrix and tensor decompositions, showing much improved performance in the presence of noise. Our approach, on the one hand, provides an optimization framework that extends the Maximum Likelihood framework to a one-parameter family and, on the other hand, is theoretically guaranteed to be robust to clutter, noise, and outliers in the measurement matrix under certain conditions. Specifically, our approach excels when the measurement array (of co-occurrences) is sparse, as is the case in the "bag of visual words" application domain.
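To make the one-parameter-family claim concrete, here is a minimal numeric sketch of the standard Tsallis q-divergence, which recovers the KL divergence (the Maximum Likelihood objective) in the limit q -> 1; the function name and the limit tolerance are illustrative choices, not part of the paper.

```python
import numpy as np

def tsallis_divergence(p, r, q):
    """Tsallis q-divergence between discrete distributions p and r.

    Standard definition: D_q(p||r) = (sum_i p_i^q r_i^(1-q) - 1) / (q - 1),
    which tends to the KL divergence sum_i p_i log(p_i / r_i) as q -> 1.
    """
    p, r = np.asarray(p, dtype=float), np.asarray(r, dtype=float)
    if abs(q - 1.0) < 1e-12:
        # KL limit of the family (the Maximum Likelihood error measure)
        return float(np.sum(p * np.log(p / r)))
    return float((np.sum(p**q * r**(1.0 - q)) - 1.0) / (q - 1.0))
```

Varying q trades the ML objective (q = 1) against measures that down-weight large per-entry discrepancies, which is the source of the robustness to noise and outliers discussed above.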