Natural scene categorization is a highly useful task for automatic image analysis systems in a wide variety of applications. Several methods in the literature address this problem with excellent results. Typically, features of several types are clustered to build a vocabulary that efficiently represents the image collection under consideration. This vocabulary consists of a discrete set of visual codewords whose co-occurrence or composition allows the scene category to be classified. A common drawback of these methods is that features are usually extracted from the whole image, disregarding whether they derive from the scene to be classified or from objects that happen to be present in it but are independent of the scene. As perceptual studies indicate, features belonging to objects present in an image are not useful for scene categorization; rather, depending on their size, they introduce an important source of clutter. In this paper, a novel, multiscale, stat...
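The vocabulary-building step described above can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes generic 64-dimensional local descriptors and uses plain Lloyd's k-means (implemented with NumPy) to form the codewords, then represents an image as a histogram of codeword occurrences. All names and parameters here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_codebook(descriptors, k=8, iters=10):
    """Cluster pooled local descriptors into k visual codewords (Lloyd's k-means)."""
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].copy()
    for _ in range(iters):
        # assign each descriptor to its nearest center
        dist = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        # recompute each center as the mean of its assigned descriptors
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Represent one image as a histogram of codeword occurrences."""
    dist = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    return np.bincount(dist.argmin(axis=1), minlength=len(centers))

# toy data: pooled descriptors from a collection, then one image's descriptors
corpus = rng.normal(size=(500, 64))
codebook = build_codebook(corpus, k=8)
image_desc = rng.normal(size=(40, 64))
hist = bow_histogram(image_desc, codebook)
print(hist.shape, hist.sum())  # 8 bins, counts summing to 40 descriptors
```

In a real system, the random descriptors would be replaced by local features (e.g. extracted at interest points), and the resulting histograms would feed a classifier; the sketch only shows the codeword quantization itself.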