In this paper we study when the disclosure of data mining results constitutes, per se, a threat to the anonymity of the individuals recorded in the analyzed database. The novelty of our approach is that we focus on an objective definition of the privacy compliance of patterns, without any reference to preconceived knowledge of what is sensitive and what is not, based only on the intuitive and realistic constraint that the anonymity of individuals must be guaranteed. In particular, the problem addressed here arises from the possibility of inferring, from the output of frequent itemset mining (i.e., the set of itemsets with support larger than a threshold σ), the existence of patterns with very low support (smaller than an anonymity threshold k) [3]. In the following we develop a simple methodology to block such inference opportunities by introducing distortion into the dangerous patterns.
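To illustrate one such inference channel, the sketch below (Python, with hypothetical item names, support counts, and thresholds, not taken from the paper) shows how the support of a pattern containing negated items can be derived by inclusion-exclusion from the disclosed frequent-itemset supports alone; when the derived support falls below k, the pattern singles out fewer than k individuals even though every disclosed itemset is frequent.

```python
from itertools import combinations

def negated_pattern_support(supports, base, negated):
    """Infer supp(base AND NOT b1 AND ... AND NOT bn) from plain
    itemset supports via inclusion-exclusion:
        supp(X, ~b1, ..., ~bn) =
            sum over S subset of {b1, ..., bn} of (-1)^|S| * supp(X u S)
    `supports` maps frozensets of items to support counts.
    """
    total = 0
    for r in range(len(negated) + 1):
        for subset in combinations(negated, r):
            total += (-1) ** r * supports[frozenset(base) | frozenset(subset)]
    return total

# Hypothetical disclosed mining output: itemsets with support >= sigma = 70.
supports = {
    frozenset({"a"}): 80,
    frozenset({"b"}): 78,
    frozenset({"a", "b"}): 76,
}
k = 10  # hypothetical anonymity threshold

# supp(a & ~b) = supp(a) - supp(a, b) = 80 - 76 = 4 < k: the pattern
# covers only 4 transactions although both disclosed itemsets are frequent.
s = negated_pattern_support(supports, base={"a"}, negated=["b"])
if 0 < s < k:
    print(f"anonymity breach: pattern covers only {s} (< k = {k}) transactions")
```

Note that the inferred support depends only on the supports of the base itemset and its supersets within the disclosed collection, so the attacker needs no access to the raw database.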