We briefly survey several privacy compromises in published datasets, some historical and some on paper. An inspection of these suggests that the problem lies with the nature of the privacy-motivated promises in question. These are typically syntactic, rather than semantic. They are also ad hoc, with insufficient argument that fulfilling these syntactic and ad hoc conditions yields anything like what most people would regard as privacy. We examine two comprehensive, or ad omnia, guarantees for privacy in statistical databases discussed in the literature, note that one is unachievable, and describe implementations of the other.

In this note we survey a body of work, developed over the past five years, addressing the problem known variously as statistical disclosure control, inference control, privacy-preserving datamining, and private data analysis. Our principal motivating scenario is a statistical database. A statistic is a quantity computed from a sample. Suppose a trusted and trustwo...