We formulate and study a privacy guarantee for data owners, who share information with clients by publishing views of a proprietary database. The owner identifies the sensitive proprietary data using a secret query against the proprietary database. Given an additional view, the privacy guarantee ensures that potential attackers cannot learn any information about the secret that they could not already obtain from the existing views. We define “learning” as the modification of the attacker’s a priori probability distribution on the set of possible secrets. We assume arbitrary a priori distributions (including distributions that correlate the existence of particular tuples) and solve the problem when the secret and the views are expressed as unions of conjunctive queries with non-equalities, under integrity constraints. We consider guarantees (a) for given view extents, (b) for a given domain of the secret, and (c) independent of the domain and extents.
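As a sketch of the guarantee in case (a), the notation below is ours and not the paper’s exact formalization: writing $S$ for the secret query, $V_1,\dots,V_n$ for the existing views with published extents $v_1,\dots,v_n$, $V_{n+1}$ for the candidate extra view with extent $v_{n+1}$, and $P$ for the attacker’s a priori distribution over proprietary database instances, the extra view discloses nothing about the secret if, for every admissible prior $P$ and every possible secret answer $s$,
\[
  P\bigl(S = s \mid V_1 = v_1,\dots,V_n = v_n,\ V_{n+1} = v_{n+1}\bigr)
  \;=\;
  P\bigl(S = s \mid V_1 = v_1,\dots,V_n = v_n\bigr).
\]
Cases (b) and (c) strengthen this by quantifying over the possible extents as well, for a fixed domain of the secret and for arbitrary domains and extents, respectively.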