This work is at the intersection of two lines of research. One line, initiated by Dinur and Nissim, investigates the price, in accuracy, of protecting privacy in a statistical database. The second, growing from an extensive literature on compressed sensing (see in particular the work of Donoho and collaborators [14, 7, 13, 11]) and explicitly connected to error-correcting codes by Candès and Tao ([4]; see also [5, 3]), is the use of linear programming for error correction. Our principal result is the discovery of a sharp threshold ρ* ≈ 0.239, such that if ρ < ρ* and A is a random m × n encoding matrix of independently chosen standard Gaussians, where m = O(n), then with overwhelming probability over the choice of A, for all x ∈ R^n, LP decoding corrects ⌊ρm⌋ arbitrary errors in the encoding Ax, while decoding can be made to fail if the error rate exceeds ρ*. Our bound resolves an open question of Candès, Rudelson, Tao, and Vershynin [3] and (oddly, but explicably) refutes empirical conclusions of ...
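The LP decoding procedure referred to above can be made concrete: given a corrupted codeword y = Ax + e, where e is sparse, the decoder recovers x by minimizing the ℓ1 norm of the residual ‖y − Ax′‖₁, which is expressible as a linear program. The sketch below, which is an illustration under our own choice of dimensions and solver (SciPy's `linprog`) rather than the paper's construction, shows the standard reformulation with auxiliary variables t ≥ |y − Ax′|:

```python
import numpy as np
from scipy.optimize import linprog

def lp_decode(A, y):
    """Recover x from y = Ax + e (e sparse) by minimizing ||y - Ax'||_1.

    LP reformulation: variables are [x (n), t (m)];
    minimize sum(t) subject to -t <= y - Ax' <= t.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])          # objective: sum of t
    # y - Ax - t <= 0  ->  -Ax - t <= -y
    # Ax - y - t <= 0  ->   Ax - t <=  y
    A_ub = np.block([[-A, -np.eye(m)], [A, -np.eye(m)]])
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * n + [(0, None)] * m          # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
m, n = 40, 8                      # small illustrative sizes, not from the paper
A = rng.standard_normal((m, n))   # Gaussian encoding matrix
x = rng.standard_normal(n)
y = A @ x
y[[3, 11, 27]] += 10.0            # corrupt 3 of 40 entries (rate well below 0.239)

x_hat = lp_decode(A, y)
print(np.max(np.abs(x_hat - x)))  # typically recovers x to solver precision
```

With the error rate this far below the threshold, exact recovery is the typical outcome; as the fraction of corrupted entries approaches ρ*, recovery begins to fail, in line with the sharp-threshold behavior stated above.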