We consider two basic computational problems regarding discrete probability distributions: (1) approximating the statistical difference (a.k.a. variation distance) between two given distributions, and (2) approximating the entropy of a given distribution. Both problems are considered in two different settings. In the first setting the approximation algorithm is only given samples from the distributions in question, whereas in the second setting the algorithm is given the “code” of a sampling device (for the distributions in question). We survey the known results regarding both settings, noting that they are fundamentally different: the first setting is concerned with the number of samples required for determining the quantity in question, and is thus essentially information-theoretic. In the second setting the quantities in question are determined by the input, and the question is merely one of computational complexity. The focus of this survey is actually on the latter setting. In...
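For reference, the two quantities studied in the survey are assumed here to carry their standard definitions (the survey itself may use equivalent formulations). The statistical difference between distributions X and Y, and the entropy of X, are given in LaTeX form by:

  % statistical difference (total variation distance); standard definition assumed
  \[ \Delta(X,Y) \;=\; \max_{S}\,\bigl|\Pr[X\in S]-\Pr[Y\in S]\bigr| \;=\; \tfrac{1}{2}\sum_{z}\bigl|\Pr[X=z]-\Pr[Y=z]\bigr| \]
  % (Shannon) entropy; standard definition assumed
  \[ \mathrm{H}(X) \;=\; \sum_{z}\Pr[X=z]\cdot\log_{2}\frac{1}{\Pr[X=z]} \]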
Oded Goldreich, Salil P. Vadhan