We prove the first non-trivial (superlinear) lower bound in the noisy broadcast model, defined by El Gamal in [6]. In this model there are n + 1 processors P_0, P_1, ..., P_n, each of which is initially given a private input bit x_i. The goal is for P_0 to learn the value of f(x_1, ..., x_n), for some specified function f, using a series of noisy broadcasts. At each step a designated processor broadcasts one bit to all of the other processors, and the bit received by each processor is flipped with a fixed probability (independently for each recipient). In 1988, Gallager [16] gave a noise-resistant protocol that allows P_0 to learn the entire input with constant probability in O(n log log n) broadcasts. We prove that Gallager's protocol is optimal, up to a constant factor. Our lower bound follows by reduction from a lower bound for generalized noisy decision trees, a new model that may be of independent interest. For this new model we show a lower bound of Ω(n log n) on the depth of any tree that learns the entire input.
Navin Goyal, Guy Kindler, Michael E. Saks
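To make the broadcast model concrete, here is a minimal Python sketch (ours, not from the paper) of the naive repetition baseline: each processor broadcasts its bit Θ(log n) times and P_0 decodes each bit by majority vote, which already recovers the whole input with constant probability using O(n log n) broadcasts. Gallager's protocol is the cleverer O(n log log n) scheme that the paper shows is optimal. The noise rate NOISE and the names noisy_broadcast and naive_repetition_protocol are our own illustrative choices.

```python
import math
import random

NOISE = 0.1  # hypothetical fixed noise rate; each received bit flips independently


def noisy_broadcast(bit, n_receivers, rng):
    """One broadcast step: every listening processor receives `bit`,
    independently flipped with probability NOISE."""
    return [bit ^ (rng.random() < NOISE) for _ in range(n_receivers)]


def naive_repetition_protocol(x, repeats, rng):
    """Baseline (not Gallager's protocol): each P_i broadcasts x_i
    `repeats` times and P_0 decodes by majority vote. With
    repeats = Theta(log n), the per-bit decoding error drops below 1/n,
    so O(n log n) broadcasts suffice to learn the entire input with
    constant probability -- the bound Gallager improved to O(n log log n)."""
    n = len(x)
    decoded = []
    for bit in x:
        # All n other processors hear each broadcast; index 0 stands in
        # for what P_0 receives in each of the `repeats` broadcasts.
        votes = sum(noisy_broadcast(bit, n, rng)[0] for _ in range(repeats))
        decoded.append(1 if 2 * votes > repeats else 0)
    return decoded


rng = random.Random(0)
n = 64
x = [rng.randrange(2) for _ in range(n)]
repeats = 2 * math.ceil(math.log(n)) + 1  # odd, Theta(log n)
print(naive_repetition_protocol(x, repeats, rng) == x)  # True with high probability
```

A union bound over the n bits makes this precise: with per-bit error below 1/(2n), all n majority votes are simultaneously correct with probability at least 1/2. The paper's Ω(n log log n) lower bound shows that no protocol, however clever, can beat Gallager's savings over this repetition scheme by more than a constant factor.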