Recent years have seen a significant increase in our understanding of high-dimensional nearest neighbor search (NNS) for distances like the $\ell_1$ and $\ell_2$ norms. By contrast, our understanding of the $\ell_\infty$ norm is now where it was (exactly) 10 years ago.

In FOCS'98, Indyk proved the following unorthodox result: there is a data structure (in fact, a decision tree) of size $O(n^\rho)$, for any $\rho > 1$, which achieves approximation $O(\log_\rho \log d)$ for NNS in the $d$-dimensional $\ell_\infty$ metric.

In this paper, we provide results that indicate that Indyk's unconventional bound might in fact be optimal. Specifically, we show a lower bound for the asymmetric communication complexity of NNS under $\ell_\infty$, which proves that this space/approximation trade-off is optimal for decision trees and for data structures with constant cell-probe complexity.
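To unpack the trade-off in Indyk's bound, the following short derivation (a sketch with constants suppressed; not taken from the paper itself) instantiates $O(\log_\rho \log d)$ at two natural settings of $\rho$, using only the identity $\log_\rho x = \log x / \log \rho$:
\[
  \log_\rho \log d \;=\; \frac{\log\log d}{\log \rho}.
\]
\[
  \rho = 2:\quad \text{space } O(n^2), \qquad \text{approximation } O(\log\log d);
\]
\[
  \text{approximation } c = O(1):\quad \log\rho = \Theta\!\Bigl(\tfrac{\log\log d}{c}\Bigr)
  \;\Longrightarrow\; \rho = (\log d)^{\Theta(1/c)}, \qquad \text{space } n^{(\log d)^{\Theta(1/c)}}.
\]
Thus polynomial space yields a doubly logarithmic approximation, while any constant approximation already forces space superpolynomial in $n$.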