Consider an n-vertex graph G = (V, E) of maximum degree ∆, and suppose that each vertex v ∈ V hosts a processor. The processors are allowed to communicate only with their neighbors in G. The communication is synchronous, i.e., it proceeds in discrete rounds. In the distributed vertex coloring problem the objective is to color G with ∆ + 1, or slightly more than ∆ + 1, colors using as few rounds of communication as possible. (The number of rounds of communication will henceforth be referred to as running time.) Efficient randomized algorithms for this problem have been known for more than twenty years [1, 19]. Specifically, these algorithms produce a (∆ + 1)-coloring within O(log n) time, with high probability. On the other hand, the best known deterministic algorithm that runs in polylogarithmic time employs O(∆²) colors. This algorithm was devised in a seminal FOCS'87 paper by Linial [16]. Its running time is O(log* n). In the same paper Linial asked whether one can color ...