As ever-larger training sets for learning to rank are created, scalable learning has become increasingly important to sustaining improvements in ranking accuracy [2]. Exploiting the independence of "summation form" computations [3], we show how each iteration of gradient descent in ListNet [1] can benefit from parallel execution. We aim to draw the attention of the IR community to Spark [7], a recently introduced distributed cluster computing system, for reducing the training time of iterative learning-to-rank algorithms. Unlike MapReduce [4], Spark is especially well suited to iterative and interactive algorithms. Our results show a near-linear reduction in ListNet training time using Spark on Amazon EC2 clusters.

Categories and Subject Descriptors
H.3.3 [Information Search and Retrieval]

Keywords
Learning to Rank, Distributed Computing
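To make the summation-form parallelism concrete, the following is a minimal PySpark sketch (not the authors' implementation) of one way to compute ListNet's top-1 cross-entropy gradient as a parallel map over per-query data followed by a reduce that sums the per-query contributions. The input path, feature count, step size, and iteration count are illustrative assumptions.

    import numpy as np
    from pyspark import SparkContext

    def query_gradient(w, query):
        # Top-1 ListNet gradient for one query: X is an (n_docs, n_features)
        # feature matrix, y the relevance labels. Returns
        # sum_j (P_model(j) - P_target(j)) * x_j for this query.
        X, y = query
        s = X.dot(w)
        p_model = np.exp(s - s.max())
        p_model /= p_model.sum()
        p_target = np.exp(y - y.max())
        p_target /= p_target.sum()
        return (p_model - p_target).dot(X)

    if __name__ == "__main__":
        sc = SparkContext(appName="ListNetSketch")
        # Assumption: one pickled (X, y) pair per query at this hypothetical path.
        queries = sc.pickleFile("hdfs:///listnet/queries").cache()
        w = np.zeros(46)        # assumption: 46 features, LETOR-style data
        eta, iters = 0.01, 50   # assumption: step size and iteration count
        for _ in range(iters):
            # Per-query gradients are independent ("summation form"), so they
            # can be computed in parallel and summed with a single reduce.
            grad = queries.map(lambda q: query_gradient(w, q)).reduce(np.add)
            w = w - eta * grad

Caching the query data in cluster memory across gradient-descent iterations is precisely the property that makes Spark better suited than MapReduce to this kind of iterative computation, since MapReduce would reread the training set from disk on every pass.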