We study the problem of exploiting parallelism in search-based AI systems on distributed machines. We propose stack-splitting, a technique for implementing or-parallelism which, when coupled with appropriate scheduling strategies, leads to: (i) reduced communication during distributed execution; and (ii) distribution of larger grain-sized work to processors. The modified technique can also be implemented on shared-memory machines and should be quite competitive with existing methods. Indeed, an implementation has been carried out on shared-memory machines, and the results are reported here.