Before it can achieve wide acceptance, parallel computation must be made significantly easier to program. One of the main obstacles to this goal is the current usage of memory, both abstractly, by programmers, and concretely, by computer architects. In this paper, we present compiler technology for two novel computer architectures and discuss how, on the one hand, many traditional memory-based constraints on parallelism can be removed by the compiler and how, on the other hand, computer architecture (together with appropriate compiler components) can provide a truly transparent virtual distributed memory, moving both data distribution and scheduling into the hardware domain and relieving the programmer of these concerns.