Recent work on distributed RAM sharing has largely focused on leveraging low-latency networking technologies to optimize remote memory access. In contrast, we revisit the idea of RAM sharing on a commodity cluster with an emphasis on the prevalent Gigabit Ethernet technology. The main contribution of this paper is a practical solution, a distributed RAM disk (dRamDisk) with an adaptive read-ahead scheme, which demonstrates that spare RAM capacity can greatly benefit I/O-constrained applications. Specifically, our experiments show that sequential read/write operations can be sped up by approximately a factor of 3.5 relative to a commodity hard drive, and that for more random access patterns, such as those experienced on a server, the speedup can be much higher. Our experiments also demonstrate that this speedup is approximately 90% of what is practically achievable for the tested system.
Vassil Roussev, Golden G. Richard III, Daniel Ting