This paper focuses on non-strict processing, optimization, and partial evaluation of MPI programs that use incremental data structures (ISs). We describe the design and implementation of Distributed IS Software Caches (D-ISSC), which exploit spatial and temporal data locality while preserving the latency tolerance of the distributed IS memory system (D-IS). We show that D-IS and D-ISSC ease programming by relaxing synchronization requirements. Our experimental evaluation indicates that partial evaluation of local and remote memory accesses also significantly reduces traffic in the interconnection network and improves the speedup of both regular and irregular applications.
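To make the synchronization relaxation concrete, the sketch below illustrates, on a single node with pthreads, the element-level semantics an IS memory typically provides: each element carries a presence bit, a read issued before the corresponding write is deferred until the element is filled, and the application therefore needs no explicit producer/consumer synchronization. The `is_cell`, `is_write`, and `is_read` names are illustrative assumptions of ours; they are not the paper's D-IS or D-ISSC interface, which is distributed, split-phase, and backed by the software cache.

```c
/* Illustrative single-node sketch of IS-style non-strict access (not the
 * paper's D-IS/D-ISSC API): an element is written once, and a read that
 * arrives early is deferred until the element is present. */
#include <pthread.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  filled;
    int             present;   /* presence bit: 0 = empty, 1 = written */
    double          value;
} is_cell;

static void is_cell_init(is_cell *c) {
    pthread_mutex_init(&c->lock, NULL);
    pthread_cond_init(&c->filled, NULL);
    c->present = 0;
    c->value   = 0.0;
}

/* Single-assignment write: fill the element and wake any deferred readers. */
static void is_write(is_cell *c, double v) {
    pthread_mutex_lock(&c->lock);
    c->value   = v;
    c->present = 1;
    pthread_cond_broadcast(&c->filled);
    pthread_mutex_unlock(&c->lock);
}

/* Non-strict read: may be issued before the write; it is deferred (here:
 * blocks) until the element is present, so the application code needs no
 * explicit producer/consumer barrier or message ordering. */
static double is_read(is_cell *c) {
    pthread_mutex_lock(&c->lock);
    while (!c->present)
        pthread_cond_wait(&c->filled, &c->lock);
    double v = c->value;
    pthread_mutex_unlock(&c->lock);
    return v;
}

static is_cell cell;

static void *producer(void *arg) {
    (void)arg;
    is_write(&cell, 42.0);     /* producer fills the element whenever it is ready */
    return NULL;
}

int main(void) {
    pthread_t t;
    is_cell_init(&cell);
    pthread_create(&t, NULL, producer, NULL);
    /* consumer issues its read independently of producer progress */
    printf("consumer read %.1f\n", is_read(&cell));
    pthread_join(&t, NULL);
    return 0;
}
```

In a distributed setting the same deferral happens across nodes: a remote read that misses locally can be buffered by the software cache until the remote element arrives, which is the latency tolerance the abstract refers to.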