Data Grids provide geographically distributed resources for large-scale data-intensive applications that generate large data sets. However, ensuring efficient access to such huge and widely distributed data is hindered by the high latencies of the Internet. We address these challenges by employing intelligent replication and caching of objects at strategic locations. In our approach, replication decisions are based on a cost-estimation model that weighs the expected data access gains against the costs of creating and maintaining a replica. These costs in turn depend on factors such as runtime-accumulated read/write statistics, network latency, bandwidth, and replica size. To support large numbers of users whose data and processing needs change continuously, we introduce scalable replica distribution topologies that adapt replica placement to those needs. In this paper we present the design of our dynamic memory middleware and replication algorithm. To evaluate the p...
Houda Lamehamedi, Zujun Shentu, Boleslaw K. Szymanski
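
The cost-estimation idea described in the abstract, replicating an object only when the expected access-time savings outweigh the replica's creation and maintenance costs, can be illustrated with a minimal sketch. The formulas, the `ReplicaStats` fields, and the example numbers below are illustrative assumptions, not the authors' actual model.

```python
# Sketch of a gain-vs-cost replication decision, under assumed formulas.
from dataclasses import dataclass


@dataclass
class ReplicaStats:
    reads: int            # accumulated read count at the candidate site
    writes: int           # accumulated write count (updates must be propagated)
    latency_s: float      # round-trip latency to the nearest existing replica (s)
    bandwidth_bps: float  # available bandwidth to that replica (bytes/s)
    size_bytes: int       # size of the data object / replica


def transfer_time(size_bytes: int, latency_s: float, bandwidth_bps: float) -> float:
    """Estimated time to move the object over the link (latency + serialization)."""
    return latency_s + size_bytes / bandwidth_bps


def estimated_gain(s: ReplicaStats) -> float:
    """Access time saved if the accumulated reads were served locally."""
    return s.reads * transfer_time(s.size_bytes, s.latency_s, s.bandwidth_bps)


def estimated_cost(s: ReplicaStats) -> float:
    """Cost of creating the replica plus keeping it consistent under writes."""
    creation = transfer_time(s.size_bytes, s.latency_s, s.bandwidth_bps)
    maintenance = s.writes * transfer_time(s.size_bytes, s.latency_s, s.bandwidth_bps)
    return creation + maintenance


def should_replicate(s: ReplicaStats) -> bool:
    """Create a local replica only when the expected gain outweighs the cost."""
    return estimated_gain(s) > estimated_cost(s)


if __name__ == "__main__":
    # Hypothetical workload: read-heavy access to a 200 MB object over a slow WAN link.
    stats = ReplicaStats(reads=500, writes=20, latency_s=0.08,
                         bandwidth_bps=10e6, size_bytes=200_000_000)
    print(should_replicate(stats))  # True: savings from local reads dominate
```

In this reading, the runtime read/write statistics, latency, bandwidth, and replica size named in the abstract all enter the decision through the gain and cost estimates; the actual model and weighting used by the middleware are presented in the paper itself.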