SPIDER: a system for scalable, parallel / distributed evaluation of large-scale RDF data

RDF is a data model for representing labeled directed graphs, and it serves as an important building block of the Semantic Web. Owing to its flexibility and broad applicability, RDF has been adopted in applications such as the Semantic Web, bioinformatics, and social networks. In these applications, large-scale graph datasets are common, yet existing techniques do not manage them effectively. In this paper, we present SPIDER, a scalable and efficient query processing system for RDF data built on the well-known parallel/distributed computing framework Hadoop. SPIDER consists of two major modules: (1) the graph data loader and (2) the graph query processor. The loader analyzes and dissects the RDF data and places the resulting parts across multiple servers. The query processor parses the user query and distributes subqueries to the cluster nodes; the results of the subqueries from multiple servers are then gathered (and refined if necessary) and delivered to the user. Both modules utilize the MapRed...
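The abstract does not specify how the loader dissects the RDF graph or how subqueries are dispatched, so the sketch below is only a rough illustration of the kind of MapReduce job the loader module might run on Hadoop. The class name RdfSubjectPartitioner and the subject-based partitioning scheme are assumptions, not details from the paper: the job groups N-Triples lines by subject so that each subject's outgoing edges land in the same output partition.

// Hypothetical sketch: groups RDF triples (in N-Triples form) by subject
// with a single Hadoop MapReduce pass. Not taken from the SPIDER paper.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RdfSubjectPartitioner {

    // Map: parse a line "subject predicate object ." and emit (subject, full triple).
    public static class TripleMapper extends Mapper<Object, Text, Text, Text> {
        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString().trim();
            if (line.isEmpty()) return;
            String[] parts = line.split("\\s+", 2);   // split off the subject term
            if (parts.length < 2) return;
            context.write(new Text(parts[0]), new Text(line));
        }
    }

    // Reduce: all triples sharing a subject arrive together and are written
    // out as one partition-local adjacency group.
    public static class TripleReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text subject, Iterable<Text> triples, Context context)
                throws IOException, InterruptedException {
            for (Text t : triples) {
                context.write(subject, t);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "rdf subject partitioner");
        job.setJarByClass(RdfSubjectPartitioner.class);
        job.setMapperClass(TripleMapper.class);
        job.setReducerClass(TripleReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

A query-side job could, by analogy, map each triple pattern of a parsed query to the nodes holding the matching partitions and merge the partial results in the reduce phase, though the paper's actual query processor may work differently.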
Type Conference
Year 2009
Where CIKM
Publisher Springer
Authors Hyunsik Choi, Jihoon Son, YongHyun Cho, Min Kyoung Sung, Yon Dohn Chung