Sciweavers

21 search results for "Parallel Web Spiders for Cooperative Information Gathering" (page 1 of 5)
GCC 2005 (Springer)
Parallel Web Spiders for Cooperative Information Gathering
The Web spider is a widely used approach to obtaining information for search engines. As the size of the Web grows, it becomes a natural choice to parallelize the spider's crawling process...
Jiewen Luo, Zhongzhi Shi, Maoguang Wang, Wei Wang
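
The abstract above stops mid-sentence, but the idea it names, running several spider workers against a shared crawl frontier, is easy to sketch. Below is a minimal illustration in Python using only the standard library; it is not the authors' system, and the seed URLs, worker count, and page limit are placeholder assumptions.

import re
import threading
import urllib.request
from queue import Queue

LINK_RE = re.compile(r'href="(https?://[^"]+)"')  # crude link extraction

def crawl(seed_urls, max_pages=50, num_workers=4):
    """Breadth-first crawl with num_workers threads sharing one frontier."""
    frontier = Queue()
    for url in seed_urls:
        frontier.put(url)
    visited = set()
    lock = threading.Lock()

    def worker():
        while True:
            url = frontier.get()
            with lock:
                fresh = url not in visited and len(visited) < max_pages
                if fresh:
                    visited.add(url)
            if fresh:
                try:
                    page = urllib.request.urlopen(url, timeout=5)
                    html = page.read().decode("utf-8", "ignore")
                    for link in LINK_RE.findall(html):
                        frontier.put(link)  # feed discoveries back in
                except OSError:
                    pass  # unreachable page: skip it in this sketch
            frontier.task_done()

    for _ in range(num_workers):
        threading.Thread(target=worker, daemon=True).start()
    frontier.join()  # returns once every queued URL has been handled
    return visited

A real crawler would add robots.txt handling, politeness delays, and per-host rate limits; the point here is only the shared frontier plus worker threads.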
CIKM 2009 (Springer)
SPIDER: a system for scalable, parallel / distributed evaluation of large-scale RDF data
RDF is a data model for representing labeled directed graphs, and it is used as an important building block of the Semantic Web. Due to its flexibility and applicability, RDF has been...
Hyunsik Choi, Jihoon Son, YongHyun Cho, Min Kyoung...
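
The abstract describes RDF as labeled directed edges, and evaluating RDF data comes down to matching triple patterns against those edges. The toy Python sketch below shows that model with an invented ex: vocabulary; it illustrates the data model only and has nothing to do with SPIDER's actual scalable, distributed engine.

triples = {
    ("ex:alice", "ex:knows", "ex:bob"),      # edge: alice --knows--> bob
    ("ex:bob",   "ex:knows", "ex:carol"),
    ("ex:alice", "ex:name",  '"Alice"'),     # literal-valued edge
}

def match(pattern, data=triples):
    """Evaluate one triple pattern; None in any position is a wildcard."""
    return [t for t in data
            if all(want is None or got == want
                   for want, got in zip(pattern, t))]

print(match(("ex:alice", "ex:knows", None)))   # whom does alice know?
print(match((None, "ex:knows", "ex:carol")))   # who knows carol?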
DSS 2002
CI Spider: a tool for competitive intelligence on the Web
Competitive Intelligence (CI) aims to monitor a firm's external environment for information relevant to its decision-making process. As an excellent information source, the Internet...
Hsinchun Chen, Michael Chau, Daniel Dajun Zeng
HICSS 1999 (IEEE)
Collaborative Web Crawling: Information Gathering/Processing over Internet
The main objective of the IBM Grand Central Station (GCS) is to gather information in virtually any format (text, data, image, graphics, audio, video) from cyberspace...
Shang-Hua Teng, Qi Lu, Matthias Eichstaedt, Daniel...
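
The abstract is about crawlers cooperating on one gathering task. A common coordination technique for that, not necessarily the one GCS uses, is to hash-partition the URL space so each crawler owns a disjoint set of hosts and forwards the rest. A small Python sketch, with an assumed cluster size of four:

import hashlib
from urllib.parse import urlparse

NUM_CRAWLERS = 4  # assumed cluster size for this sketch

def owner(url):
    """Map a URL's host to the crawler responsible for it."""
    host = urlparse(url).netloc
    digest = hashlib.md5(host.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_CRAWLERS

for u in ["http://example.org/a", "http://example.org/b",
          "http://research.ibm.com/gcs"]:
    print(owner(u), u)  # pages from one host always go to one crawler

Hashing on the host rather than the full URL keeps each site's pages on a single crawler, which also makes politeness limits easier to enforce.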
IR 2000
Automating the Construction of Internet Portals with Machine Learning
Domain-specific Internet portals are growing in popularity because they gather content from the Web and organize it for easy access, retrieval, and search. For example, www.campsear...
Andrew McCallum, Kamal Nigam, Jason Rennie, Kristi...
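
The portal builders this paper describes rely on machine learning to decide which pages belong to the portal's topic. As a rough illustration of that classification step, here is a naive Bayes topic filter in Python with scikit-learn; the training snippets and labels are invented toy data, not the paper's corpora or features.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = [
    "summer camp for kids outdoor activities",     # toy on-topic page
    "stock market quarterly earnings report",      # toy off-topic page
    "children camping programs and registration",  # toy on-topic page
    "celebrity gossip and movie reviews",          # toy off-topic page
]
train_labels = [1, 0, 1, 0]  # 1 = belongs in the portal

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(train_texts),
                            train_labels)

candidate = ["weekend camp activities for children"]
print(model.predict(vectorizer.transform(candidate)))  # -> [1]: keep it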