In this paper, we study a novel problem, Collective Active Learning, in which we aim to select a batch of "informative" instances from a networked data set to query the user, in order to improve the accuracy of the learned classification model. We perform a theoretical investigation of the problem and present three criteria (i.e., minimum redundancy, maximum uncertainty, and maximum impact) to quantify the informativeness of a set of selected instances. We define an objective function based on the three criteria and present an efficient algorithm that optimizes the objective function with a bounded approximation ratio. Experimental results on real-world data sets demonstrate the effectiveness of the proposed approach.

Categories and Subject Descriptors
H.3.3 [Information Search and Retrieval]: Text Mining; I.2.6 [Artificial Intelligence]: Learning

General Terms
Algorithms, Experimentation

Keywords
collective active learning, link, document classification
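To fix intuition for how such a batch could be selected greedily, the following minimal sketch combines per-instance uncertainty and network impact with a redundancy penalty against already chosen instances. The additive scoring, the weights alpha/beta/gamma, and the input arrays are illustrative assumptions, not the paper's actual objective function or algorithm.

```python
import numpy as np

def greedy_batch_selection(uncertainty, similarity, influence,
                           batch_size, alpha=1.0, beta=1.0, gamma=1.0):
    """Illustrative greedy batch selection for active learning on a network.

    uncertainty : (n,) array, model uncertainty per instance (e.g., entropy)
    similarity  : (n, n) array, pairwise similarity (redundancy proxy)
    influence   : (n,) array, impact of an instance on its network neighbors
    The scoring below is a hypothetical stand-in for the paper's objective.
    """
    n = len(uncertainty)
    selected = []
    for _ in range(batch_size):
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # Redundancy penalty: similarity to instances already in the batch.
            redundancy = max((similarity[i, j] for j in selected), default=0.0)
            # Favor uncertain, high-impact instances that are not redundant.
            gain = alpha * uncertainty[i] + gamma * influence[i] - beta * redundancy
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return selected
```

When the gain of adding an instance is monotone and submodular in the selected set, a greedy procedure of this kind yields the classical (1 - 1/e) approximation guarantee; the bounded approximation ratio claimed in the paper is presumably of a similar flavor, though its exact form is defined by the paper's own objective.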