The recent proliferation of crowd computing initiatives on the Web calls for smarter methodologies and tools to annotate, query and explore repositories. There is a need for scalable techniques that can also return approximate results for a given query, in the form of a ranked set of promising alternatives. In this paper we concentrate on the annotation and retrieval of software components, exploiting semantic tagging based on Linked Open Data. We focus on DBpedia and propose a new hybrid methodology to rank resources that exploits: (i) the graph-based nature of the underlying RDF structure, (ii) context-independent semantic relations in the graph, and (iii) external information sources such as classical search engine results and social tagging systems. We compare our approach with other RDF similarity measures and validate our algorithm through an extensive evaluation involving real users.
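The abstract does not specify how the three signals are combined; as a minimal sketch, one could assume a weighted linear combination of normalized scores, as illustrated below. All names, weights and signal values here are hypothetical and serve only to illustrate the hybrid-ranking idea, not the paper's actual formulation.

```python
# Hypothetical sketch: combine three similarity signals in [0, 1]
# (graph-based, context-independent semantic, external sources) into
# one rank score via a weighted linear combination.

def hybrid_score(graph_sim: float,
                 semantic_sim: float,
                 external_sim: float,
                 weights=(0.4, 0.3, 0.3)) -> float:
    """Return the weighted combination of the three similarity signals."""
    w_g, w_s, w_e = weights
    return w_g * graph_sim + w_s * semantic_sim + w_e * external_sim

# Rank candidate DBpedia resources for a query by the combined score
# (the candidate scores below are made up for illustration).
candidates = {
    "dbpedia:Quicksort":  (0.8, 0.7, 0.9),   # (graph, semantic, external)
    "dbpedia:Merge_sort": (0.7, 0.6, 0.8),
}
ranking = sorted(candidates,
                 key=lambda r: hybrid_score(*candidates[r]),
                 reverse=True)
print(ranking)
```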