The growing volume of heterogeneous, distributed information on the WWW makes it increasingly difficult for existing tools to retrieve relevant information. To improve the performance of these tools, we address two aspects of the problem. The first concerns a better representation and description of WWW pages: we introduce the concept of a "WWW document" and describe such documents with metadata, using the Dublin Core semantics and the XML syntax. We suggest how this concept can improve information retrieval on the WWW and reduce the network load generated by robots. The second is a flexible architecture based on two kinds of robots, "generalists" and "specialists", that collect and organize these metadata in order to locate resources on the WWW. By exchanging their indices, these robots contribute to an overall self-organizing information process.
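As an illustration, a Dublin Core description of a WWW document might be expressed in XML as sketched below. The element names come from the Dublin Core element set; the RDF wrapping is one common convention for embedding Dublin Core in XML, and the URL and element values are purely hypothetical:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <!-- Hypothetical metadata record describing one WWW document -->
  <rdf:Description rdf:about="http://www.example.org/report.html">
    <dc:title>Annual Report</dc:title>
    <dc:creator>J. Smith</dc:creator>
    <dc:subject>information retrieval; metadata</dc:subject>
    <dc:format>text/html</dc:format>
    <dc:language>en</dc:language>
  </rdf:Description>
</rdf:RDF>
```

A robot could fetch such a record instead of the full page, which is how metadata of this kind can reduce the network load mentioned above.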