The field of information retrieval has seen over 50 years of research on retrieval methods for metadata descriptions and controlled indexing languages, the prototypical example being the library catalogue. It seems only natural to turn to additional data to improve book retrieval, such as the text of the book in whole or in part (table of contents, abstract), or social data contributed through crowdsourced social cataloguing sites such as LibraryThing. Without denying the potential value of such additional data, we challenge the underlying assumption that applying novel retrieval methods to traditional book descriptions cannot improve book retrieval. Specifically, this paper investigates the effectiveness of author rankings in a library catalogue. We show that a standard retrieval model yields a book ranking that matches and even exceeds the effectiveness of catalogue systems. We also show that expert finding methods can be used to obtain effective author rankings.