This paper presents a survey and an analysis of the XQuery benchmarks publicly available in 2006 -- XMach-1, XMark, X007, the Michigan benchmark, and XBench -- from different perspectives. We address three simple questions about these benchmarks: How are they used? What do they measure? What can one learn from using them? One focus of our analysis is to determine whether the benchmarks can be used for micro-benchmarking. Our conclusions are based on a usage analysis, on an in-depth analysis of the benchmark queries, and on experiments run on four XQuery engines: Galax, SaxonB, Qizx/Open, and MonetDB/XQuery.

Key words: XQuery, Benchmarks, Micro-benchmarks