We present methods to answer two basic questions that arise when benchmarking optimization algorithms. The first is: which algorithm is the `best' one? The second is: which algorithm should I use for my real-world problem? The two questions are connected, and neither is easy to answer. We present methods that can be used to analyse the raw data of a benchmark experiment and to derive insight into the answers to these questions. We apply these methods to the BBOB'09 benchmark results and present some initial findings.