In this paper we apply a state-of-the-art approach to object detection and localisation by incorporating local descriptors and their spatial configuration into a generative probability model. In contrast to recent semi-supervised methods, we do not utilise interest point detectors but apply a supervised approach in which local image features (landmarks) are annotated in a training set, so that their appearance and spatial variation can be learnt. Our method operates in a purely probabilistic search space, providing a MAP estimate of the object location, and, in contrast to recent methods, no background class needs to be formed. Using the training set we estimate pdfs for both the spatial constellation and the local feature appearance. By applying an inference bias that the largest pdf mode has probability one, we can combine prior information (the spatial configuration of the features) and observations (image feature appearance) into a posterior distribution which can be generati...
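As a rough illustration of the combination step described above (a minimal sketch, not the paper's actual implementation), the following NumPy snippet combines a hypothetical spatial-constellation prior and an appearance likelihood, both evaluated on a dense grid of candidate landmark locations, rescales the result so that the largest posterior mode has probability one, and returns the MAP location. The grid size and the random pdfs are placeholders; in practice both pdfs would be estimated from the annotated training set.

import numpy as np

# Hypothetical dense grids over candidate landmark locations (rows x cols):
# prior[i, j]      -- spatial-constellation pdf learnt from the annotated training set
# likelihood[i, j] -- appearance pdf of the local feature evaluated at location (i, j)
rng = np.random.default_rng(0)
prior = rng.random((64, 64))
likelihood = rng.random((64, 64))

# Unnormalised posterior: prior (spatial configuration) times observation (appearance)
posterior = prior * likelihood

# Inference bias: rescale so that the largest posterior mode has probability one
posterior /= posterior.max()

# MAP estimate of the landmark location
map_idx = np.unravel_index(np.argmax(posterior), posterior.shape)
print("MAP location (row, col):", map_idx, "posterior value:", posterior[map_idx])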