Object class models trained on hundreds or thousands of
images have been shown to enable robust detection. Transferring
knowledge from such models to new object classes trained
from a few, or even as few as one, training instance, however,
is still in its infancy. This paper designs a shape-based model
that allows knowledge to be transferred easily and explicitly
on three different levels: transfer of individual parts’ shape
and appearance information, transfer of local symmetry between
parts, and transfer of part topology. Due to the factorized
form of the model, either the complete model or only partial
knowledge corresponding to certain aspects of the model
can be transferred. The experiments
clearly demonstrate that the proposed model is competitive
with the state-of-the-art and enables both full and partial
knowledge transfer.