Semantic video adaptation makes it possible to transmit video content at different viewing qualities, depending on the relevance of the content from the user's viewpoint. To this end, an automatic annotation subsystem must be employed that detects relevant objects and events in the video stream. In this paper we present a composite framework composed of an automatic annotation engine and a semantics-based adaptation module. Three new compression solutions are proposed that work at the object or event level. Their performance is compared according to a new measure that takes into account the user's satisfaction and how it is affected by errors in the annotation module.