We propose a flexible summarization framework for team sport videos that integrates both knowledge about the displayed content (e.g., level of interest, type of view) and the individual narrative preferences of the user. Our framework builds on the partition of the original video sequence into independent segments, and creates local stories by considering multiple ways to render each segment. We discuss how to segment videos based on production principles, and design a benefit function to evaluate the candidate local stories of a segment. Summarization is then cast as the selection of local stories, formulated as a resource allocation problem, and Lagrangian relaxation is applied to find the optimum. We validate our framework through experiments on a soccer video.
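To make the selection step concrete, the following is a minimal sketch of the standard Lagrangian relaxation approach to this kind of resource allocation problem: each segment offers several candidate local stories (possibly including an empty one that skips the segment), each with a benefit and a duration, and a summary is built by maximizing total benefit under a duration budget. The Story class, the function names, and the bisection search over the multiplier are illustrative assumptions, not the paper's implementation.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Story:
    benefit: float   # benefit of rendering the segment as this local story
    duration: float  # seconds this story adds to the summary

def select_at_lambda(segments: List[List[Story]], lam: float) -> Tuple[List[int], float]:
    """For a fixed multiplier lam, pick in each segment the story maximizing benefit - lam * duration."""
    choices, total_duration = [], 0.0
    for options in segments:
        best = max(range(len(options)),
                   key=lambda j: options[j].benefit - lam * options[j].duration)
        choices.append(best)
        total_duration += options[best].duration
    return choices, total_duration

def summarize(segments: List[List[Story]], budget: float, iters: int = 50) -> List[int]:
    """Bisect on the Lagrange multiplier until the summary duration meets the budget."""
    lo, hi = 0.0, 1e3
    choices, dur = select_at_lambda(segments, lo)
    if dur <= budget:            # budget already satisfied with no penalty on duration
        return choices
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        choices, dur = select_at_lambda(segments, mid)
        if dur > budget:
            lo = mid             # summary too long: penalize duration more
        else:
            hi = mid
    return select_at_lambda(segments, hi)[0]

Because the relaxed problem decouples across segments, each choice is made locally for a given multiplier, and the one-dimensional search over the multiplier enforces the global duration constraint.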