We describe a real-time musical agent that generates an audio drum track by concatenating audio segments automatically extracted from pre-existing musical files. The drum track can be controlled in real time by specifying high-level properties (or constraints) holding on metadata automatically extracted from the audio segments. A constraint-satisfaction mechanism, based on local search, selects the audio segments that best match these constraints at any given time. We report on several drum-track audio descriptors designed for the system. We also describe a basic mechanism for controlling the tradeoff between the agent’s autonomy and its reactivity, which we illustrate with experiments conducted in the context of a virtual duet between the system and a human pianist.
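To make the selection mechanism concrete, the following is a minimal sketch, not the paper's actual implementation: it assumes each segment carries a metadata dictionary and each constraint is a cost function over that metadata, and it selects the segment that minimizes total constraint violation via a simple randomized local search. All names (`Segment`, the constraint signature, the iteration budget) are hypothetical.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Segment:
    audio: bytes                                   # raw audio of the extracted segment
    metadata: dict = field(default_factory=dict)   # descriptors, e.g. {"energy": 0.7}

def total_cost(segment, constraints):
    """Sum the violation costs of all constraints for one candidate segment."""
    return sum(c(segment.metadata) for c in constraints)

def local_search(segments, constraints, iterations=100):
    """Pick the segment minimizing constraint violation by sampling random
    candidates from the database and keeping the best one seen so far
    (a toy stand-in for the real-time constraint-satisfaction mechanism)."""
    best = random.choice(segments)
    best_cost = total_cost(best, constraints)
    for _ in range(iterations):
        candidate = random.choice(segments)
        cost = total_cost(candidate, constraints)
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best

# Usage: constrain the drum track to stay near a target energy level.
segments = [Segment(b"", {"energy": e / 10}) for e in range(11)]
energy_constraint = lambda meta: abs(meta["energy"] - 0.8)
chosen = local_search(segments, [energy_constraint])
print(chosen.metadata)   # metadata of the best-matching segment
```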