—We present a method for segmenting music into sections with different grid levels in order to properly quantize note values in the transcription of music. This method can be used in automatic music transcription and music information retrieval systems to reduce a performance of a musical piece to a printed or digital score. The system takes only the onset data of the performed music, from either MIDI or audio, and determines the best maximal grid level onto which to fit the note onsets. This maximal grid level, or tatum, is allowed to vary from section to section within a piece. We obtain the optimal segmentation of the piece using dynamic programming. We present results from an audio-based performance of Milhaud’s Botafogo, as well as several MIDI performances of the Rondo-Allegro from Beethoven’s Pathetique. The results show a reduction in error compared to quantization based on only one global metric level, and promise to create rhythm transcriptions that are parsimonious and readable.
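To make the kind of dynamic program described above concrete, the following Python sketch partitions a list of onset times (in beats) into sections and assigns each section one tatum, minimizing quantization error plus penalties that favor coarser grids and fewer sections. This is a minimal illustration only: the names (`TATUM_LEVELS`, `GRID_PENALTY`, `SWITCH_PENALTY`, `quantization_error`) and the specific cost terms are assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of per-section tatum selection via dynamic programming.
# All constants and cost terms below are illustrative assumptions.
from functools import lru_cache

TATUM_LEVELS = [2, 3, 4, 6, 8]   # candidate grid subdivisions per beat (assumed)
GRID_PENALTY = 0.01              # assumed penalty per onset for finer grids
SWITCH_PENALTY = 0.05            # assumed cost for starting a new section
MIN_SEGMENT = 4                  # assumed minimum number of onsets per section


def quantization_error(onsets, tatum):
    """Total distance from each onset (in beats) to its nearest grid point."""
    step = 1.0 / tatum
    return sum(abs(t - round(t / step) * step) for t in onsets)


def best_segmentation(onsets):
    """Dynamic program over section boundaries; each section gets one tatum."""
    n = len(onsets)

    @lru_cache(maxsize=None)
    def solve(start):
        # Best (cost, sections) covering onsets[start:].
        if start >= n:
            return 0.0, ()
        best = None
        # Allow a short final section if fewer than MIN_SEGMENT onsets remain.
        for end in range(min(start + MIN_SEGMENT, n), n + 1):
            segment = onsets[start:end]
            for tatum in TATUM_LEVELS:
                seg_cost = (quantization_error(segment, tatum)
                            + GRID_PENALTY * tatum * len(segment)
                            + SWITCH_PENALTY)
                rest_cost, rest_sections = solve(end)
                total = seg_cost + rest_cost
                if best is None or total < best[0]:
                    best = (total, ((start, end, tatum),) + rest_sections)
        return best

    return solve(0)


if __name__ == "__main__":
    # Toy performance: first phrase near an eighth-note grid, second near triplets.
    onsets = [0.02, 0.49, 1.01, 1.52, 2.01, 2.35, 2.68, 3.01, 3.34, 3.66]
    cost, sections = best_segmentation(onsets)
    for start, end, tatum in sections:
        print(f"onsets {start}-{end - 1}: tatum = {tatum} per beat")
```

The penalty on finer grids stands in for whatever parsimony term the full method uses; without some such term, the finest grid would always win, and the transcription would not be readable.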