Social annotation systems such as SparTag.us and del.icio.us are designed to encourage individual reading and marking behaviors that, when shared, accumulate into collective knowledge spaces. Prior work reported the experimental design and performance effects observed in a controlled study of SparTag.us, in which participants working independently on a sensemaking task with access to a set of expert annotations were compared against participants using SparTag.us without those annotations and against participants supported only by standard office software. A learning effect favored the participants exposed to expert annotations. In this paper, we analyze the behavioral data captured during the experiment and identify differences in work process that can explain the previously reported performance effects.