We present conditions under which verb phrases are elided, derived from a corpus of positive and negative examples. Factors that affect verb phrase ellipsis include the distance between the antecedent and the ellipsis site, the syntactic relation between antecedent and ellipsis site, and the presence or absence of adjuncts. Building on these results, we examine where in the generation architecture a trainable algorithm for VP ellipsis should be located. We show that the best performance (an error rate of 7.5%) is achieved when the trainable module is located after the realizer and has access to surface-oriented features.
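To make the idea of a trainable ellipsis module concrete, the following is a minimal sketch of a feature-based classifier over the kinds of features the abstract names (antecedent distance, syntactic relation, adjunct presence). The feature names, the toy training instances, and the use of a decision tree are all illustrative assumptions, not the paper's actual model or data.

```python
# Hypothetical sketch of a trainable VP-ellipsis decision module.
# Feature names and training examples are invented for illustration;
# the real module would be trained on the paper's annotated corpus.
from sklearn.tree import DecisionTreeClassifier

# Surface-oriented features for a candidate ellipsis site:
#   distance_in_clauses  - distance between antecedent and ellipsis site
#   is_coordinated       - antecedent and site stand in a coordination relation
#   has_adjunct          - the candidate VP carries an adjunct
FEATURES = ["distance_in_clauses", "is_coordinated", "has_adjunct"]

X_train = [
    [1, 1, 0],  # adjacent, coordinated, no adjunct
    [1, 1, 1],  # adjacent, coordinated, adjunct present
    [3, 0, 0],  # distant, not coordinated
    [1, 0, 0],  # adjacent, not coordinated
]
y_train = ["elide", "full", "full", "full"]  # toy labels

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

# Decide whether to elide the VP at a new candidate site.
print(clf.predict([[1, 1, 0]]))  # e.g. ['elide']
```

Because the features here are computed over the realized surface string and its structure, such a module would naturally sit after the realizer, which is consistent with the configuration the abstract reports as best-performing.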