The convolution tree kernel has shown promising results in semantic role labeling (SRL). However, this kernel incorporates little linguistic knowledge into its design and performs only hard matching between sub-trees. To overcome these constraints, this paper proposes a grammar-driven convolution tree kernel for SRL that introduces more linguistic knowledge. Compared with the standard convolution tree kernel, the proposed grammar-driven kernel has two advantages: 1) grammar-driven approximate substructure matching, and 2) grammar-driven approximate tree node matching. These two approximate matching mechanisms enable the proposed kernel to better explore linguistically motivated structured knowledge. Experiments on the CoNLL-2005 SRL shared task and the PropBank I corpus show that the proposed kernel significantly outperforms the standard convolution tree kernel. Moreover, we present a composite kernel to integrate a feature-based polynomial kernel and the proposed grammar-driven convolution...
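
To make the "hard matching" constraint concrete, the following is a sketch of the standard convolution tree kernel that this work builds on (the Collins-and-Duffy formulation); the notation here (N_1, N_2 for the node sets of the two trees, \lambda for the decay factor, nc and ch for the child count and the j-th child of a node) is assumed for illustration rather than taken from this section:

\[
K(T_1, T_2) = \sum_{n_1 \in N_1} \sum_{n_2 \in N_2} \Delta(n_1, n_2),
\]
\[
\Delta(n_1, n_2) =
\begin{cases}
0 & \text{if the productions at } n_1 \text{ and } n_2 \text{ differ},\\
\lambda & \text{if the productions are identical and } n_1, n_2 \text{ are pre-terminals},\\
\lambda \prod_{j=1}^{nc(n_1)} \bigl(1 + \Delta(ch(n_1, j), ch(n_2, j))\bigr) & \text{otherwise}.
\end{cases}
\]

The first case is the hard-matching constraint referred to above: two sub-trees contribute to the kernel only if their productions and node labels match exactly. The grammar-driven kernel described in this paper relaxes that case so that grammatically licensed approximate matches between substructures and between node labels can also contribute, presumably discounted by additional decay factors.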