Repair or error-recovery strategies are an important design issue in Spoken Dialogue Systems (SDSs): how should the dialogue be conducted when no progress is being made (e.g. due to repeated ASR errors)? Nearly all current SDSs use hand-crafted repair rules; a more robust approach is to learn data-driven dialogue strategies with Reinforcement Learning (RL). However, current RL approaches are usually tested only in simulation, and they use small state spaces which do not contain linguistically motivated features such as “Dialogue Acts” (DAs). We show that a strategy learned with DA features outperforms hand-crafted and slot-status strategies when tested with real users (+9% average task completion, p < 0.05). We then explore how using DAs produces better repair strategies, e.g. focus-switching. We show that DAs are useful in deciding both when to use a repair strategy and which one to use.