The development of a speech translation (ST) system is costly, largely because collecting parallel data is expensive. A new language pair is typically only considered in the aftermath of an international crisis that creates a major need for crosslingual communication. Urgency justifies the deployment of interpreters while data is being collected. In recent work, we have shown that audio recordings of interpreter-mediated communication can serve as a low-cost data resource for the rapid development of automatic text and speech translation. However, our previous experiments were limited to English/Spanish simultaneous interpretation. In this work, we examine our approaches for exploiting interpretation audio as translation model training data in the context of English/Pashto consecutive interpretation. We show that our previous findings remain valid, despite the more complex language pair and the additional challenges introduced by the severe resource limitations of Pashto.