Empirical studies of multitask learning provide some evidence that the performance of a learning system on its intended targets improves when related tasks, also called contexts, are presented to the learning system as additional input. Angluin, Gasarch, and Smith, as well as Kinber, Smith, Velauthapillai, and Wiehagen, have provided mathematical justification for this phenomenon in the inductive inference framework. However, their proofs rely heavily on self-referential coding tricks; that is, they directly code the solution of the learning problem into the context. Fulk has shown that for the Ex- and Bc-anomaly hierarchies, such results, which rely on self-referential coding tricks, do not hold robustly. In this work we analyze robust versions of learning aided by context and show that, in contrast to Fulk's result above, context also aids learning robustly. We also study the difficulty of the functional dependence between the intended target tasks and useful associated contexts.