In their pioneering work, Mukouchi and Arikawa modeled a learning situation in which the learner is expected to refute texts that are not representative of L, the class of languages being identified. Lange and Watson extended this model to justified refutation, in which the learner is expected to refute a text only if it contains a finite sample unrepresentative of the class L. Both of the above studies were in the context of indexed families of recursive languages. We extend this study in two directions. First, we consider general classes of recursively enumerable languages. Second, we allow the machine to either identify or refute the unrepresentative texts (respectively, texts containing finite unrepresentative samples). We observe some surprising differences between our results and those obtained by Lange and Watson for learning indexed families.