Abstract— An ontology is a formal language that adequately represents the knowledge used for reasoning in a specific environment. When contradictions arise and render an ontology inadequate, revision is currently a very difficult and time-consuming task. We suggest the design of rational agents to assist scientists in ontology building through the removal of contradictions. These machines, in line with Angluin's "learning from different teachers" paradigm, learn to manage applications in place of users. Rational agents have several interesting cognitive faculties: a kind of identity, consciousness of their behaviour, dialectical control of logical contradictions in a learned theory that respects a given ontology, and an aptitude for proposing ontology revisions. In this paper, we present an experimental scientific game, Eleusis+Nobel, as a framework outlining this new approach, i.e., automated assistance to scientific discovery. We show that rational agents are generic enough to support the ontol...