Argumentation theory has become an important topic in the field of AI. The basic idea is to construct arguments for and against a statement, to select the “acceptable” ones and, finally, to determine whether the original statement can be accepted or not. Several argumentation systems have been proposed in the literature. Some of them, the so-called rule-based systems, use a particular logical language with strict and defeasible rules. While these systems are useful in different domains (e.g. legal reasoning), they can unfortunately lead to very unintuitive results, as discussed in this paper. In order to avoid such anomalies, we are interested in defining principles, called rationality postulates, that can be used to judge the quality of a rule-based argumentation system. In particular, we define two important rationality postulates that should be satisfied: the consistency and the closure of the results returned by such a system. We then provide a relatively...
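
As a rough illustration of the two postulates, one standard way to state them is sketched below; the notation ($E$ for a set of acceptable arguments, $\mathtt{Concs}(E)$ for its set of conclusions, $\mathcal{R}_s$ for the strict rules) is assumed here and is not necessarily that of the paper.

```latex
% Hedged sketch of the two rationality postulates; E, Concs and R_s are assumed notation.
\begin{itemize}
  \item \textbf{Closure:} $\mathtt{Concs}(E) = Cl_{\mathcal{R}_s}\!\big(\mathtt{Concs}(E)\big)$,
        i.e.\ the set of conclusions drawn from the acceptable arguments is closed
        under application of the strict rules.
  \item \textbf{Consistency:} $\mathtt{Concs}(E)$ is consistent,
        i.e.\ it contains no formula together with its negation.
\end{itemize}
```

Intuitively, closure rules out systems that accept some premises of a strict rule without accepting its conclusion, and consistency rules out systems whose accepted conclusions contradict one another.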