We describe a framework dedicated to the study of, and experimentation with, the relationship between the rational reasoning process of an artificial agent and its psychological counterpart, namely its behavioral reasoning process. This work focuses on the domain of Assistant Conversational Agents, software tools that provide various kinds of assistance to members of the general public interacting with computer-based applications or services. In this context, we show through examples how such agents must exhibit both rational reasoning about the system's functioning and human-like, believable dialogical interaction with users.