Typical conversational recommender systems support interaction strategies that are hard-coded in advance and followed rigidly during a recommendation session. Reinforcement Learning techniques, in contrast, can be used to autonomously learn an optimal (user-adaptive) strategy by exploiting information encoded as features of a state representation. In this regard, it is important to determine the set of relevant state features for a given recommendation task. In this paper, we address the issue of feature relevance and assess the effect of adding four different features to a baseline state representation. We show that adding a feature is not always beneficial, and that its relevance can be influenced by user behavior. The results motivate applying our approach online, in order to acquire the right mixture of online user behavior for addressing the relevance problem.

1 Background and Motivation

Recommender systems [Resnick and Varian, 1997] a...