Abstract. In this paper we propose an algorithm based on fuzzy similarity which models the concept of resemblance between facial expressions of an Embodied Conversational Agent. In our approach, slightly different expressions are described with one significant label. The algorithm allows us to process these expressions and to establish their degree of visual resemblance. We also present an evaluation study in which we compared users' perception of the similarity of facial expressions. Finally, we describe an example of the application of this algorithm to the generation of complex facial expressions of an ECA.