A two-phase procedure based on biosignal recordings is applied to classify the emotional valence content of human-agent interactions. In the first phase, participants are exposed to a set of pictures with known valence values, taken from the International Affective Picture System (IAPS), and classifiers are trained on the physiological data recorded. In the second phase, biosignals are recorded from each participant while interacting with an embodied conversational agent (ECA), and the classifiers trained in the first phase are applied to these new data. The results of the procedure are promising; they are discussed together with the problems encountered and suggestions for future improvement.
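To make the two-phase procedure concrete, the following is a minimal sketch in Python. It is not the paper's actual implementation: the feature dimensionality, the choice of a support-vector machine, and the binarization of IAPS valence ratings into negative/positive classes are all assumptions made for illustration, and the arrays stand in for features extracted from the recorded biosignals.

```python
# Minimal sketch of the two-phase valence-classification procedure.
# All data below are synthetic placeholders; in the study, features
# would be derived from the recorded biosignals and labels from the
# normative IAPS valence ratings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# --- Phase 1: picture exposure (training) -------------------------
# One feature vector per IAPS picture trial; the picture's normative
# valence is binarized into negative (0) vs. positive (1). The IAPS
# valence scale runs from 1 to 9.
X_train = rng.normal(size=(120, 8))        # 120 trials x 8 biosignal features (assumed)
valence_ratings = rng.uniform(1, 9, 120)   # placeholder normative ratings
y_train = (valence_ratings > 5).astype(int)

# Classifier choice (RBF-kernel SVM) is an assumption for this sketch.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

# --- Phase 2: ECA interaction (application) -----------------------
# Feature vectors computed from biosignals recorded during the
# human-agent interaction; the phase-1 classifier labels each segment.
X_eca = rng.normal(size=(30, 8))           # 30 interaction segments (assumed)
predicted_valence = clf.predict(X_eca)
print(predicted_valence)                   # 0 = negative, 1 = positive valence
```

The key design point the sketch illustrates is that the classifier is trained only on stimuli with known ground-truth valence (phase 1) and then transferred, without retraining, to the unlabeled interaction data (phase 2).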