In this paper we evaluate the effect of a speaker's emotional state on text-independent speaker identification. Mel-frequency cepstral coefficients (MFCCs) are used as spectral features, and Gaussian mixture models (GMMs) are employed to train the speaker models and to test the system. The experiments are performed on the Berlin emotional speech database, which contains 10 speakers recorded in different emotional states: happiness, anger, fear, boredom, sadness, and neutral. The results show that the emotional state has a significant influence on text-independent speaker identification. Finally, we propose a possible solution to this problem.
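The MFCC/GMM identification pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: synthetic feature vectors stand in for MFCC frames extracted from real speech, one GMM is fit per enrolled speaker, and a test utterance is assigned to the speaker whose model yields the highest average log-likelihood.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for 13-dimensional MFCC frames: one well-separated
# cluster of feature vectors per speaker (a real system would compute
# MFCCs from the audio of each utterance).
n_speakers, n_frames, n_mfcc = 3, 500, 13
speaker_means = rng.normal(scale=5.0, size=(n_speakers, n_mfcc))
train = {s: speaker_means[s] + rng.normal(size=(n_frames, n_mfcc))
         for s in range(n_speakers)}

# Enrollment: fit one GMM per speaker on that speaker's training frames.
models = {s: GaussianMixture(n_components=4, random_state=0).fit(X)
          for s, X in train.items()}

def identify(frames):
    """Return the speaker whose GMM gives the highest average log-likelihood."""
    scores = {s: gmm.score(frames) for s, gmm in models.items()}
    return max(scores, key=scores.get)

# A test utterance drawn from speaker 1's feature distribution.
test_frames = speaker_means[1] + rng.normal(size=(200, n_mfcc))
print(identify(test_frames))  # -> 1
```

In this toy setup the clusters are stationary, so identification is easy; the mismatch studied in the paper arises because emotion shifts a speaker's feature distribution away from the one the model was trained on.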