This paper describes the participation of the LIA in the Human Assisted Speaker Recognition (HASR) task of the NIST-SRE 2010 evaluation campaign and its extension to a larger number of listeners. Human performance in these unfavorable conditions is analyzed in relation to the decisions of an automatic speaker recognition system. Results of the perception test showed large inter-trial variability (from 3% to 90% correct answers for non-target trials), whereas no significant difference was found between experienced and inexperienced listeners. Some complementarity between the speaker verification system and human decisions was also observed.