The 2010 NIST Speaker Recognition Evaluation (SRE10) included a test of Human Assisted Speaker Recognition (HASR) in which systems based in whole or in part on human expertise were evaluated on limited sets of trials. Participation in HASR was optional, and sites could take part in it without participating in the main evaluation of fully automatic systems. Two HASR trial sets were offered: HASR1, consisting of 15 trials, and HASR2, a 150-trial set of which the HASR1 trials were a subset. Results were submitted for 20 systems from 15 sites in 6 countries. The trial sets were carefully selected, by a process combining automatic processing and human listening, to include particularly challenging trials. The performance results suggest that the chosen trials were indeed difficult, and the HASR systems did not appear to perform as well on these trials as the best fully automatic systems.
Craig S. Greenberg, Alvin F. Martin, George R. Doddington