Abstract. This paper addresses the ongoing discussion on the factors influencing automated essay scoring with latent semantic analysis (LSA). We contribute to this discussion by presenting evidence for the effects of four parameters on scoring results: text pre-processing, term weighting, singular value dimensionality, and the type of similarity measure. We benchmark their effectiveness by comparing machine-assigned with human-assigned scores in a real-world case. The paper shows that each of the identified factors significantly influences the quality of automated essay scoring, but that the factors are not independent of each other.