We empirically compare five publicly available phrase sets in two large-scale (N = 225 and N = 150) crowdsourced text entry experiments. We also investigate the impact of asking participants to memorize phrases before writing them versus allowing participants to see the phrase during text entry. We find that asking participants to memorize phrases increases entry rates at the cost of slightly increased error rates. This holds for both a familiar and an unfamiliar text entry method. We find statistically significant differences between some of the phrase sets in terms of both entry and error rates. Based on our data, we arrive at a set of recommendations for choosing suitable phrase sets for text entry evaluations.

Author Keywords
Text entry, phrase sets, crowdsourcing, keyboards

ACM Classification Keywords
H.5.2. Information Interfaces and Presentation: User Interfaces—Input devices and strategies

General Terms
Experimentation, Standardization