A new algorithm for learning one-variable pattern languages is proposed and analyzed with respect to its average-case behavior. We consider the total learning time, which takes into account all operations performed until the algorithm has converged to a correct hypothesis. For almost all meaningful distributions that define how the pattern variable is replaced by a string to generate random examples of the target pattern language, it is shown that the expected number of rounds until convergence is constant and the expected total learning time is linear in the pattern length. Thus, the algorithm is average-case optimal in a strong sense. Although one-variable pattern languages cannot be inferred finitely, our approach can also be regarded as probabilistic finite learning with high confidence.
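To make the example-generation model concrete, the following is a minimal illustrative sketch (not the paper's algorithm or notation): a one-variable pattern is a string over the alphabet together with a variable symbol, and a random positive example is obtained by substituting the same randomly drawn string for every occurrence of the variable. The alphabet, the pattern, and the geometric-length substitution distribution used here are assumptions chosen purely for illustration.

```python
import random

ALPHABET = "ab"     # assumed binary alphabet (illustrative)
PATTERN = "axbbxa"  # assumed target pattern; 'x' is the pattern variable

def random_substitution(p_stop=0.5):
    """Draw a non-empty substitution string: geometric length, uniform letters
    (one of many possible distributions over replacement strings)."""
    s = random.choice(ALPHABET)
    while random.random() > p_stop:
        s += random.choice(ALPHABET)
    return s

def random_example(pattern=PATTERN):
    """Generate one random example of the pattern language by replacing
    every occurrence of the variable 'x' with the same substitution string."""
    return pattern.replace("x", random_substitution())

if __name__ == "__main__":
    # A learner receives a stream of such examples and must converge
    # to a hypothesis pattern generating the same language.
    for _ in range(5):
        print(random_example())
```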