The present paper deals with the average-case complexity of various algorithms for learning univariate polynomials. For this purpose, an appropriate framework is introduced. Based on it, the learnability of univariate polynomials evaluated over the natural numbers and of univariate polynomials defined over finite fields is analyzed. Our results are manifold. In the first case, convergence is measured not relative to the degree of a polynomial but with respect to a measure that takes both the degree and the size of the coefficients into account. Standard interpolation is then proved not to be the best possible algorithm with respect to the average number of examples needed. In general, polynomials over finite fields are not uniquely specified by their input-output behavior. Thus, remainders modulo other polynomials are proposed as a new form of data representation, and the expected example complexity is analyzed for a rather rich class of probability distributions.
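A standard illustration of the non-uniqueness claim (this example is an assumption added here, not taken from the paper's own exposition): over the prime field $\mathbb{F}_p$, Fermat's little theorem gives $a^p = a$ for every $a \in \mathbb{F}_p$, so for any polynomials $f, g$,
\[
  f(x) \quad \text{and} \quad f(x) + (x^p - x)\,g(x)
\]
are distinct as polynomials yet induce the same function $\mathbb{F}_p \to \mathbb{F}_p$; evaluation data alone therefore cannot separate them, which motivates representing the target by its remainders modulo other polynomials.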