Suppose we have $n$ algorithms, quantum or classical, each computing some bit value with bounded error probability. We describe a quantum algorithm that uses $O(\sqrt{n})$ repetitions of the base algorithms and, with high probability, finds the index of a 1-bit among these $n$ bits (if such an index exists). This shows that it is not necessary to first reduce the error probability of the base algorithms to $O(1/\mathrm{poly}(n))$ (which would require $O(\sqrt{n}\log n)$ repetitions in total). Our technique is a recursive interleaving of amplitude amplification and error reduction, and may be of more general interest: essentially, it shows that quantum amplitude amplification can also be made to work with a bounded-error verifier. As a corollary we obtain optimal quantum upper bounds of $O(\sqrt{N})$ queries for all constant-depth AND-OR trees on $N$ variables, improving upon earlier upper bounds of $O(\sqrt{N}\,\mathrm{polylog}(N))$.
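For comparison, the naive $O(\sqrt{n}\log n)$ bound follows from a standard counting argument (sketched here for intuition; it is not part of the paper's construction). By a Chernoff bound, repeating a base algorithm $k$ times and taking the majority vote drives its error probability below $e^{-\Omega(k)}$, so $k = \Theta(\log n)$ repetitions suffice for error $O(1/\mathrm{poly}(n))$; amplitude amplification (Grover search) then needs $O(\sqrt{n})$ queries to these amplified bits, giving
\[
  \underbrace{O(\sqrt{n})}_{\text{Grover queries}} \cdot \underbrace{O(\log n)}_{\text{majority-vote repetitions per query}} \;=\; O(\sqrt{n}\log n)
\]
repetitions in total. The recursive interleaving described above removes the $\log n$ factor.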