In this paper we analyze a quasi-Monte Carlo method for solving systems of linear algebraic equations. It is well known that the convergence of Monte Carlo methods for numerical integration can often be improved by replacing pseudorandom numbers with more uniformly distributed numbers known as quasirandom numbers. Here the convergence of a Monte Carlo method for solving systems of linear algebraic equations is studied when quasirandom sequences are used. An error bound is established and numerical experiments with large sparse matrices are performed using Sobol', Halton and Faure sequences. The results indicate that improvements in both the magnitude of the error and the convergence rate are achieved.
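To make the setting concrete, the following is a minimal sketch, not taken from the paper, of the classical random-walk (von Neumann–Ulam) Monte Carlo estimator for a system rewritten as x = Hx + b, with walks driven either by pseudorandom numbers or by a Halton sequence. The function names `mc_solve` and `halton`, the uniform transition probabilities, the Jacobi-style splitting, and the fixed walk length are all illustrative assumptions rather than the method analyzed in the paper.

```python
import numpy as np

def halton(n, dim):
    """First n points of a Halton sequence in `dim` dimensions (illustrative helper)."""
    def van_der_corput(k, base):
        x, f = 0.0, 1.0 / base
        while k > 0:
            x += f * (k % base)
            k //= base
            f /= base
        return x
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    return np.array([[van_der_corput(i + 1, primes[d]) for d in range(dim)]
                     for i in range(n)])

def mc_solve(H, b, points):
    """Estimate x solving x = H x + b by random walks.

    Each row of `points` (a point in [0,1)^m) drives one walk of length m;
    one coordinate is consumed per step to pick the next state.  Uniform
    transition probabilities p_ij = 1/n are assumed for simplicity.
    """
    n = H.shape[0]
    m = points.shape[1]                      # walk length = Neumann series truncation
    x = np.zeros(n)
    for i in range(n):                       # estimate each component x_i separately
        total = 0.0
        for u in points:
            state, weight, est = i, 1.0, b[i]
            for step in range(m):
                nxt = min(int(u[step] * n), n - 1)  # uniform choice of next index
                weight *= H[state, nxt] * n          # importance weight h_ij / p_ij
                state = nxt
                est += weight * b[state]
            total += est
        x[i] = total / len(points)
    return x

# Small illustration: a diagonally dominant system Ax = f rewritten via a
# Jacobi-style splitting as x = (I - D^{-1}A) x + D^{-1} f.
rng = np.random.default_rng(0)
n = 5
A = np.eye(n) + 0.1 * rng.random((n, n))
f = rng.random(n)
D = np.diag(np.diag(A))
H = np.eye(n) - np.linalg.solve(D, A)
b = np.linalg.solve(D, f)

walks, length = 4096, 20
x_prn = mc_solve(H, b, rng.random((walks, length)))   # pseudorandom walks
x_qrn = mc_solve(H, b, halton(walks, length))         # quasirandom (Halton) walks
x_true = np.linalg.solve(A, f)
print("PRN error:", np.max(np.abs(x_prn - x_true)))
print("QRN error:", np.max(np.abs(x_qrn - x_true)))
```

In this sketch the only change between the two runs is the source of the points in [0,1)^m, which is the substitution the abstract describes; the dimension of each quasirandom point equals the walk length.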