The goal of approximate policy evaluation is to “best” represent a target value function according to a specific criterion. Temporal difference (TD) methods and Bellman residual (BR) methods differ in the optimization criterion they minimize. So-called residual algorithms, which we refer to as hybrid algorithms, effectively combine these two solution methods. We propose two least-squares implementations of hybrid algorithms, which improve on the previous incremental algorithm by making more efficient use of data. Furthermore, we provide a geometric interpretation of hybrid algorithms and demonstrate on a simple problem why a combination of the TD and BR criteria may be useful. Experimental results in both small and large domains suggest that hybrid algorithms can find solutions leading to better policies when performing policy iteration.
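As a minimal sketch of the two criteria, using assumed notation not fixed by the abstract itself (feature matrix $\Phi$, weight vector $\theta$, Bellman operator $T^{\pi}$, and $\Pi$ the weighted least-squares projection onto the span of $\Phi$): BR methods minimize the Bellman residual directly, while TD methods solve the projected fixed-point problem,
\[
  \min_{\theta} \,\big\| \Phi\theta - T^{\pi}\Phi\theta \big\|^{2}
  \quad \text{(BR)},
  \qquad
  \min_{\theta} \,\big\| \Phi\theta - \Pi\, T^{\pi}\Phi\theta \big\|^{2}
  \quad \text{(TD)}.
\]
One natural way to write a hybrid criterion, offered here as a sketch rather than the paper's exact formulation, is a convex combination of the two with a mixing parameter $\xi \in [0,1]$:
\[
  J_{\xi}(\theta) \;=\; \xi \,\big\| \Phi\theta - T^{\pi}\Phi\theta \big\|^{2}
  \;+\; (1-\xi)\,\big\| \Phi\theta - \Pi\, T^{\pi}\Phi\theta \big\|^{2},
\]
which recovers the BR criterion at $\xi = 1$ and the TD criterion at $\xi = 0$.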