Solving Games with Functional Regret Estimation

We propose a novel online learning method for minimizing regret in large extensive-form games. The approach learns a function approximator online to estimate the regret for choosing a particular action. A no-regret algorithm uses these estimates in place of the true regrets to define a sequence of policies. We prove the approach sound by providing a bound relating the quality of the function approximation to the regret of the algorithm. A corollary is that the method is guaranteed to converge to a Nash equilibrium in self-play so long as the regrets are ultimately realizable by the function approximator. Our technique can be understood as a principled generalization of existing work on abstraction in large games; in our work, both the abstraction and the equilibrium are learned during self-play. We demonstrate empirically that the method achieves higher-quality strategies than state-of-the-art abstraction techniques given the same resources.
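To make the abstract's core idea concrete, here is a minimal sketch of regret matching driven by estimated rather than tabular regrets. It is an illustration of the idea only, not the paper's exact algorithm; the linear regressor, feature vector phi, and learning rate are assumptions introduced for the example.

    # Illustrative sketch: regret matching where per-action regrets come from a
    # learned function approximator instead of a table of true regrets.
    import numpy as np

    class EstimatedRegretPolicy:
        def __init__(self, num_actions, num_features, lr=0.01):
            self.num_actions = num_actions
            # One linear regressor per action: regret_hat(s, a) = w[a] . phi(s).
            # (A linear model is an assumption; any regressor could be plugged in.)
            self.w = np.zeros((num_actions, num_features))
            self.lr = lr

        def policy(self, phi):
            # Regret matching: play actions in proportion to positive estimated regret.
            regrets = self.w @ phi
            positive = np.maximum(regrets, 0.0)
            total = positive.sum()
            if total > 0:
                return positive / total
            # No positive estimated regret: fall back to the uniform policy.
            return np.full(self.num_actions, 1.0 / self.num_actions)

        def update(self, phi, observed_regrets):
            # Move the approximator toward newly observed regrets (one SGD step per action).
            for a in range(self.num_actions):
                error = observed_regrets[a] - self.w[a] @ phi
                self.w[a] += self.lr * error * phi

A sequence of policies produced this way inherits the paper's guarantee only to the extent that the approximator can ultimately realize the true regrets, which is the condition the abstract's bound formalizes.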
Kevin Waugh, Dustin Morrill, James Andrew Bagnell, Michael H. Bowling
Added 27 Mar 2016
Updated 27 Mar 2016
Type Conference
Year 2015
Where AAAI
Authors Kevin Waugh, Dustin Morrill, James Andrew Bagnell, Michael H. Bowling