COLT
2008
Springer

More Efficient Internal-Regret-Minimizing Algorithms

Standard no-internal-regret (NIR) algorithms compute a fixed point of a matrix each round, and hence typically require O(n^3) run time per round of learning, where n is the dimensionality of the matrix. The main contribution of this paper is a novel NIR algorithm, a simple and straightforward variant of a standard NIR algorithm. Rather than compute a fixed point exactly every round, our algorithm relies on power iteration to estimate a fixed point, and hence runs in O(n^2) time per round. Nonetheless, it is not enough to look only at the per-round run time of an online learning algorithm; one must also consider the algorithm's convergence rate. It turns out that the convergence rate of the aforementioned algorithm is slower than desired. This observation motivates our second contribution: an analysis of a multithreaded NIR algorithm that trades off its run time per round of learning against its convergence rate.
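The core idea of the abstract can be illustrated with a minimal sketch: instead of solving for a fixed point p = Qp of a stochastic matrix exactly (an O(n^3) operation), run a few matrix-vector multiplications per round, each costing O(n^2). The function and variable names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def approx_fixed_point(Q, p, num_iters=1):
    """Approximate a fixed point p = Q p of the column-stochastic
    matrix Q by power iteration (repeated matrix-vector products).

    Q, p, and num_iters are hypothetical names for illustration;
    they do not come from the paper itself.
    """
    for _ in range(num_iters):
        p = Q @ p          # one O(n^2) matrix-vector product
        p = p / p.sum()    # renormalize so p remains a distribution
    return p
```

In an online-learning loop, one would presumably warm-start each round from the previous round's estimate, so that a constant number of iterations per round suffices and the per-round cost stays O(n^2) rather than O(n^3).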
Amy R. Greenwald, Zheng Li, Warren Schudy
Added 18 Oct 2010
Updated 18 Oct 2010
Type Conference