Sciweavers

1630 search results - page 7 / 326
» Coordinated Reinforcement Learning
ATAL
2007
Springer
Theoretical advantages of lenient Q-learners: an evolutionary game theoretic perspective
This paper presents the dynamics of multiple reinforcement learning agents from an Evolutionary Game Theoretic (EGT) perspective. We provide a Replicator Dynamics model for tradit...
Liviu Panait, Karl Tuyls
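As a rough illustration of the leniency idea summarized in the abstract above, the sketch below shows a lenient Q-learner for a stateless cooperative game: rewards for each action are buffered and the update targets the best of the recent samples, so low payoffs caused by a partner's exploration are forgiven early on. The names and parameters are illustrative assumptions, not the paper's code.

```python
import random
from collections import defaultdict

# Minimal sketch of a lenient Q-learner for a stateless cooperative game
# (illustrative only; action set, leniency level, and rates are assumptions).
ACTIONS = [0, 1, 2]
ALPHA = 0.1          # learning rate
LENIENCY = 5         # payoff samples collected before each update

q = defaultdict(float)
buffers = defaultdict(list)

def choose_action(epsilon=0.1):
    # epsilon-greedy selection over the current Q estimates
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

def observe(action, reward):
    # Leniency: buffer rewards and update only toward the best of the last
    # LENIENCY samples, ignoring low payoffs due to the partner's exploration.
    buffers[action].append(reward)
    if len(buffers[action]) >= LENIENCY:
        best = max(buffers[action])
        q[action] += ALPHA * (best - q[action])
        buffers[action].clear()
```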
ICRA
2010
IEEE
Adaptive multi-robot coordination: A game-theoretic perspective
Multi-robot systems researchers have been investigating adaptive coordination methods for improving spatial coordination in teams. Such methods adapt the coordination method to th...
Gal A. Kaminka, Dan Erusalimchik, Sarit Kraus
AROBOTS
1998
Emergence and Categorization of Coordinated Visual Behavior Through Embodied Interaction
This paper discusses the emergence of sensorimotor coordination for ESCHeR, a 4-DOF redundant foveated robot head, through interaction with its environment. A feedback-error-learning (FEL...
Luc Berthouze, Yasuo Kuniyoshi
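A minimal sketch of the feedback-error-learning (FEL) scheme named in the abstract, assuming a linear feedforward model and a PD feedback controller for a single joint; the gains and feature choice are illustrative assumptions, not ESCHeR's actual controller.

```python
import numpy as np

# Feedback-error-learning (FEL) sketch for one joint (illustrative assumptions).
w = np.zeros(2)               # weights of the adaptive feedforward (inverse) model
KP, KD, LR = 5.0, 0.5, 0.01   # feedback gains and learning rate

def fel_step(target, d_target, pos, vel):
    global w
    x = np.array([target, d_target])                    # feedforward features
    u_ff = float(w @ x)                                 # feedforward command
    u_fb = KP * (target - pos) + KD * (d_target - vel)  # feedback command
    # FEL principle: the feedback command is the error signal that trains the
    # feedforward model, so feedback fades as the inverse model improves.
    w = w + LR * u_fb * x
    return u_ff + u_fb                                  # total motor command
```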
ATAL
2007
Springer
Reducing the complexity of multiagent reinforcement learning
It is known that the complexity of reinforcement learning algorithms, such as Q-learning, may be exponential in the number of environment states. It was shown, however, th...
Andriy Burkov, Brahim Chaib-draa
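To make the blow-up mentioned above concrete, here is a plain tabular Q-learning backup; in a naive multiagent formulation the joint-action dimension of the table grows exponentially with the team size. The sizes and names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Tabular Q-learning over a joint-action table (illustrative sizes).
N_AGENTS = 3
N_LOCAL_ACTIONS = 4
N_STATES = 100

# The joint-action dimension is N_LOCAL_ACTIONS ** N_AGENTS, so the table
# grows exponentially with the number of agents.
q_table = np.zeros((N_STATES, N_LOCAL_ACTIONS ** N_AGENTS))

def update(state, joint_action, reward, next_state, alpha=0.1, gamma=0.95):
    # Standard one-step Q-learning backup over the joint-action table.
    td_target = reward + gamma * q_table[next_state].max()
    q_table[state, joint_action] += alpha * (td_target - q_table[state, joint_action])
```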
HPDC
2009
IEEE
Maestro: a self-organizing peer-to-peer dataflow framework using reinforcement learning
In this paper we describe Maestro, a dataflow computation framework for Ibis, our Java-based grid middleware. The novelty of Maestro is that it is a self-organizing peer-to-peer s...
C. van Reeuwijk
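As a hedged illustration of reinforcement-learning-based scheduling in a peer-to-peer dataflow setting, the sketch below picks the peer to submit a task to with an epsilon-greedy rule over observed completion times. The peer-selection rule and names are assumptions for illustration, not Maestro's actual implementation.

```python
import random
from collections import defaultdict

# Bandit-style peer selection for task submission (illustrative assumptions).
estimates = defaultdict(float)   # estimated reward (negative latency) per peer
ALPHA, EPSILON = 0.2, 0.1

def pick_peer(peers):
    # epsilon-greedy: mostly the best-estimated peer, occasionally explore
    if random.random() < EPSILON:
        return random.choice(peers)
    return max(peers, key=lambda p: estimates[p])

def record_completion(peer, seconds):
    # exponentially weighted update toward the newly observed reward
    reward = -seconds
    estimates[peer] += ALPHA * (reward - estimates[peer])
```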