AAAI
2006
A Multi Agent Approach to Vision Based Robot Scavenging
This paper proposes a design for our entry into the 2006 AAAI Scavenger Hunt Competition and Robot Exhibition. We will be entering a scalable two-agent system consisting of off-th...
Kamil Wnuk, Brian Fulkerson, Jeremi Sudol
AAAI
2006
Mixtures of Predictive Linear Gaussian Models for Nonlinear, Stochastic Dynamical Systems
The Predictive Linear Gaussian model (or PLG) improves upon traditional linear dynamical system models by using a predictive representation of state, which makes consistent parame...
David Wingate, Satinder P. Singh
AAAI
2006
Sample-Efficient Evolutionary Function Approximation for Reinforcement Learning
Reinforcement learning problems are commonly tackled with temporal difference methods, which attempt to estimate the agent's optimal value function. In most real-world proble...
Shimon Whiteson, Peter Stone
AAAI
2006
When Is Constrained Clustering Beneficial, and Why?
Several researchers have illustrated that constraints can improve the results of a variety of clustering algorithms. However, there can be a large variation in this improvement, e...
Kiri Wagstaff, Sugato Basu, Ian Davidson
AAAI
2006
Trust Representation and Aggregation in a Distributed Agent System
This paper considers a distributed system of software agents who cooperate in helping their users find services provided by different agents. The agents need to ensure that th...
Yonghong Wang, Munindar P. Singh
AAAI
2006
Evaluating Preference-based Search Tools: A Tale of Two Approaches
People frequently use the World Wide Web to find their most preferred item among a large range of options. We call this task preference-based search. The most common tool for pref...
Paolo Viappiani, Boi Faltings, Pearl Pu
AAAI
2006
Compact, Convex Upper Bound Iteration for Approximate POMDP Planning
Partially observable Markov decision processes (POMDPs) are an intuitive and general way to model sequential decision making problems under uncertainty. Unfortunately, even approx...
Tao Wang, Pascal Poupart, Michael H. Bowling, Dale...