Much emphasis in multiagent reinforcement learning (MARL) research is placed on ensuring that MARL algorithms (eventually) converge to desirable equilibria. As in standard reinfor...
Under the robot model, we show that a robot needs Ω(n log d) bits of memory to perform exploration of digraphs with n nodes and maximum out-degree d. We then describe an algorith...
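The abstract's own algorithm is cut off above, so the following is only a rough illustrative sketch of where an n·log d memory budget goes: a walker over a port-numbered digraph that keeps one port counter (about log d bits) for each of the n visited nodes. It ignores the physical constraint that a robot cannot traverse directed edges backwards, and it is not the algorithm the paper describes.

# Illustrative sketch only: explore a directed graph whose out-edges are
# numbered 0..deg-1 ("ports"), keeping one port counter per visited node.
# Storing a ~log d bit counter for each of the n nodes is where an
# n*log d memory budget goes.  NOT the (truncated) paper's algorithm.

def explore(out_edges, start=0):
    """out_edges[v] is the list of successors of v, indexed by port number."""
    next_port = {start: 0}          # one counter of ~log d bits per visited node
    stack = [start]                 # partially explored nodes (DFS-style)
    visited_edges = []
    while stack:
        v = stack[-1]
        p = next_port[v]
        if p == len(out_edges[v]):  # every port at v has been tried
            stack.pop()
            continue
        next_port[v] = p + 1
        w = out_edges[v][p]
        visited_edges.append((v, p, w))
        if w not in next_port:      # first visit to w
            next_port[w] = 0
            stack.append(w)
    return visited_edges

# Example: a 4-node digraph with maximum out-degree 2.
edges = {0: [1, 2], 1: [3], 2: [3], 3: [0]}
print(explore(edges))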
The translation of user requirements to system constraints and parameters during an exploration exercise is a hard problem, especially in the context of large scale emb...
The exploration of large information spaces is a difficult task, especially if the user is not familiar with the terminology used to describe information. Conceptual models of a do...
Heiner Stuckenschmidt, Anita de Waard, Ravinder Bh...
Recent work on robotic exploration and active sensing has examined a variety of information-theoretic approaches to efficient and convergent map construction. These involve mo...
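The snippet breaks off before the information-theoretic criteria are described. A common criterion of this kind selects the sensing pose with the largest expected reduction in map entropy; below is a minimal stand-in sketch, assuming a toy occupancy grid of independent cells and a hypothetical visibility footprint per candidate pose. It is not the method of the cited work.

import numpy as np

# Hedged illustration: score each candidate sensing pose by the summed cell
# entropy inside its footprint (a crude proxy for expected information gain
# over an occupancy grid) and pick the best-scoring pose.

def cell_entropy(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def best_view(occupancy, candidates):
    """occupancy: 2-D array of P(occupied); candidates: {pose: list of (row, col) cells seen}."""
    H = cell_entropy(occupancy)
    scores = {pose: sum(H[r, c] for r, c in cells) for pose, cells in candidates.items()}
    return max(scores, key=scores.get), scores

grid = np.full((4, 4), 0.5)        # fully unknown map
grid[:2, :2] = 0.05                # already-mapped free space
views = {"A": [(0, 0), (0, 1)], "B": [(2, 2), (2, 3), (3, 3)]}
print(best_view(grid, views))      # pose "B" wins: it covers unknown cells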
Motion planning for robots with many degrees of freedom requires the exploration of an exponentially large configuration space. Single-query motion planners restrict exploration ...
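The snippet truncates before saying how single-query planners restrict exploration. As a stand-in, here is a minimal RRT, a standard single-query planner, grown in a 2-D unit square with no obstacles; it only illustrates that the tree stays confined to the region reachable from the query's start configuration, and it is not the planner the abstract refers to.

import random, math

# Minimal RRT sketch (an example of a single-query planner), purely to
# illustrate restricting exploration to the part of configuration space
# reachable from the start of the current query.

def rrt(start, goal, steer=0.1, goal_tol=0.1, iters=5000, seed=0):
    random.seed(seed)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = (random.random(), random.random())    # random configuration
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        t = min(1.0, steer / d) if d > 0 else 0.0
        new = (near[0] + t * (sample[0] - near[0]),
               near[1] + t * (sample[1] - near[1]))
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:            # close enough to the goal
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k]); k = parent[k]
            return path[::-1]
    return None                                        # no path found in budget

print(rrt((0.1, 0.1), (0.9, 0.9)))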
The mapping and localization problems have received considerable attention in robotics recently. The exploration problem that drives mapping has started to generate sim...
Recovering the architecture is the first step towards reengineering a software system. Many reverse engineering tools use top-down exploration as a way of providing a visual and ...
Reinforcement learning is a framework in which an agent can learn behavior without knowledge of the task or the environment, through exploration and exploitation. Striking a balance betw...
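The sentence on balancing exploration and exploitation is cut off; as a baseline illustration of that trade-off, here is epsilon-greedy action selection on a toy multi-armed bandit. It is not the balancing method proposed in the cited paper.

import random

# Hedged sketch: epsilon-greedy action selection, the textbook baseline for
# trading off exploration (random action with probability epsilon) against
# exploitation (current best value estimate otherwise).

def epsilon_greedy(true_means, epsilon=0.1, steps=5000, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(true_means)
    values = [0.0] * len(true_means)          # running mean reward per action
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:            # explore
            a = rng.randrange(len(true_means))
        else:                                 # exploit
            a = max(range(len(true_means)), key=lambda i: values[i])
        reward = rng.gauss(true_means[a], 1.0)
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]
        total += reward
    return values, total / steps

print(epsilon_greedy([0.2, 0.5, 0.8]))        # estimates should approach the true means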
Zhengqiao Ji, Q. M. Jonathan Wu, Maher A. Sid-Ahme...
Computationally efficient motion planning must avoid exhaustive exploration of configuration space. We argue that this can be accomplished most effectively by carefully balan...