This paper extends the framework of partially observable Markov decision processes (POMDPs) to multi-agent settings by incorporating the notion of agent models into the state spac...
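Below is a minimal sketch of what a state space augmented with agent models might look like; the physical-state labels, the `AgentModel` and `InteractiveState` names, and the nested-belief layout are illustrative assumptions, not the paper's actual construction.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass(frozen=True)
class AgentModel:
    """A model the agent ascribes to another agent: here just a label
    plus that agent's assumed belief over physical states (illustrative)."""
    name: str
    belief: Tuple[Tuple[str, float], ...]

@dataclass(frozen=True)
class InteractiveState:
    """A state of the augmented space: a physical state paired with a
    model of the other agent."""
    physical_state: str
    other_agent_model: AgentModel

# Agent i's belief is then a distribution over the augmented states.
uniform_j = AgentModel("naive-j", (("left", 0.5), ("right", 0.5)))
belief_i: Dict[InteractiveState, float] = {
    InteractiveState("left", uniform_j): 0.5,
    InteractiveState("right", uniform_j): 0.5,
}
```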
In this paper we analyze the problem of opinion pooling. We introduce a divergence minimization framework for solving the standard opinion pooling problem. Our results show that v...
Ashutosh Garg, T. S. Jayram, Shivakumar Vaithyanat...
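As a point of reference for the divergence-minimization view of pooling, here is a minimal sketch of two standard pooling operators that arise as KL-divergence minimizers: the linear pool minimizes the weighted sum of KL(p_i || q) over q, while the log-linear pool minimizes the weighted sum of KL(q || p_i). The expert distributions and weights are illustrative placeholders; the paper's own framework may differ.

```python
import numpy as np

def linear_pool(opinions, weights):
    """Arithmetic pool: argmin_q sum_i w_i * KL(p_i || q)."""
    q = np.average(opinions, axis=0, weights=weights)
    return q / q.sum()

def log_linear_pool(opinions, weights):
    """Geometric pool: argmin_q sum_i w_i * KL(q || p_i)."""
    q = np.exp(np.average(np.log(opinions), axis=0, weights=weights))
    return q / q.sum()

# Two hypothetical experts' opinions over three outcomes.
opinions = np.array([[0.7, 0.2, 0.1],
                     [0.4, 0.4, 0.2]])
weights = [0.6, 0.4]
print(linear_pool(opinions, weights))
print(log_linear_pool(opinions, weights))
```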
The majority of the work in the area of Markov decision processes has focused on expected values of rewards in the objective function and expected costs in the constraints. Althou...
Learning the parameters (conditional and marginal probabilities) from a data set is a common method of building a belief network. Consider the situation where we have known graph s...
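A minimal sketch of parameter learning for a single node of a belief network with known structure from complete data, using counts with Dirichlet (pseudo-count) smoothing; the `estimate_cpt` helper, the variable names, and the arities are assumptions for illustration, not the paper's method.

```python
from collections import Counter
import itertools

def estimate_cpt(data, child, parents, arity, alpha=1.0):
    """Estimate P(child | parents) from complete data. `data` is a list of
    dicts mapping variable names to values in range(arity[var]); alpha is a
    Dirichlet pseudo-count."""
    counts = Counter()
    for row in data:
        counts[(tuple(row[p] for p in parents), row[child])] += 1
    cpt = {}
    for key in itertools.product(*(range(arity[p]) for p in parents)):
        joint = [counts[(key, v)] + alpha for v in range(arity[child])]
        total = sum(joint)
        cpt[key] = [c / total for c in joint]
    return cpt

# Tiny illustrative network: Rain -> WetGrass, both binary.
data = [{"Rain": 1, "WetGrass": 1}, {"Rain": 0, "WetGrass": 0},
        {"Rain": 1, "WetGrass": 1}, {"Rain": 0, "WetGrass": 1}]
arity = {"Rain": 2, "WetGrass": 2}
print(estimate_cpt(data, "WetGrass", ["Rain"], arity))
```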
In reasoning tasks involving logical formulas, high expressiveness is desirable, although it often leads to high computational complexity. We study a simple measure of expressiven...
We introduce a general formalism of production inference relations that possess both a standard monotonic semantics and a natural nonmonotonic semantics. The resulting nonmonotonic...
Bayesian Model Averaging (BMA) is well known for improving predictive accuracy by averaging inferences over all models in the model space. However, Markov chain Monte Carlo (MCMC)...
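A minimal sketch of the BMA predictive average p(y | D) = sum_k p(y | M_k, D) p(M_k | D) under a uniform model prior; the per-model predictions and log evidences below are illustrative placeholders rather than the output of an MCMC run over the model space.

```python
import numpy as np

def bma_predict(model_predictions, log_evidences):
    """Average per-model predictive distributions, weighting each model by
    its normalized posterior probability, computed from log marginal
    likelihoods with a uniform prior over models."""
    log_w = np.asarray(log_evidences, dtype=float)
    w = np.exp(log_w - log_w.max())   # subtract max for numerical stability
    w /= w.sum()
    return w @ np.asarray(model_predictions)

# Three hypothetical models' predictions over two classes, with
# illustrative log marginal likelihoods.
preds = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]
log_evidence = [-10.2, -11.0, -14.5]
print(bma_predict(preds, log_evidence))
```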