
Ulster Institutional Repository

Decentralized Bayesian Reinforcement Learning for Online Agent Collaboration


Teacy, W. T. L., Chalkiadakis, G., Farinelli, A., Rogers, A., Jennings, N. R., McClean, S. and Parr, G. (2012) Decentralized Bayesian Reinforcement Learning for Online Agent Collaboration. In: 11th International Conference on Autonomous Agents and Multiagent Systems, Valencia, Spain. International Foundation for Autonomous Agents and Multiagent Systems. 8 pp. [Conference contribution]

PDF - Accepted Version (393 KB)

Abstract

Solving complex but structured problems in a decentralized manner via multiagent collaboration has received much attention in recent years. This is natural, as on one hand, multiagent systems usually possess a structure that determines the allowable interactions among the agents; and on the other hand, the single most pressing need in a cooperative multiagent system is to coordinate the local policies of autonomous agents with restricted capabilities to serve a system-wide goal. The presence of uncertainty makes this even more challenging, as the agents face the additional need to learn the unknown environment parameters while forming (and following) local policies in an online fashion. In this paper, we provide the first Bayesian reinforcement learning (BRL) approach for distributed coordination and learning in a cooperative multiagent system by devising two solutions to this type of problem. More specifically, we show how the Value of Perfect Information (VPI) can be used to perform efficient decentralized exploration in both model-based and model-free BRL, and in the latter case, provide a closed-form solution for VPI, correcting a decade-old result by Dearden, Friedman and Russell. To evaluate these solutions, we present experimental results comparing their relative merits, and demonstrate empirically that both solutions outperform an existing multiagent learning method, representative of the state-of-the-art.
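To illustrate the VPI idea the abstract refers to, the following is a minimal sketch of VPI-guided exploration in the style of Dearden, Friedman and Russell's Bayesian Q-learning. It is not the paper's method: the paper derives a closed-form solution, whereas this sketch uses a Monte-Carlo estimate, and the independent Gaussian posteriors over Q-values (the `means`/`stds` parameters) are an illustrative assumption.

```python
import random

def vpi(means, stds, action, n_samples=50_000, rng=None):
    # Monte-Carlo estimate of the Value of Perfect Information (VPI)
    # for one action, assuming an independent Gaussian posterior over
    # each action's Q-value. Illustrative only: the paper provides a
    # closed form, and these posteriors are a stand-in.
    rng = rng or random.Random(0)
    best = max(range(len(means)), key=lambda a: means[a])  # current best action
    q1 = means[best]
    q2 = max(m for a, m in enumerate(means) if a != best)  # second-best mean
    total = 0.0
    for _ in range(n_samples):
        q = rng.gauss(means[action], stds[action])  # sample Q from posterior
        if action == best:
            # Gain: learning the supposedly best action is in fact worse
            # than the second-best, so we would switch to the latter.
            total += max(q2 - q, 0.0)
        else:
            # Gain: learning this action is in fact better than the
            # current best, so we would switch to it.
            total += max(q - q1, 0.0)
    return total / n_samples
```

An agent would then pick the action maximizing expected Q-value plus VPI, so that actions whose posteriors could still overturn the current ranking receive an exploration bonus; with zero posterior uncertainty the bonus vanishes and the agent acts greedily.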

Item Type: Conference contribution (Paper)
Keywords: multiagent learning, Bayesian techniques, uncertainty
Faculties and Schools: Faculty of Computing & Engineering
Faculty of Computing & Engineering > School of Computing and Information Engineering
Research Institutes and Groups: Computer Science Research Institute
Computer Science Research Institute > Information and Communication Engineering
ID Code: 21032
Deposited By: Dr Luke Teacy
Deposited On: 10 Jul 2012 11:32
Last Modified: 10 Jul 2012 11:32
