Publications
2023
Arrasy Rahman, Ignacio Carlucho, Niklas Höpner, Stefano V. Albrecht
A General Learning Framework for Open Ad Hoc Teamwork Using Graph-based Policy Learning
Journal of Machine Learning Research, 2023
Abstract | BibTex | arXiv | Publisher | Code
JMLR, ad-hoc-teamwork, deep-rl, agent-modelling, multi-agent-rl
Abstract:
Open ad hoc teamwork is the problem of training a single agent to efficiently collaborate with an unknown group of teammates whose composition may change over time. A variable team composition creates challenges for the agent, such as the need to adapt to new team dynamics and to deal with changing state vector sizes. These challenges are aggravated in real-world applications where the controlled agent has no access to the full state of the environment. In this work, we develop a class of solutions for open ad hoc teamwork under full and partial observability. We start by developing a solution for the fully observable case that leverages graph neural network architectures to obtain an optimal policy based on reinforcement learning. We then extend this solution to partially observable scenarios by proposing different methodologies that maintain belief estimates over the latent environment states and team composition. These belief estimates are combined with our solution for the fully observable case to compute an agent's optimal policy under partial observability in open ad hoc teamwork. Empirical results demonstrate that our approach can learn efficient policies in open ad hoc teamwork in fully and partially observable cases. Further analysis demonstrates that our methods' success is a result of effectively learning the effects of teammates' actions while also inferring the inherent state of the environment under partial observability.
@article{JRahman2022POGPL,
author = {Arrasy Rahman and Ignacio Carlucho and Niklas H\"opner and Stefano V. Albrecht},
title = {A General Learning Framework for Open Ad Hoc Teamwork Using Graph-based Policy Learning},
journal = {Journal of Machine Learning Research},
year = {2023},
volume = {24},
number = {298},
pages = {1--74},
url = {http://jmlr.org/papers/v24/22-099.html}
}
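The combination described in the abstract, an agent model and a joint-action value model weighted by a belief estimate, can be summarised schematically. The LaTeX below is an illustrative paraphrase rather than the paper's exact notation: \hat{\pi}^j stands for the learned model of teammate j, \bar{Q} for the joint-action value model, and b for the belief estimate over latent states.

% Fully observable case: marginalise the joint-action value model over
% the teammate actions predicted by the learned agent models.
Q(s, a^i) = \sum_{a^{-i}} \bar{Q}\big(s, a^i, a^{-i}\big) \prod_{j \neq i} \hat{\pi}^j\big(a^j \mid s\big)

% Partially observable case: additionally average over a belief estimate
% b(s | h) maintained over the latent state and team composition.
Q(h, a^i) = \sum_{s} b(s \mid h)\, Q(s, a^i)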
Arrasy Rahman, Elliot Fosong, Ignacio Carlucho, Stefano V. Albrecht
Generating Teammates for Training Robust Ad Hoc Teamwork Agents via Best-Response Diversity
Transactions on Machine Learning Research, 2023
Abstract | BibTex | arXiv | Code
TMLR, ad-hoc-teamwork, multi-agent-rl, deep-rl
Abstract:
Ad hoc teamwork (AHT) is the challenge of designing a robust learner agent that effectively collaborates with unknown teammates without prior coordination mechanisms. Early approaches address the AHT challenge by training the learner with a diverse set of handcrafted teammate policies, usually designed based on an expert's domain knowledge about the policies the learner may encounter. However, implementing teammate policies for training based on domain knowledge is not always feasible. In such cases, recent approaches attempted to improve the robustness of the learner by training it with teammate policies generated by optimising information-theoretic diversity metrics. The problem with optimising existing information-theoretic diversity metrics for teammate policy generation is the emergence of superficially different teammates. When used for AHT training, superficially different teammate behaviours may not improve a learner's robustness during collaboration with unknown teammates. In this paper, we present an automated teammate policy generation method optimising the Best-Response Diversity (BRDiv) metric, which measures diversity based on the compatibility of teammate policies in terms of returns. We evaluate our approach in environments with multiple valid coordination strategies, comparing against methods optimising information-theoretic diversity metrics and an ablation not optimising any diversity metric. Our experiments indicate that optimising BRDiv yields a diverse set of training teammate policies that improve the learner's performance relative to previous teammate generation approaches when collaborating with near-optimal previously unseen teammate policies.
@article{rahman2023BRDiv,
title={Generating Teammates for Training Robust Ad Hoc Teamwork Agents via Best-Response Diversity},
author={Arrasy Rahman and Elliot Fosong and Ignacio Carlucho and Stefano V. Albrecht},
journal={Transactions on Machine Learning Research (TMLR)},
year={2023}
}
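As a reading aid, here is a minimal Python sketch of the kind of objective the abstract describes: diversity measured through the compatibility of teammate policies in terms of returns. The cross-play matrix, the equal weighting of the two terms, and the function name are illustrative assumptions; the paper's exact BRDiv formulation may differ.

import numpy as np

def brdiv_style_objective(returns: np.ndarray) -> float:
    """Illustrative best-response-diversity score for a teammate population.

    returns[i, j] is assumed to hold the return achieved when the best
    response to teammate policy i is paired with teammate policy j. A
    diverse population keeps self-play returns (diagonal) high while
    cross-play returns (off-diagonal) stay low, i.e. each teammate
    demands a genuinely different best response.
    """
    n = returns.shape[0]
    self_play = np.trace(returns) / n
    cross_play = (returns.sum() - np.trace(returns)) / (n * (n - 1))
    return self_play - cross_play

# Three teammate policies whose best responses do not transfer:
xp = np.array([[1.0, 0.1, 0.0],
               [0.2, 1.0, 0.1],
               [0.0, 0.2, 1.0]])
print(brdiv_style_objective(xp))  # high score indicates a diverse set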
Elliot Fosong, Arrasy Rahman, Ignacio Carlucho, Stefano V. Albrecht
Learning Complex Teamwork Tasks Using a Sub-task Curriculum
AAMAS Workshop on Multiagent Sequential Decision Making Under Uncertainty, 2023
Abstract | BibTex | arXiv | Code
AAMAS, multi-agent-rl, ad-hoc-teamwork, transfer-learning
Abstract:
Training a team to complete a complex task via multi-agent reinforcement learning can be difficult due to challenges such as policy search in a large policy space, and non-stationarity caused by mutually adapting agents. To facilitate efficient learning of complex multi-agent tasks, we propose an approach which uses an expert-provided curriculum of simpler multi-agent sub-tasks. In each sub-task of the curriculum, a subset of the entire team is trained to acquire sub-task-specific policies. The sub-teams are then merged and transferred to the target task, where their policies are collectively fine-tuned to solve the more complex target task. We present MEDoE, a flexible method which identifies situations in the target task where each agent can use its sub-task-specific skills, and uses this information to modulate hyperparameters for learning and exploration during the fine-tuning process. We compare MEDoE to multi-agent reinforcement learning baselines that train from scratch in the full task, and to naïve applications of standard multi-agent reinforcement learning techniques for fine-tuning. We show that MEDoE outperforms baselines which train from scratch or use naïve fine-tuning approaches, requiring significantly fewer total training timesteps to solve a range of complex teamwork tasks.
@inproceedings{fosong2023learning,
title={Learning complex teamwork tasks using a sub-task curriculum},
author={Elliot Fosong and Arrasy Rahman and Ignacio Carlucho and Stefano V. Albrecht},
booktitle={AAMAS Workshop on Multiagent Sequential Decision Making under Uncertainty},
year={2023},
}
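The modulation idea in the abstract, fine-tune gently where an agent is already competent and learn more aggressively where it is not, can be sketched in a few lines of Python. The classifier probability, hyperparameter names, and scaling factors here are all illustrative assumptions, not values from the paper.

def modulate_hyperparams(in_expertise_prob: float,
                         base_lr: float = 3e-4,
                         base_entropy_coef: float = 0.01):
    """Illustrative MEDoE-style modulation: the more likely the agent is
    acting within its sub-task domain of expertise, the smaller its
    learning rate and exploration bonus during fine-tuning."""
    caution = in_expertise_prob  # in [0, 1], e.g. from a learned classifier
    lr = base_lr * (1.0 - 0.9 * caution)
    entropy_coef = base_entropy_coef * (1.0 - 0.9 * caution)
    return lr, entropy_coef

print(modulate_hyperparams(0.95))  # within expertise: barely perturbed
print(modulate_hyperparams(0.05))  # outside expertise: near-full learning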
2022
Ibrahim H. Ahmed, Cillian Brewitt, Ignacio Carlucho, Filippos Christianos, Mhairi Dunion, Elliot Fosong, Samuel Garcin, Shangmin Guo, Balint Gyevnar, Trevor McInroe, Georgios Papoudakis, Arrasy Rahman, Lukas Schäfer, Massimiliano Tamborski, Giuseppe Vecchio, Cheng Wang, Stefano V. Albrecht
Deep Reinforcement Learning for Multi-Agent Interaction
AI Communications, 2022
Abstract | BibTex | arXiv | Publisher
AIC, survey, deep-rl, multi-agent-rl, ad-hoc-teamwork, agent-modelling, goal-recognition, security, explainable-ai, autonomous-driving
Abstract:
The development of autonomous agents which can interact with other agents to accomplish a given task is a core area of research in artificial intelligence and machine learning. Towards this goal, the Autonomous Agents Research Group develops novel machine learning algorithms for autonomous systems control, with a specific focus on deep reinforcement learning and multi-agent reinforcement learning. Research problems include scalable learning of coordinated agent policies and inter-agent communication; reasoning about the behaviours, goals, and composition of other agents from limited observations; and sample-efficient learning based on intrinsic motivation, curriculum learning, causal inference, and representation learning. This article provides a broad overview of the ongoing research portfolio of the group and discusses open problems for future directions.
@article{albrecht2022aic,
author = {Ahmed, Ibrahim H. and Brewitt, Cillian and Carlucho, Ignacio and Christianos, Filippos and Dunion, Mhairi and Fosong, Elliot and Garcin, Samuel and Guo, Shangmin and Gyevnar, Balint and McInroe, Trevor and Papoudakis, Georgios and Rahman, Arrasy and Schäfer, Lukas and Tamborski, Massimiliano and Vecchio, Giuseppe and Wang, Cheng and Albrecht, Stefano V.},
title = {Deep Reinforcement Learning for Multi-Agent Interaction},
journal = {AI Communications, Special Issue on Multi-Agent Systems Research in the UK},
year = {2022}
}
Arrasy Rahman, Elliot Fosong, Ignacio Carlucho, Stefano V. Albrecht
Towards Robust Ad Hoc Teamwork Agents By Creating Diverse Training Teammates
IJCAI Workshop on Ad Hoc Teamwork, 2022
Abstract | BibTex | arXiv | Code
IJCAI, ad-hoc-teamwork, multi-agent-rl
Abstract:
Ad hoc teamwork (AHT) is the problem of creating an agent that must collaborate with previously unseen teammates without prior coordination. Many existing AHT methods can be categorised as type-based methods, which require a set of predefined teammates for training. Designing teammate types for training is a challenging issue that determines the generalisation performance of agents when dealing with teammate types unseen during training. In this work, we propose a method to discover diverse teammate types based on maximising best response diversity metrics. We show that our proposed approach yields teammate types that require a wider range of best responses from the learner during collaboration, which potentially improves the robustness of a learner's performance in AHT compared to alternative methods.
@inproceedings{rahman2022towards,
title={Towards Robust Ad Hoc Teamwork Agents By Creating Diverse Training Teammates},
author={Arrasy Rahman and Elliot Fosong and Ignacio Carlucho and Stefano V. Albrecht},
booktitle={IJCAI Workshop on Ad Hoc Teamwork},
year={2022}
}
Elliot Fosong, Arrasy Rahman, Ignacio Carlucho, Stefano V. Albrecht
Few-Shot Teamwork
IJCAI Workshop on Ad Hoc Teamwork, 2022
Abstract | BibTex | arXiv
IJCAI, ad-hoc-teamwork, multi-agent-rl
Abstract:
We propose the novel few-shot teamwork (FST) problem, where skilled agents trained in a team to complete one task are combined with skilled agents from different tasks, and together must learn to adapt to an unseen but related task. We discuss how the FST problem can be seen as addressing two separate problems: one of reducing the experience required to train a team of agents to complete a complex task; and one of collaborating with unfamiliar teammates to complete a new task. Progress towards solving FST could lead to progress in both multi-agent reinforcement learning and ad hoc teamwork.
@inproceedings{fosong2022fewshot,
title={Few-Shot Teamwork},
author={Elliot Fosong and Arrasy Rahman and Ignacio Carlucho and Stefano V. Albrecht},
booktitle={IJCAI Workshop on Ad Hoc Teamwork},
year={2022}
}
Ignacio Carlucho, Arrasy Rahman, William Ard, Elliot Fosong, Corina Barbalata, Stefano V. Albrecht
Cooperative Marine Operations Via Ad Hoc Teams
IJCAI Workshop on Ad Hoc Teamwork, 2022
Abstract | BibTex | arXiv
IJCAI, ad-hoc-teamwork, multi-agent-rl
Abstract:
While research in ad hoc teamwork has great potential for solving real-world robotic applications, most developments so far have focused on environments with simple dynamics. In this article, we discuss how the problem of ad hoc teamwork can be of special interest for marine robotics and how it can aid marine operations. In particular, we present a set of challenges that need to be addressed for achieving ad hoc teamwork in underwater environments, and we discuss possible solutions based on current state-of-the-art developments in the ad hoc teamwork literature.
@inproceedings{Carlucho2022UnderwaterAHT,
title={Cooperative Marine Operations Via Ad Hoc Teams},
author={Ignacio Carlucho and Arrasy Rahman and William Ard and Elliot Fosong and Corina Barbalata and Stefano V. Albrecht},
booktitle={IJCAI Workshop on Ad Hoc Teamwork},
year={2022}
}
Reuth Mirsky, Ignacio Carlucho, Arrasy Rahman, Elliot Fosong, William Macke, Mohan Sridharan, Peter Stone, Stefano V. Albrecht
A Survey of Ad Hoc Teamwork Research
European Conference on Multi-Agent Systems, 2022
Abstract | BibTex | arXiv
EUMAS, survey, ad-hoc-teamwork
Abstract:
Ad hoc teamwork is the research problem of designing agents that can collaborate with new teammates without prior coordination. This survey makes a two-fold contribution: First, it provides a structured description of the different facets of the ad hoc teamwork problem. Second, it discusses the progress that has been made in the field so far, and identifies the immediate and long-term open problems that need to be addressed in ad hoc teamwork.
@inproceedings{mirsky2022survey,
title={A Survey of Ad Hoc Teamwork Research},
author={Reuth Mirsky and Ignacio Carlucho and Arrasy Rahman and Elliot Fosong and William Macke and Mohan Sridharan and Peter Stone and Stefano V. Albrecht},
booktitle={European Conference on Multi-Agent Systems (EUMAS)},
year={2022}
}
Arrasy Rahman, Ignacio Carlucho, Niklas Höpner, Stefano V. Albrecht
A General Learning Framework for Open Ad Hoc Teamwork Using Graph-based Policy Learning
arXiv:2210.05448, 2022
Abstract | BibTex | arXiv
ad-hoc-teamwork, deep-rl, agent-modelling
Abstract:
Open ad hoc teamwork is the problem of training a single agent to efficiently collaborate with an unknown group of teammates whose composition may change over time. A variable team composition creates challenges for the agent, such as the need to adapt to new team dynamics and to deal with changing state vector sizes. These challenges are aggravated in real-world applications where the controlled agent has no access to the full state of the environment. In this work, we develop a class of solutions for open ad hoc teamwork under full and partial observability. We start by developing a solution for the fully observable case that leverages graph neural network architectures to obtain an optimal policy based on reinforcement learning. We then extend this solution to partially observable scenarios by proposing different methodologies that maintain belief estimates over the latent environment states and team composition. These belief estimates are combined with our solution for the fully observable case to compute an agent's optimal policy under partial observability in open ad hoc teamwork. Empirical results demonstrate that our approach can learn efficient policies in open ad hoc teamwork in fully and partially observable cases. Further analysis demonstrates that our methods' success is a result of effectively learning the effects of teammates' actions while also inferring the inherent state of the environment under partial observability.
@misc{Rahman2022POGPL,
title={A General Learning Framework for Open Ad Hoc Teamwork Using Graph-based Policy Learning},
author={Arrasy Rahman and Ignacio Carlucho and Niklas H\"opner and Stefano V. Albrecht},
year={2022},
eprint={2210.05448},
archivePrefix={arXiv}
}
2021
Arrasy Rahman, Niklas Höpner, Filippos Christianos, Stefano V. Albrecht
Towards Open Ad Hoc Teamwork Using Graph-based Policy Learning
International Conference on Machine Learning, 2021
Abstract | BibTex | arXiv | Video | Code
ICML, deep-rl, agent-modelling, ad-hoc-teamwork
Abstract:
Ad hoc teamwork is the challenging problem of designing an autonomous agent which can adapt quickly to collaborate with teammates without prior coordination mechanisms, including joint training. Prior work in this area has focused on closed teams in which the number of agents is fixed. In this work, we consider open teams by allowing agents with different fixed policies to enter and leave the environment without prior notification. Our solution builds on graph neural networks to learn agent models and joint-action value models under varying team compositions. We contribute a novel action-value computation that integrates the agent model and joint-action value model to produce action-value estimates. We empirically demonstrate that our approach successfully models the effects other agents have on the learner, leading to policies that robustly adapt to dynamic team compositions and significantly outperform several alternative methods.
@inproceedings{rahman2021open,
title={Towards Open Ad Hoc Teamwork Using Graph-based Policy Learning},
author={Arrasy Rahman and Niklas H\"opner and Filippos Christianos and Stefano V. Albrecht},
booktitle={International Conference on Machine Learning (ICML)},
year={2021}
}
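To make the abstract's "action-value computation that integrates the agent model and joint-action value model" concrete, here is a small numpy sketch for a variable number of teammates. The linear "networks", shapes, and summation over teammates are illustrative assumptions standing in for the paper's graph neural network architecture.

import numpy as np

rng = np.random.default_rng(0)
n_actions = 4
dim = 8

def encode(obs_per_agent):
    # Stand-in for the GNN encoder: one embedding per teammate currently
    # present, so the team may grow or shrink between episodes.
    return [np.tanh(o) for o in obs_per_agent]

def action_value(obs_per_agent, W_pi, W_q):
    q = np.zeros(n_actions)
    for e in encode(obs_per_agent):
        # Agent model: predicted action distribution for this teammate.
        logits = W_pi @ e
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        # Pairwise joint-action values for (learner action, teammate action),
        # marginalised over the teammate's predicted actions.
        q_pair = (W_q @ e).reshape(n_actions, n_actions)
        q += q_pair @ probs
    return q

W_pi = rng.normal(size=(n_actions, dim))
W_q = rng.normal(size=(n_actions * n_actions, dim))
# The same parameters handle teams of different sizes:
print(action_value([rng.normal(size=dim) for _ in range(2)], W_pi, W_q))
print(action_value([rng.normal(size=dim) for _ in range(5)], W_pi, W_q))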
2020
Arrasy Rahman, Niklas Höpner, Filippos Christianos, Stefano V. Albrecht
Open Ad Hoc Teamwork using Graph-based Policy Learning
arXiv:2006.10412, 2020
Abstract | BibTex | arXiv
deep-rl, agent-modelling, ad-hoc-teamwork
Abstract:
Ad hoc teamwork is the challenging problem of designing an autonomous agent which can adapt quickly to collaborate with previously unknown teammates. Prior work in this area has focused on closed teams in which the number of agents is fixed. In this work, we consider open teams by allowing agents of varying types to enter and leave the team without prior notification. Our proposed solution builds on graph neural networks to learn scalable agent models and value decompositions under varying team sizes, which can be jointly trained with a reinforcement learning agent using discounted returns objectives. We demonstrate empirically that our approach results in agent policies which can robustly adapt to dynamic team composition, and is able to effectively generalize to larger teams than were seen during training.
@misc{rahman2020open,
title={Open Ad Hoc Teamwork using Graph-based Policy Learning},
author={Arrasy Rahman and Niklas H\"opner and Filippos Christianos and Stefano V. Albrecht},
year={2020},
eprint={2006.10412},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
2017
Stefano V. Albrecht, Somchaya Liemhetcharat, Peter Stone
Special Issue on Multiagent Interaction without Prior Coordination: Guest Editorial
Journal of Autonomous Agents and Multi-Agent Systems, 2017
Abstract | BibTex | Publisher | MIPC Workshop Series
JAAMAS, survey, ad-hoc-teamwork
Abstract:
This special issue of the Journal of Autonomous Agents and Multi-Agent Systems sought research articles on the emerging topic of multiagent interaction without prior coordination. Topics of interest included empirical and theoretical investigations of issues arising from assumptions of prior coordination, as well as solutions in the form of novel models and algorithms for effective multiagent interaction without prior coordination.
@article{ albrecht2017special,
title = {Special Issue on Multiagent Interaction without Prior Coordination: Guest Editorial},
author = {Stefano V. Albrecht and Somchaya Liemhetcharat and Peter Stone},
journal = {Autonomous Agents and Multi-Agent Systems},
volume = {31},
number = {4},
pages = {765--766},
year = {2017},
publisher = {Springer},
url = {http://dx.doi.org/10.1007/s10458-016-9358-0}
}
Stefano V. Albrecht, Peter Stone
Reasoning about Hypothetical Agent Behaviours and their Parameters
International Conference on Autonomous Agents and Multiagent Systems, 2017
Abstract | BibTex | arXiv
AAMAS, ad-hoc-teamwork, agent-modelling
Abstract:
Agents can achieve effective interaction with previously unknown other agents by maintaining beliefs over a set of hypothetical behaviours, or types, that these agents may have. A current limitation in this method is that it does not recognise parameters within type specifications, because types are viewed as blackbox mappings from interaction histories to probability distributions over actions. In this work, we propose a general method which allows an agent to reason about both the relative likelihood of types and the values of any bounded continuous parameters within types. The method maintains individual parameter estimates for each type and selectively updates the estimates for some types after each observation. We propose different methods for the selection of types and the estimation of parameter values. The proposed methods are evaluated in detailed experiments, showing that updating the parameter estimates of a single type after each observation can be sufficient to achieve good performance.
@inproceedings{ albrecht2017reasoning,
title = {Reasoning about Hypothetical Agent Behaviours and their Parameters},
author = {Stefano V. Albrecht and Peter Stone},
booktitle = {Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems},
pages = {547--555},
year = {2017}
}
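As an aid to the abstract, here is a toy Python sketch of the selective per-type parameter updating it describes: each type keeps its own parameter estimate, and after each observation only selected types re-fit theirs. The selection rule (update only the currently most likely type) and the grid-search estimator are illustrative assumptions, not the paper's specific methods.

import numpy as np

# Hypothetical parameterised type: probability of choosing action 1 is p.
def likelihood(action: int, p: float) -> float:
    return p if action == 1 else 1.0 - p

types = {  # per-type belief and current parameter estimate
    "bold":  {"belief": 0.5, "p_est": 0.8},
    "timid": {"belief": 0.5, "p_est": 0.2},
}
history: list[int] = []

def observe(action: int):
    history.append(action)
    # Bayesian belief update for every type at its current estimate.
    for t in types.values():
        t["belief"] *= likelihood(action, t["p_est"])
    z = sum(t["belief"] for t in types.values())
    for t in types.values():
        t["belief"] /= z
    # Selective update: re-estimate parameters only for the most likely type.
    best = max(types.values(), key=lambda t: t["belief"])
    grid = np.linspace(0.05, 0.95, 19)
    ll = [sum(np.log(likelihood(a, p)) for a in history) for p in grid]
    best["p_est"] = float(grid[int(np.argmax(ll))])

for a in [1, 1, 0, 1, 1]:
    observe(a)
print(types)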
2016
Stefano V. Albrecht, Jacob W. Crandall, Subramanian Ramamoorthy
Belief and Truth in Hypothesised Behaviours
Artificial Intelligence, 2016
Abstract | BibTex | arXiv | Publisher
AIJ, agent-modelling, ad-hoc-teamwork
Abstract:
There is a long history in game theory on the topic of Bayesian or “rational” learning, in which each player maintains beliefs over a set of alternative behaviours, or types, for the other players. This idea has gained increasing interest in the artificial intelligence (AI) community, where it is used as a method to control a single agent in a system composed of multiple agents with unknown behaviours. The idea is to hypothesise a set of types, each specifying a possible behaviour for the other agents, and to plan our own actions with respect to those types which we believe are most likely, given the observed actions of the agents. The game theory literature studies this idea primarily in the context of equilibrium attainment. In contrast, many AI applications have a focus on task completion and payoff maximisation. With this perspective in mind, we identify and address a spectrum of questions pertaining to belief and truth in hypothesised types. We formulate three basic ways to incorporate evidence into posterior beliefs and show when the resulting beliefs are correct, and when they may fail to be correct. Moreover, we demonstrate that prior beliefs can have a significant impact on our ability to maximise payoffs in the long-term, and that they can be computed automatically with consistent performance effects. Furthermore, we analyse the conditions under which we are able to complete our task optimally, despite inaccuracies in the hypothesised types. Finally, we show how the correctness of hypothesised types can be ascertained during the interaction via an automated statistical analysis.
@article{ albrecht2016belief,
title = {Belief and Truth in Hypothesised Behaviours},
author = {Stefano V. Albrecht and Jacob W. Crandall and Subramanian Ramamoorthy},
journal = {Artificial Intelligence},
volume = {235},
pages = {63--94},
year = {2016},
publisher = {Elsevier},
note = {DOI: 10.1016/j.artint.2016.02.004}
}
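The type posterior at the heart of the abstract has a standard Bayesian form, shown schematically below in our own notation; the paper's "product" posterior corresponds to this likelihood, and its other variants replace the product with different aggregations of the evidence.

% Belief over a hypothesised type \theta_j^* for agent j, after observing
% the history of actions H^t = (a^0, ..., a^{t-1}):
\Pr\!\left(\theta_j^* \mid H^t\right) \propto \Pr\!\left(\theta_j^*\right) \prod_{\tau=0}^{t-1} \pi_j\!\left(a_j^\tau \mid H^\tau, \theta_j^*\right)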
2015
Stefano V. Albrecht, Jacob W. Crandall, Subramanian Ramamoorthy
An Empirical Study on the Practical Impact of Prior Beliefs over Policy Types
AAAI Conference on Artificial Intelligence, 2015
Abstract | BibTex | arXiv | Appendix
AAAI, agent-modelling, ad-hoc-teamwork
Abstract:
Many multiagent applications require an agent to learn quickly how to interact with previously unknown other agents. To address this problem, researchers have studied learning algorithms which compute posterior beliefs over a hypothesised set of policies, based on the observed actions of the other agents. The posterior belief is complemented by the prior belief, which specifies the subjective likelihood of policies before any actions are observed. In this paper, we present the first comprehensive empirical study on the practical impact of prior beliefs over policies in repeated interactions. We show that prior beliefs can have a significant impact on the long-term performance of such methods, and that the magnitude of the impact depends on the depth of the planning horizon. Moreover, our results demonstrate that automatic methods can be used to compute prior beliefs with consistent performance effects. This indicates that prior beliefs could be eliminated as a manual parameter and instead be computed automatically.
@inproceedings{ albrecht2015empirical,
title = {An Empirical Study on the Practical Impact of Prior Beliefs over Policy Types},
author = {Stefano V. Albrecht and Jacob W. Crandall and Subramanian Ramamoorthy},
booktitle = {Proceedings of the 29th AAAI Conference on Artificial Intelligence},
pages = {1988--1994},
year = {2015}
}
Stefano V. Albrecht, Jacob W. Crandall, Subramanian Ramamoorthy
E-HBA: Using Action Policies for Expert Advice and Agent Typification
AAAI Workshop on Multiagent Interaction without Prior Coordination, 2015
Abstract | BibTex | arXiv | Appendix
AAAI, agent-modelling, ad-hoc-teamwork
Abstract:
Past research has studied two approaches to utilise predefined policy sets in repeated interactions: as experts, to dictate our own actions, and as types, to characterise the behaviour of other agents. In this work, we bring these complementary views together in the form of a novel meta-algorithm, called Expert-HBA (E-HBA), which can be applied to any expert algorithm that considers the average (or total) payoff an expert has yielded in the past. E-HBA gradually mixes the past payoff with a predicted future payoff, which is computed using the type-based characterisation. We present results from a comprehensive set of repeated matrix games, comparing the performance of several well-known expert algorithms with and without the aid of E-HBA. Our results show that E-HBA has the potential to significantly improve the performance of expert algorithms.
@inproceedings{ albrecht2015ehba,
title = {{E-HBA}: Using Action Policies for Expert Advice and Agent Typification},
author = {Stefano V. Albrecht and Jacob W. Crandall and Subramanian Ramamoorthy},
booktitle = {AAAI Workshop on Multiagent Interaction without Prior Coordination},
address = {Austin, Texas, USA},
month = {January},
year = {2015}
}
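The gradual mixing of past and predicted payoffs described in the abstract can be stated compactly; the symbols below are our own shorthand for the idea, not necessarily the paper's notation.

% E-HBA-style expert valuation: blend the average past payoff of expert e
% with a future payoff predicted from the type-based characterisation,
% with the mixing weight \lambda_t shifting gradually between the two.
v_t(e) = (1 - \lambda_t)\, \bar{\rho}_t(e) + \lambda_t\, \hat{\rho}_t(e)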
2013
Stefano V. Albrecht, Subramanian Ramamoorthy
A Game-Theoretic Model and Best-Response Learning Method for Ad Hoc Coordination in Multiagent Systems
International Conference on Autonomous Agents and Multiagent Systems, 2013
Abstract | BibTex | arXiv (full technical report) | Extended Abstract
AAMAS, ad-hoc-teamwork, agent-modelling
Abstract:
The ad hoc coordination problem is to design an autonomous agent which is able to achieve optimal flexibility and efficiency in a multiagent system with no mechanisms for prior coordination. We conceptualise this problem formally using a game-theoretic model, called the stochastic Bayesian game, in which the behaviour of a player is determined by its private information, or type. Based on this model, we derive a solution, called Harsanyi-Bellman Ad Hoc Coordination (HBA), which utilises the concept of Bayesian Nash equilibrium in a planning procedure to find optimal actions in the sense of Bellman optimal control. We evaluate HBA in a multiagent logistics domain called level-based foraging, showing that it achieves higher flexibility and efficiency than several alternative algorithms. We also report on a human-machine experiment at a public science exhibition in which the human participants played repeated Prisoner's Dilemma and Rock-Paper-Scissors against HBA and alternative algorithms, showing that HBA achieves equal efficiency and a significantly higher welfare and winning rate.
@inproceedings{ albrecht2013game,
title = {A Game-Theoretic Model and Best-Response Learning Method for Ad Hoc Coordination in Multiagent Systems},
author = {Stefano V. Albrecht and Subramanian Ramamoorthy},
booktitle = {Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems},
address = {St. Paul, Minnesota, USA},
month = {May},
year = {2013}
}
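A schematic statement of the HBA action rule connects the "Harsanyi" part (an expectation over hypothesised types) with the "Bellman" part (planning for optimal control). The notation below is ours, simplified from the paper's stochastic Bayesian game setting.

% Choose the action that maximises expected payoff under the posterior
% belief over the other agents' types, with action values Q computed by
% a Bellman-style planning procedure:
a_i^* \in \arg\max_{a_i} \sum_{\theta_{-i}} \Pr\!\left(\theta_{-i} \mid H^t\right) Q\!\left(H^t, a_i \mid \theta_{-i}\right)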
2012
Stefano V. Albrecht, Subramanian Ramamoorthy
Comparative Evaluation of Multiagent Learning Algorithms in a Diverse Set of Ad Hoc Team Problems
International Conference on Autonomous Agents and Multiagent Systems, 2012
Abstract | BibTex | arXiv
AAMAS, multi-agent-rl, ad-hoc-teamwork
Abstract:
This paper is concerned with evaluating different multiagent learning (MAL) algorithms in problems where individual agents may be heterogeneous, in the sense of utilizing different learning strategies, without the opportunity for prior agreements or information regarding coordination. Such a situation arises in ad hoc team problems, a model of many practical multiagent systems applications. Prior work in multiagent learning has often been focussed on homogeneous groups of agents, meaning that all agents were identical and a priori aware of this fact. Also, those algorithms that are specifically designed for ad hoc team problems are typically evaluated in teams of agents with fixed behaviours, as opposed to agents which are adapting their behaviours. In this work, we empirically evaluate five MAL algorithms, representing major approaches to multiagent learning but originally developed with the homogeneous setting in mind, to understand their behaviour in a set of ad hoc team problems. All teams consist of agents which are continuously adapting their behaviours. The algorithms are evaluated with respect to a comprehensive characterisation of repeated matrix games, using performance criteria that include considerations such as attainment of equilibrium, social welfare and fairness. Our main conclusion is that there is no clear winner. However, the comparative evaluation also highlights the relative strengths of different algorithms with respect to the type of performance criteria, e.g., social welfare vs. attainment of equilibrium.
@inproceedings{ albrecht2012comparative,
title = {Comparative Evaluation of {MAL} Algorithms in a Diverse Set of Ad Hoc Team Problems},
author = {Stefano V. Albrecht and Subramanian Ramamoorthy},
booktitle = {Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems},
pages = {349--356},
year = {2012}
}