Publications

2023
Elliot Fosong, Arrasy Rahman, Ignacio Carlucho, Stefano V. Albrecht
Learning Complex Teamwork Tasks Using a Sub-task Curriculum
arXiv:2302.04944, 2023
Abstract | BibTex | arXiv
multi-agent-rl, ad-hoc-teamwork
Abstract:
Training a team to complete a complex task via multi-agent reinforcement learning can be difficult due to challenges such as policy search in a large policy space, and non-stationarity caused by mutually adapting agents. To facilitate efficient learning of complex multi-agent tasks, we propose an approach which uses an expert-provided curriculum of simpler multi-agent sub-tasks. In each sub-task of the curriculum, a subset of the entire team is trained to acquire sub-task-specific policies. The sub-teams are then merged and transferred to the target task, where their policies are collectively fine-tuned to solve the more complex target task. We present MEDoE, a flexible method which identifies situations in the target task where each agent can use its sub-task-specific skills, and uses this information to modulate hyperparameters for learning and exploration during the fine-tuning process. We compare MEDoE to multi-agent reinforcement learning baselines that train from scratch in the full task, and to naïve applications of standard multi-agent reinforcement learning techniques for fine-tuning. We show that MEDoE outperforms baselines which train from scratch or use naïve fine-tuning approaches, requiring significantly fewer total training timesteps to solve a range of complex teamwork tasks.
@misc{fosong2023learning,
title={Learning complex teamwork tasks using a sub-task curriculum},
author={Elliot Fosong and Arrasy Rahman and Ignacio Carlucho and Stefano V. Albrecht},
year={2023},
eprint={2302.04944},
archivePrefix={arXiv},
primaryClass={cs.MA}
}
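A minimal sketch of the hyperparameter-modulation idea described in the abstract: an estimated per-agent degree of expertise in the current situation scales that agent's learning rate and exploration temperature during fine-tuning. The expertise scores and the specific modulation rule below are illustrative assumptions, not the exact MEDoE method.
# Hypothetical sketch: per-agent expertise scores modulate learning and exploration
# hyperparameters during fine-tuning, in the spirit of MEDoE. The expertise model
# and the modulation rule are illustrative assumptions.
import numpy as np

def modulated_hyperparams(expertise, base_lr=3e-4, base_temp=1.0,
                          lr_scale=10.0, temp_scale=5.0):
    """Map an expertise score in [0, 1] to (learning rate, exploration temperature).

    High expertise -> small learning rate and low exploration temperature
    (preserve the sub-task skill); low expertise -> larger values (keep learning).
    """
    lr = base_lr * (1.0 + lr_scale * (1.0 - expertise))
    temp = base_temp * (1.0 + temp_scale * (1.0 - expertise))
    return lr, temp

# Example: one expertise score per agent for the current target-task situation.
rng = np.random.default_rng(0)
expertise_scores = rng.uniform(size=3)  # placeholder values
for agent_id, score in enumerate(expertise_scores):
    lr, temp = modulated_hyperparams(score)
    print(f"agent {agent_id}: expertise={score:.2f} -> lr={lr:.5f}, temperature={temp:.2f}")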
Arrasy Rahman, Elliot Fosong, Ignacio Carlucho, Stefano V. Albrecht
Generating Diverse Teammates to Train Robust Agents For Ad Hoc Teamwork
arXiv:2207.14138, 2023
Abstract | BibTex | arXiv
multi-agent-rl, ad-hoc-teamwork, deep-rl
Abstract:
Ad hoc teamwork (AHT) is the challenge of designing a learner that effectively collaborates with unknown teammates without prior coordination mechanisms. Early approaches address the AHT challenge by training the learner with a diverse set of handcrafted teammate policies, usually designed based on an expert's domain knowledge about the policies the learner may encounter. However, implementing teammate policies for training based on domain knowledge is not always feasible. In such cases, recent approaches attempted to improve the robustness of the learner by training it with teammate policies generated by optimising information-theoretic diversity metrics. However, optimising information-theoretic diversity metrics may generate teammates with superficially different behaviours, which does not necessarily result in a robust learner that can effectively collaborate with unknown teammates. In this paper, we present an automated teammate policy generation method optimising the Best-Response Diversity (BRDiv) metric, which measures diversity based on the compatibility of teammate policies in terms of returns. We evaluate our approach in environments with multiple valid coordination strategies, comparing against methods optimising information-theoretic diversity metrics and an ablation not optimising any diversity metric. Our experiments indicate that optimising BRDiv yields a diverse set of training teammate policies that improve the learner's performance relative to previous teammate generation approaches when collaborating with near-optimal previously unseen teammate policies.
@misc{Rahman2023Diverse,
author = {Rahman, Arrasy and Fosong, Elliot and Carlucho, Ignacio and Albrecht, Stefano V.},
title = {Generating Diverse Teammates to Train Robust Agents For Ad Hoc Teamwork},
eprint={2207.14138},
archivePrefix={arXiv},
year = {2023},
}
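An illustrative sketch of a best-response-diversity style score computed from a cross-play return matrix, following the abstract's description of measuring diversity via the return compatibility of teammate policies. The exact BRDiv objective in the paper may differ; the matrix below is made-up example data.
# Illustrative sketch: a diversity score read off a cross-play return matrix.
import numpy as np

def br_diversity_score(returns):
    """returns[i, j]: expected return when the best response trained for
    teammate policy i is paired with teammate policy j.

    Diverse teammates should score high with their own best response (diagonal)
    and low with best responses trained for other teammates (off-diagonal)."""
    n = returns.shape[0]
    self_play = np.trace(returns) / n
    off_diag = (returns.sum() - np.trace(returns)) / (n * (n - 1))
    return self_play - off_diag

cross_play = np.array([
    [1.0, 0.2, 0.1],
    [0.3, 0.9, 0.2],
    [0.1, 0.2, 1.0],
])
print(f"diversity score: {br_diversity_score(cross_play):.3f}")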
Lukas Schäfer, Oliver Slumbers, Stephen McAleer, Yali Du, Stefano V. Albrecht, David Mguni
Ensemble Value Functions for Efficient Exploration in Multi-Agent Reinforcement Learning
arXiv:2302.03439, 2023
Abstract | BibTex | arXiv
multi-agent-rl, deep-rl
Abstract:
Cooperative multi-agent reinforcement learning (MARL) requires agents to explore to learn to cooperate. Existing value-based MARL algorithms commonly rely on random exploration, such as ϵ-greedy, which is inefficient in discovering multi-agent cooperation. Additionally, the environment in MARL appears non-stationary to any individual agent due to the simultaneous training of other agents, leading to highly variant and thus unstable optimisation signals. In this work, we propose ensemble value functions for multi-agent exploration (EMAX), a general framework to extend any value-based MARL algorithm. EMAX trains ensembles of value functions for each agent to address the key challenges of exploration and non-stationarity: (1) The uncertainty of value estimates across the ensemble is used in a UCB policy to guide the exploration of agents to parts of the environment which require cooperation. (2) Average value estimates across the ensemble serve as target values. These targets exhibit lower variance compared to commonly applied target networks and we show that they lead to more stable gradients during the optimisation. We instantiate three value-based MARL algorithms with EMAX, independent DQN, VDN and QMIX, and evaluate them in 21 tasks across four environments. Using ensembles of five value functions, EMAX improves sample efficiency and final evaluation returns of these algorithms by 54%, 55%, and 844%, respectively, averaged across all 21 tasks.
@misc{schaefer2023emax,
title={Ensemble Value Functions for Efficient Exploration in Multi-Agent Reinforcement Learning},
author={Lukas Schäfer and Oliver Slumbers and Stephen McAleer and Yali Du and Stefano V. Albrecht and David Mguni},
year={2023},
eprint={2302.03439},
archivePrefix={arXiv},
primaryClass={cs.MA}
}
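A minimal sketch of the two ensemble mechanisms described in the abstract: UCB-style action selection from the mean and disagreement of an ensemble of value estimates, and using the ensemble average as the bootstrap target. The UCB coefficient and the randomly generated value estimates are illustrative assumptions.
# Minimal sketch of ensemble-based exploration and targets for one agent.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_actions = 5, 4
# Ensemble of Q-value estimates for one agent in the current state (placeholder values).
q_ensemble = rng.normal(size=(n_models, n_actions))

# (1) UCB exploration: prefer actions with high mean value or high ensemble disagreement.
c_ucb = 1.0
q_mean = q_ensemble.mean(axis=0)
q_std = q_ensemble.std(axis=0)
action = int(np.argmax(q_mean + c_ucb * q_std))

# (2) Bootstrap target built from the ensemble average at the next state.
reward, gamma = 1.0, 0.99
q_next_ensemble = rng.normal(size=(n_models, n_actions))
target = reward + gamma * q_next_ensemble.mean(axis=0).max()

print(f"selected action: {action}, bootstrap target: {target:.3f}")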
Callum Tilbury, Filippos Christianos, Stefano V. Albrecht
Revisiting the Gumbel-Softmax in MADDPG
arXiv:2302.11793, 2023
Abstract | BibTex | arXiv
multi-agent-rl, deep-rl
Abstract:
MADDPG is an algorithm in multi-agent reinforcement learning (MARL) that extends the popular single-agent method, DDPG, to multi-agent scenarios. Importantly, DDPG is an algorithm designed for continuous action spaces, where the gradient of the state-action value function exists. For this algorithm to work in discrete action spaces, discrete gradient estimation must be performed. For MADDPG, the Gumbel-Softmax (GS) estimator is used -- a reparameterisation which relaxes a discrete distribution into a similar continuous one. This method, however, is statistically biased, and a recent MARL benchmarking paper suggests that this bias makes MADDPG perform poorly in grid-world situations, where the action space is discrete. Fortunately, many alternatives to the GS exist, boasting a wide range of properties. This paper explores several of these alternatives and integrates them into MADDPG for discrete grid-world scenarios. The corresponding impact on various performance metrics is then measured and analysed. It is found that one of the proposed estimators performs significantly better than the original GS in several tasks, achieving up to 55% higher returns, along with faster convergence.
@misc{tilbury2023revisitingmaddpg,
title={Revisiting the Gumbel-Softmax in MADDPG},
author={Callum Tilbury and Filippos Christianos and Stefano V. Albrecht},
year={2023},
eprint={2302.11793},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
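For reference, a minimal example of the Gumbel-Softmax estimator discussed in the abstract, using the implementation that ships with PyTorch; the logits are made-up example data, not taken from MADDPG.
# Minimal example: relaxing a discrete action distribution with the Gumbel-Softmax.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(1, 5, requires_grad=True)  # actor logits over 5 discrete actions

# Relaxed (biased) sample: a point on the probability simplex, differentiable w.r.t. logits.
soft_sample = F.gumbel_softmax(logits, tau=1.0, hard=False)
# Straight-through variant: one-hot in the forward pass, soft gradients in the backward pass.
hard_sample = F.gumbel_softmax(logits, tau=1.0, hard=True)

print(soft_sample, hard_sample, sep="\n")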
2022
Stefano V. Albrecht, Michael Wooldridge
Special Issue on Multi-Agent Systems Research in the United Kingdom: Guest Editorial
AI Communications, 2022
Abstract | BibTex | Publisher | Special Issue
AIC, survey, deep-rl, multi-agent-rl, agent-modelling
Abstract:
The purpose of this special issue is to showcase current multi-agent systems research led by university and industry groups based in the United Kingdom. Research groups and institutes in the UK which have significant activity in multi-agent systems research were invited to submit an article describing: (1) the technical problems in multi-agent systems tackled by the group (their core research agenda), including applications and industry collaboration; (2) the main approaches developed by the group and any key results achieved; and (3) important open challenges in multi-agent systems research from the perspective of the group.
@article{albrecht2020special,
title = {Special Issue on Multi-Agent Systems Research in the United Kingdom: Guest Editorial},
author = {Stefano V. Albrecht and Michael Wooldridge},
journal = {AI Communications},
volume = {35},
number = {4},
year = {2022},
publisher = {IOS Press},
url = {https://content.iospress.com/articles/ai-communications/aic229003}
}
Ibrahim H. Ahmed, Cillian Brewitt, Ignacio Carlucho, Filippos Christianos, Mhairi Dunion, Elliot Fosong, Samuel Garcin, Shangmin Guo, Balint Gyevnar, Trevor McInroe, Georgios Papoudakis, Arrasy Rahman, Lukas Schäfer, Massimiliano Tamborski, Giuseppe Vecchio, Cheng Wang, Stefano V. Albrecht
Deep Reinforcement Learning for Multi-Agent Interaction
AI Communications, 2022
Abstract | BibTex | arXiv | Publisher
AIC, survey, deep-rl, multi-agent-rl, ad-hoc-teamwork, agent-modelling, goal-recognition, security, explainable-ai, autonomous-driving
Abstract:
The development of autonomous agents which can interact with other agents to accomplish a given task is a core area of research in artificial intelligence and machine learning. Towards this goal, the Autonomous Agents Research Group develops novel machine learning algorithms for autonomous systems control, with a specific focus on deep reinforcement learning and multi-agent reinforcement learning. Research problems include scalable learning of coordinated agent policies and inter-agent communication; reasoning about the behaviours, goals, and composition of other agents from limited observations; and sample-efficient learning based on intrinsic motivation, curriculum learning, causal inference, and representation learning. This article provides a broad overview of the ongoing research portfolio of the group and discusses open problems for future directions.
@article{albrecht2022aic,
author = {Ahmed, Ibrahim H. and Brewitt, Cillian and Carlucho, Ignacio and Christianos, Filippos and Dunion, Mhairi and Fosong, Elliot and Garcin, Samuel and Guo, Shangmin and Gyevnar, Balint and McInroe, Trevor and Papoudakis, Georgios and Rahman, Arrasy and Schäfer, Lukas and Tamborski, Massimiliano and Vecchio, Giuseppe and Wang, Cheng and Albrecht, Stefano V.},
title = {Deep Reinforcement Learning for Multi-Agent Interaction},
journal = {AI Communications, Special Issue on Multi-Agent Systems Research in the UK},
year = {2022}
}
Shangmin Guo, Yi Ren, Kory Mathewson, Simon Kirby, Stefano V. Albrecht, Kenny Smith
Expressivity of Emergent Languages is a Trade-off between Contextual Complexity and Unpredictability
International Conference on Learning Representations, 2022
Abstract | BibTex | arXiv | Code
ICLR, multi-agent-rl, emergent-communication
Abstract:
Researchers are using deep learning models to explore the emergence of language in various language games, where simulated agents interact and develop an emergent language to solve a task. We focus on the factors which determine the expressivity of emergent languages, which reflects the amount of information about input spaces those languages are capable of encoding. We measure the expressivity of emergent languages based on their generalisation performance across different games, and demonstrate that the expressivity of emergent languages is a trade-off between the complexity and unpredictability of the context those languages are used in. Another novel contribution of this work is the discovery of message type collapse. We also show that using the contrastive loss proposed by Chen et al. (2020) can alleviate this problem, compared with the standard referential loss used by the existing works.
@inproceedings{guo2022expressivity,
title={Expressivity of Emergent Languages is a Trade-off between Contextual Complexity and Unpredictability},
author={Shangmin Guo and Yi Ren and Kory Mathewson and Simon Kirby and Stefano V. Albrecht and Kenny Smith},
booktitle={International Conference on Learning Representations (ICLR)},
year={2022}
}
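An illustrative sketch of a contrastive objective in the style of Chen et al. (2020) applied to a referential game, as mentioned in the abstract: each message embedding should be most similar to the embedding of the object it describes, relative to the other objects in the batch. The encoders, dimensions, and data are placeholder assumptions, not the paper's implementation.
# Illustrative InfoNCE-style contrastive loss for a referential game.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
batch, dim, temperature = 8, 16, 0.1
message_emb = F.normalize(torch.randn(batch, dim), dim=-1)  # listener's encoding of each message
object_emb = F.normalize(torch.randn(batch, dim), dim=-1)   # encoding of each target object

# Similarity of every message with every candidate object in the batch.
logits = message_emb @ object_emb.t() / temperature
targets = torch.arange(batch)  # message i refers to object i

contrastive_loss = F.cross_entropy(logits, targets)
print(f"contrastive loss: {contrastive_loss.item():.3f}")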
Arrasy Rahman, Elliot Fosong, Ignacio Carlucho, Stefano V. Albrecht
Towards Robust Ad Hoc Teamwork Agents By Creating Diverse Training Teammates
IJCAI Workshop on Ad Hoc Teamwork, 2022
Abstract | BibTex | arXiv | Code
IJCAI, ad-hoc-teamwork, multi-agent-rl
Abstract:
Ad hoc teamwork (AHT) is the problem of creating an agent that must collaborate with previously unseen teammates without prior coordination. Many existing AHT methods can be categorised as type-based methods, which require a set of predefined teammates for training. Designing teammate types for training is a challenging issue that determines the generalisation performance of agents when dealing with teammate types unseen during training. In this work, we propose a method to discover diverse teammate types based on maximising best response diversity metrics. We show that our proposed approach yields teammate types that require a wider range of best responses from the learner during collaboration, which potentially improves the robustness of a learner's performance in AHT compared to alternative methods.
@inproceedings{rahman2022towards,
title={Towards Robust Ad Hoc Teamwork Agents By Creating Diverse Training Teammates},
author={Arrasy Rahman and Elliot Fosong and Ignacio Carlucho and Stefano V. Albrecht},
booktitle={IJCAI Workshop on Ad Hoc Teamwork},
year={2022}
}
Elliot Fosong, Arrasy Rahman, Ignacio Carlucho, Stefano V. Albrecht
Few-Shot Teamwork
IJCAI Workshop on Ad Hoc Teamwork, 2022
Abstract | BibTex | arXiv
IJCAI, ad-hoc-teamwork, multi-agent-rl
Abstract:
We propose the novel few-shot teamwork (FST) problem, where skilled agents trained in a team to complete one task are combined with skilled agents from different tasks, and together must learn to adapt to an unseen but related task. We discuss how the FST problem can be seen as addressing two separate problems: one of reducing the experience required to train a team of agents to complete a complex task; and one of collaborating with unfamiliar teammates to complete a new task. Progress towards solving FST could lead to progress in both multi-agent reinforcement learning and ad hoc teamwork.
@inproceedings{fosong2022fewshot,
title={Few-Shot Teamwork},
author={Elliot Fosong and Arrasy Rahman and Ignacio Carlucho and Stefano V. Albrecht},
booktitle={IJCAI Workshop on Ad Hoc Teamwork},
year={2022}
}
Ignacio Carlucho, Arrasy Rahman, William Ard, Elliot Fosong, Corina Barbalata, Stefano V. Albrecht
Cooperative Marine Operations Via Ad Hoc Teams
IJCAI Workshop on Ad Hoc Teamwork, 2022
Abstract | BibTex | arXiv
IJCAI, ad-hoc-teamwork, multi-agent-rl
Abstract:
While research in ad hoc teamwork has great potential for solving real-world robotic applications, most developments so far have been focusing on environments with simple dynamics. In this article, we discuss how the problem of ad hoc teamwork can be of special interest for marine robotics and how it can aid marine operations. Particularly, we present a set of challenges that need to be addressed for achieving ad hoc teamwork in underwater environments and we discuss possible solutions based on current state-of-the-art developments in the ad hoc teamwork literature.
@inproceedings{Carlucho2022UnderwaterAHT,
title={Cooperative Marine Operations Via Ad Hoc Teams},
author={Ignacio Carlucho and Arrasy Rahman and William Ard and Elliot Fosong and Corina Barbalata and Stefano V. Albrecht},
booktitle={IJCAI Workshop on Ad Hoc Teamwork},
year={2022}
}
Aleksandar Krnjaic, Jonathan D. Thomas, Georgios Papoudakis, Lukas Schäfer, Peter Börsting, Stefano V. Albrecht
Scalable Multi-Agent Reinforcement Learning for Warehouse Logistics with Robotic and Human Co-Workers
arXiv:2212.11498, 2022
Abstract | BibTex | arXiv
deep-rl, multi-agent-rl
Abstract:
This project leverages advances in Multi-Agent Reinforcement Learning (MARL) to improve the efficiency and flexibility of order-picking systems for large-scale commercial warehouses. We envision a warehouse of the future in which dozens or even hundreds of mobile robots and humans work together to collect and deliver items. The fundamental problem we tackle - called the order-picking problem - is how these agents must coordinate their movement and actions in the warehouse to maximise performance (e.g. order throughput) under given resource constraints. MARL algorithms implement a paradigm whereby the agents learn via a process of trial-and-error how to optimally collaborate with one another. Established industry methods using fixed heuristics require a large engineering effort to operate in specific warehouse configurations and resource constraints, and their achievable performance is often limited by heuristic design limitations. In contrast, the MARL framework can be applied to any warehouse configuration (e.g. size, layout, number/types of workers, item replenishment frequency) and resource constraints, and the learning process maximises performance by optimising agent behaviours for the specified warehouse environment.
@misc{Krnjaic2022HSNAC,
title={Scalable Multi-Agent Reinforcement Learning for Warehouse Logistics with Robotic and Human Co-Workers},
author={Aleksandar Krnjaic and Jonathan D. Thomas and Georgios Papoudakis and Lukas Sch\"afer and Peter B\"orsting and Stefano V. Albrecht},
year={2022},
eprint={2212.11498},
archivePrefix={arXiv}
}
Lukas Schäfer, Filippos Christianos, Amos Storkey, Stefano V. Albrecht
Learning Task Embeddings for Teamwork Adaptation in Multi-Agent Reinforcement Learning
arXiv:2207.02249, 2022
Abstract | BibTex | arXiv
deep-rl, multi-agent-rl
Abstract:
Successful deployment of multi-agent reinforcement learning often requires agents to adapt their behaviour. In this work, we discuss the problem of teamwork adaptation in which a team of agents needs to adapt their policies to solve novel tasks with limited fine-tuning. Motivated by the intuition that agents need to be able to identify and distinguish tasks in order to adapt their behaviour to the current task, we propose to learn multi-agent task embeddings (MATE). These task embeddings are trained using an encoder-decoder architecture optimised for reconstruction of the transition and reward functions which uniquely identify tasks. We show that a team of agents is able to adapt to novel tasks when provided with task embeddings. We propose three MATE training paradigms: independent MATE, centralised MATE, and mixed MATE which vary in the information used for the task encoding. We show that the embeddings learned by MATE identify tasks and provide useful information which agents leverage during adaptation to novel tasks.
@misc{schaefer2022mate,
title={Learning Task Embeddings for Teamwork Adaptation in Multi-Agent Reinforcement Learning},
author={Lukas Schäfer and Filippos Christianos and Amos Storkey and Stefano V. Albrecht},
year={2022},
eprint={2207.02249},
archivePrefix={arXiv},
primaryClass={cs.MA}
}
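A minimal sketch of learning a task embedding with an encoder-decoder optimised to reconstruct transitions and rewards, as described in the abstract. The architecture, dimensions, and data are illustrative assumptions rather than the exact MATE models.
# Minimal encoder-decoder sketch: embed a task from transitions, decode dynamics and reward.
import torch
import torch.nn as nn

torch.manual_seed(0)
obs_dim, act_dim, emb_dim = 8, 4, 16

encoder = nn.Sequential(nn.Linear(obs_dim + act_dim + 1 + obs_dim, 64), nn.ReLU(),
                        nn.Linear(64, emb_dim))
decoder = nn.Sequential(nn.Linear(obs_dim + act_dim + emb_dim, 64), nn.ReLU(),
                        nn.Linear(64, obs_dim + 1))  # predict next observation and reward

# A fake batch of transitions (obs, one-hot action, reward, next obs) from one task.
obs = torch.randn(32, obs_dim)
act = torch.nn.functional.one_hot(torch.randint(act_dim, (32,)), act_dim).float()
rew = torch.randn(32, 1)
next_obs = torch.randn(32, obs_dim)

task_emb = encoder(torch.cat([obs, act, rew, next_obs], dim=-1)).mean(dim=0)  # pool over batch
pred = decoder(torch.cat([obs, act, task_emb.expand(32, -1)], dim=-1))
loss = nn.functional.mse_loss(pred, torch.cat([next_obs, rew], dim=-1))
loss.backward()  # gradients flow into both encoder and decoder
print(f"reconstruction loss: {loss.item():.3f}")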
Filippos Christianos, Georgios Papoudakis, Stefano V. Albrecht
Pareto Actor-Critic for Equilibrium Selection in Multi-Agent Reinforcement Learning
arXiv:2209.14344, 2022
Abstract | BibTex | arXiv
deep-rl, multi-agent-rl
Abstract:
Equilibrium selection in multi-agent games refers to the problem of selecting a Pareto-optimal equilibrium. It has been shown that many state-of-the-art multi-agent reinforcement learning (MARL) algorithms are prone to converging to Pareto-dominated equilibria due to the uncertainty each agent has about the policy of the other agents during training. To address suboptimal equilibrium selection, we propose Pareto-AC (PAC), an actor-critic algorithm that utilises a simple principle of no-conflict games (a superset of cooperative games with identical rewards): each agent can assume the others will choose actions that will lead to a Pareto-optimal equilibrium. We evaluate PAC in a diverse set of multi-agent games and show that it converges to higher episodic returns compared to alternative MARL algorithms, as well as successfully converging to a Pareto-optimal equilibrium in a range of matrix games. Finally, we propose a graph neural network extension which is shown to efficiently scale in games with up to 15 agents.
@misc{christianos2022pareto,
title={Pareto Actor-Critic for Equilibrium Selection in Multi-Agent Reinforcement Learning},
author={Filippos Christianos and Georgios Papoudakis and Stefano V. Albrecht},
year={2022},
eprint={2209.14344},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
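An illustrative example of the equilibrium-selection problem from the abstract on a small no-conflict matrix game. The optimistic rule below ("assume the other agent also aims for the best joint outcome") only illustrates the principle PAC builds on; the actual algorithm is an actor-critic method with neural networks, not this tabular rule.
# Stag-hunt style no-conflict game: (stag, stag) is Pareto-optimal,
# (hare, hare) is a "safe" but Pareto-dominated equilibrium.
import numpy as np

payoff = np.array([[4.0, 0.0],   # row agent plays stag
                   [3.0, 3.0]])  # row agent plays hare
actions = ["stag", "hare"]

# Expected value under an uncertain (uniform) teammate favours the dominated equilibrium.
expected = payoff.mean(axis=1)
# Optimistic value, assuming the teammate also aims for the Pareto-optimal outcome.
optimistic = payoff.max(axis=1)

print("expected-value choice:  ", actions[int(np.argmax(expected))])    # hare
print("optimistic-value choice:", actions[int(np.argmax(optimistic))])  # stag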
2021
Georgios Papoudakis, Filippos Christianos, Lukas Schäfer, Stefano V. Albrecht
Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks
Conference on Neural Information Processing Systems, Datasets and Benchmarks Track, 2021
Abstract | BibTex | arXiv | Code
NeurIPS, deep-rl, multi-agent-rl
Abstract:
Multi-agent deep reinforcement learning (MARL) suffers from a lack of commonly-used evaluation tasks and criteria, making comparisons between approaches difficult. In this work, we consistently evaluate and compare three different classes of MARL algorithms (independent learning, centralised multi-agent policy gradient, value decomposition) in a diverse range of cooperative multi-agent learning tasks. Our experiments serve as a reference for the expected performance of algorithms across different learning tasks, and we provide insights regarding the effectiveness of different learning approaches. We open-source EPyMARL, which extends the PyMARL codebase [Samvelyan et al., 2019] to include additional algorithms and allow for flexible configuration of algorithm implementation details such as parameter sharing. Finally, we open-source two environments for multi-agent research which focus on coordination under sparse rewards.
@inproceedings{papoudakis2021benchmarking,
title={Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms in Cooperative Tasks},
author={Georgios Papoudakis and Filippos Christianos and Lukas Sch\"afer and Stefano V. Albrecht},
booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (NeurIPS)},
year={2021},
url = {http://arxiv.org/abs/2006.07869},
openreview = {https://openreview.net/forum?id=cIrPX-Sn5n},
code = {https://github.com/uoe-agents/epymarl}
}
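For context, a minimal sketch of the value-decomposition class of algorithms compared in the benchmark (e.g. VDN): the joint action value is factored as a sum of per-agent utilities, so a single team-level error trains every agent network. Networks and data here are illustrative; the actual implementations evaluated in the paper are provided in EPyMARL.
# Minimal VDN-style value decomposition sketch.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_agents, obs_dim, n_actions = 3, 10, 5
agent_nets = nn.ModuleList([nn.Linear(obs_dim, n_actions) for _ in range(n_agents)])

obs = torch.randn(n_agents, obs_dim)              # one observation per agent
chosen = torch.randint(n_actions, (n_agents,))    # actions taken by each agent
q_values = torch.stack([net(o) for net, o in zip(agent_nets, obs)])
q_chosen = q_values.gather(1, chosen.unsqueeze(1)).squeeze(1)

q_tot = q_chosen.sum()           # joint value: sum of per-agent utilities
team_target = torch.tensor(1.5)  # e.g. team reward plus bootstrapped next-state value
loss = (q_tot - team_target) ** 2
loss.backward()                  # a single team-level error trains every agent network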
Filippos Christianos, Georgios Papoudakis, Arrasy Rahman, Stefano V. Albrecht
Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing
International Conference on Machine Learning, 2021
Abstract | BibTex | arXiv | Video | Code
ICML, deep-rl, multi-agent-rl
Abstract:
Sharing parameters in multi-agent deep reinforcement learning has played an essential role in allowing algorithms to scale to a large number of agents. Parameter sharing between agents significantly decreases the number of trainable parameters, shortening training times to tractable levels, and has been linked to more efficient learning. However, having all agents share the same parameters can also have a detrimental effect on learning. We demonstrate the impact of parameter sharing methods on training speed and converged returns, establishing that when applied indiscriminately, their effectiveness is highly dependent on the environment. We propose a novel method to automatically identify agents which may benefit from sharing parameters by partitioning them based on their abilities and goals. Our approach combines the increased sample efficiency of parameter sharing with the representational capacity of multiple independent networks to reduce training time and increase final returns.
@inproceedings{christianos2021scaling,
title={Scaling Multi-Agent Reinforcement Learning with Selective Parameter Sharing},
author={Filippos Christianos and Georgios Papoudakis and Arrasy Rahman and Stefano V. Albrecht},
booktitle={International Conference on Machine Learning (ICML)},
year={2021}
}
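An illustrative sketch of selective parameter sharing: agents are partitioned (here by a hand-picked grouping standing in for the paper's automatic identification of agents with similar abilities and goals), and agents in the same partition share one policy network while different partitions keep separate parameters.
# Illustrative selective parameter sharing via per-partition networks.
import torch
import torch.nn as nn

torch.manual_seed(0)
obs_dim, n_actions = 12, 6
# Example: agents 0 and 1 play a similar role, agent 2 is different (hand-picked grouping).
partition_of_agent = {0: "role_a", 1: "role_a", 2: "role_b"}

shared_nets = nn.ModuleDict({
    role: nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
    for role in set(partition_of_agent.values())
})

observations = {agent: torch.randn(obs_dim) for agent in partition_of_agent}
logits = {agent: shared_nets[partition_of_agent[agent]](obs)
          for agent, obs in observations.items()}

# Agents 0 and 1 update the same parameters; agent 2 trains its own network.
for agent, l in logits.items():
    print(agent, partition_of_agent[agent], l.shape)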
Shangmin Guo, Yi Ren, Kory Mathewson, Simon Kirby, Stefano V. Albrecht, Kenny Smith
Expressivity of Emergent Language is a Trade-off between Contextual Complexity and Unpredictability
arXiv:2106.03982, 2021
Abstract | BibTex | arXiv
multi-agent-rl, emergent-communication
Abstract:
Researchers are now using deep learning models to explore the emergence of language in various language games, where simulated agents interact and develop an emergent language to solve a task. Although it is quite intuitive that different types of language games posing different communicative challenges might require emergent languages which encode different levels of information, there is no existing work exploring the expressivity of the emergent languages. In this work, we propose a definition of partial order between expressivity based on the generalisation performance across different language games. We also validate the hypothesis that expressivity of emergent languages is a trade-off between the complexity and unpredictability of the context those languages are used in. Our second novel contribution is introducing contrastive loss into the implementation of referential games. We show that using our contrastive loss alleviates the collapse of message types seen using standard referential loss functions.
@misc{guo2021expressivity,
title={Expressivity of Emergent Language is a Trade-off between Contextual Complexity and Unpredictability},
author={Shangmin Guo and Yi Ren and Kory Mathewson and Simon Kirby and Stefano V. Albrecht and Kenny Smith},
year={2021},
eprint={2106.03982},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
2020
Filippos Christianos, Lukas Schäfer, Stefano V. Albrecht
Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning
Conference on Neural Information Processing Systems, 2020
Abstract | BibTex | arXiv
NeurIPS, deep-rl, multi-agent-rl
Abstract:
Exploration in multi-agent reinforcement learning is a challenging problem, especially in environments with sparse rewards. We propose a general method for efficient exploration by sharing experience amongst agents. Our proposed algorithm, called Shared Experience Actor-Critic (SEAC), applies experience sharing in an actor-critic framework. We evaluate SEAC in a collection of sparse-reward multi-agent environments and find that it consistently outperforms two baselines and two state-of-the-art algorithms by learning in fewer steps and converging to higher returns. In some harder environments, experience sharing makes the difference between learning to solve the task and not learning at all.
@inproceedings{christianos2020shared,
title={Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning},
author={Filippos Christianos and Lukas Sch\"afer and Stefano V. Albrecht},
booktitle={34th Conference on Neural Information Processing Systems},
year={2020}
}
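A sketch of the experience-sharing idea from the abstract: an agent's policy is trained on its own transitions and on transitions collected by other agents, with an importance weight correcting for the different behaviour policy. The weighting shown is one natural realisation and the networks, coefficients, and data are assumptions; see the paper for the exact SEAC losses.
# Sketch: reusing another agent's transition with an importance-weighted policy loss.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
obs_dim, n_actions, n_agents = 8, 4, 2
policies = [torch.nn.Linear(obs_dim, n_actions) for _ in range(n_agents)]

def policy_loss_on(policy, behaviour_policy, obs, action, advantage):
    """Policy-gradient loss on one transition, importance-weighted when the
    transition was generated by a different (behaviour) policy."""
    logp = F.log_softmax(policy(obs), dim=-1)[action]
    with torch.no_grad():
        behaviour_logp = F.log_softmax(behaviour_policy(obs), dim=-1)[action]
        importance = torch.exp(logp.detach() - behaviour_logp)
    return -importance * logp * advantage

# Agent 0 learns from its own transition and from one collected by agent 1.
obs, action, advantage = torch.randn(obs_dim), 2, 1.3
own_loss = policy_loss_on(policies[0], policies[0], obs, action, advantage)     # weight is 1
shared_loss = policy_loss_on(policies[0], policies[1], obs, action, advantage)  # reweighted
total_loss = own_loss + shared_loss
total_loss.backward()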
Georgios Papoudakis, Filippos Christianos, Lukas Schäfer, Stefano V. Albrecht
Comparative Evaluation of Multi-Agent Deep Reinforcement Learning Algorithms
arXiv:2006.07869, 2020
Abstract | BibTex | arXiv
deep-rl, multi-agent-rl
Abstract:
Multi-agent deep reinforcement learning (MARL) suffers from a lack of commonly-used evaluation tasks and criteria, making comparisons between approaches difficult. In this work, we evaluate and compare three different classes of MARL algorithms (independent learners, centralised training with decentralised execution, and value decomposition) in a diverse range of multi-agent learning tasks. Our results show that (1) algorithm performance depends strongly on environment properties and no algorithm learns efficiently across all learning tasks; (2) independent learners often achieve equal or better performance than more complex algorithms; (3) tested algorithms struggle to solve multi-agent tasks with sparse rewards. We report detailed empirical data, including a reliability analysis, and provide insights into the limitations of the tested algorithms.
@misc{papoudakis2020comparative,
title={Comparative Evaluation of Multi-Agent Deep Reinforcement Learning Algorithms},
author={Georgios Papoudakis and Filippos Christianos and Lukas Sch\"afer and Stefano V. Albrecht},
year={2020},
eprint={2006.07869},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
2019
Georgios Papoudakis, Filippos Christianos, Arrasy Rahman, Stefano V. Albrecht
Dealing with Non-Stationarity in Multi-Agent Deep Reinforcement Learning
arXiv:1906.04737, 2019
Abstract | BibTex | arXiv
survey, deep-rl, multi-agent-rl
Abstract:
Recent developments in deep reinforcement learning are concerned with creating decision-making agents which can perform well in various complex domains. A particular approach which has received increasing attention is multi-agent reinforcement learning, in which multiple agents learn concurrently to coordinate their actions. In such multi-agent environments, additional learning problems arise due to the continually changing decision-making policies of agents. This paper surveys recent works that address the non-stationarity problem in multi-agent deep reinforcement learning. The surveyed methods range from modifications in the training procedure, such as centralized training, to learning representations of the opponent's policy, meta-learning, communication, and decentralized learning. The survey concludes with a list of open problems and possible lines of future research.
@misc{papoudakis2019dealing,
title={Dealing with Non-Stationarity in Multi-Agent Deep Reinforcement Learning},
author={Georgios Papoudakis and Filippos Christianos and Arrasy Rahman and Stefano V. Albrecht},
year={2019},
eprint={1906.04737},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
2012
Stefano V. Albrecht, Subramanian Ramamoorthy
Comparative Evaluation of Multiagent Learning Algorithms in a Diverse Set of Ad Hoc Team Problems
International Conference on Autonomous Agents and Multiagent Systems, 2012
Abstract | BibTex | arXiv
AAMAS, multi-agent-rl, ad-hoc-teamwork
Abstract:
This paper is concerned with evaluating different multiagent learning (MAL) algorithms in problems where individual agents may be heterogeneous, in the sense of utilizing different learning strategies, without the opportunity for prior agreements or information regarding coordination. Such a situation arises in ad hoc team problems, a model of many practical multiagent systems applications. Prior work in multiagent learning has often been focussed on homogeneous groups of agents, meaning that all agents were identical and a priori aware of this fact. Also, those algorithms that are specifically designed for ad hoc team problems are typically evaluated in teams of agents with fixed behaviours, as opposed to agents which are adapting their behaviours. In this work, we empirically evaluate five MAL algorithms, representing major approaches to multiagent learning but originally developed with the homogeneous setting in mind, to understand their behaviour in a set of ad hoc team problems. All teams consist of agents which are continuously adapting their behaviours. The algorithms are evaluated with respect to a comprehensive characterisation of repeated matrix games, using performance criteria that include considerations such as attainment of equilibrium, social welfare and fairness. Our main conclusion is that there is no clear winner. However, the comparative evaluation also highlights the relative strengths of different algorithms with respect to the type of performance criteria, e.g., social welfare vs. attainment of equilibrium.
@inproceedings{albrecht2012comparative,
title = {Comparative Evaluation of {MAL} Algorithms in a Diverse Set of Ad Hoc Team Problems},
author = {Stefano V. Albrecht and Subramanian Ramamoorthy},
booktitle = {Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems},
pages = {349--356},
year = {2012}
}
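An illustrative sketch of the evaluation setting described in the abstract: two independently adapting learners repeatedly play a matrix game, and performance is summarised by a criterion such as social welfare. The epsilon-greedy learners below are simple stand-ins, not the five MAL algorithms compared in the paper.
# Two adapting learners in a repeated 2x2 coordination game; report social welfare.
import numpy as np

rng = np.random.default_rng(0)
# payoffs[a0, a1] = (reward agent 0, reward agent 1); matching actions is preferred.
payoffs = np.array([[[2, 2], [0, 0]],
                    [[0, 0], [1, 1]]])

q = np.zeros((2, 2))  # q[agent, action]
alpha, epsilon, episodes = 0.1, 0.1, 5000
welfare = 0.0

for _ in range(episodes):
    actions = [int(np.argmax(q[i])) if rng.random() > epsilon else rng.integers(2)
               for i in range(2)]
    rewards = payoffs[actions[0], actions[1]]
    welfare += rewards.sum()
    for i in range(2):
        q[i, actions[i]] += alpha * (rewards[i] - q[i, actions[i]])

print(f"average social welfare per round: {welfare / episodes:.2f}")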