Publications
2023
Lukas Schäfer, Filippos Christianos, Amos Storkey, Stefano V. Albrecht
Learning Task Embeddings for Teamwork Adaptation in Multi-Agent Reinforcement Learning
NeurIPS Workshop on Generalization in Planning, 2023
Abstract | BibTeX | arXiv | Code
Tags: NeurIPS, multi-agent-rl, deep-rl
Abstract:
Successful deployment of multi-agent reinforcement learning often requires agents to adapt their behaviour. In this work, we discuss the problem of teamwork adaptation in which a team of agents needs to adapt their policies to solve novel tasks with limited fine-tuning. Motivated by the intuition that agents need to be able to identify and distinguish tasks in order to adapt their behaviour to the current task, we propose to learn multi-agent task embeddings (MATE). These task embeddings are trained using an encoder-decoder architecture optimised for reconstruction of the transition and reward functions which uniquely identify tasks. We show that a team of agents is able to adapt to novel tasks when provided with task embeddings. We propose three MATE training paradigms: independent MATE, centralised MATE, and mixed MATE, which vary in the information used for the task encoding. We show that the embeddings learned by MATE identify tasks and provide useful information which agents leverage during adaptation to novel tasks.
@inproceedings{schaefer2023mate,
  title={Learning Task Embeddings for Teamwork Adaptation in Multi-Agent Reinforcement Learning},
  author={Lukas Schäfer and Filippos Christianos and Amos Storkey and Stefano V. Albrecht},
  booktitle={NeurIPS Workshop on Generalization in Planning},
  year={2023}
}
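The encoder-decoder training described in the abstract can be illustrated with a minimal PyTorch sketch. This is an illustrative assumption, not the authors' released implementation (see the Code link above): an encoder maps a transition (s, a, r, s') to a task embedding z, and a decoder is trained to reconstruct the next state and reward from (s, a, z).

# Minimal sketch of a task-embedding encoder-decoder as described in the
# abstract. All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class TaskEncoder(nn.Module):
    def __init__(self, obs_dim, act_dim, embed_dim=16, hidden=64):
        super().__init__()
        # Input is a full transition (s, a, r, s'); reward is a scalar.
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1 + obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, obs, act, rew, next_obs):
        # rew has shape (..., 1) so it can be concatenated.
        return self.net(torch.cat([obs, act, rew, next_obs], dim=-1))

class TransitionRewardDecoder(nn.Module):
    def __init__(self, obs_dim, act_dim, embed_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + embed_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, obs_dim + 1),  # predict (s', r) jointly
        )

    def forward(self, obs, act, z):
        out = self.net(torch.cat([obs, act, z], dim=-1))
        return out[..., :-1], out[..., -1:]  # next-state and reward predictions

def reconstruction_loss(encoder, decoder, obs, act, rew, next_obs):
    # Reconstruction of the transition and reward functions, which per the
    # abstract uniquely identify the task.
    z = encoder(obs, act, rew, next_obs)
    pred_next, pred_rew = decoder(obs, act, z)
    return (nn.functional.mse_loss(pred_next, next_obs)
            + nn.functional.mse_loss(pred_rew, rew))

Per the abstract, the independent, centralised, and mixed variants would differ in whether the encoder sees a single agent's local transition or the joint team transition; this sketch shows only the shared reconstruction objective.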
Trevor McInroe, Stefano V. Albrecht, Amos Storkey
Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning
arXiv:2310.05723, 2023
Abstract | BibTeX | arXiv
Tags: deep-rl
Abstract:
Offline pretraining with a static dataset followed by online fine-tuning (offline-to-online, or OtO) is a paradigm that is well matched to a real-world RL deployment process: in few real settings would one deploy an offline policy with no test runs and tuning. In this scenario, we aim to find the best-performing policy within a limited budget of online interactions. Previous work in the OtO setting has focused on correcting for bias introduced by the policy-constraint mechanisms of offline RL algorithms. Such constraints keep the learned policy close to the behavior policy that collected the dataset, but this unnecessarily limits policy performance if the behavior policy is far from optimal. Instead, we forgo policy constraints and frame OtO RL as an exploration problem: we must maximize the benefit of the online data-collection. We study major online RL exploration paradigms, adapting them to work well with the OtO setting. These adapted methods contribute several strong baselines. Also, we introduce an algorithm for planning to go out of distribution (PTGOOD), which targets online exploration in relatively high-reward regions of the state-action space unlikely to be visited by the behavior policy. By leveraging concepts from the Conditional Entropy Bottleneck, PTGOOD encourages data collected online to provide new information relevant to improving the final deployment policy. In that way the limited interaction budget is used effectively. We show that PTGOOD significantly improves agent returns during online fine-tuning and finds the optimal policy in as few as 10k online steps in Walker and in as few as 50k in complex control tasks like Humanoid. Also, we find that PTGOOD avoids the suboptimal policy convergence that many of our baselines exhibit in several environments.
@misc{mcinroe2023planning,
  title={Planning to Go Out-of-Distribution in Offline-to-Online Reinforcement Learning},
  author={Trevor McInroe and Stefano V. Albrecht and Amos Storkey},
  year={2023},
  eprint={2310.05723},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
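As a rough illustration of the planning idea in the abstract, the sketch below scores candidate action sequences by predicted return plus a novelty bonus and executes the best first action. All interfaces here (dynamics_model, reward_model, novelty_fn) are assumed placeholders; in particular, novelty_fn stands in for the paper's Conditional Entropy Bottleneck machinery, whose details this sketch does not reproduce.

# Loose sketch: plan towards high-reward, out-of-distribution regions by
# combining model-predicted return with a novelty bonus for state-action
# pairs unlikely under the behavior policy. Interfaces are assumptions.
def plan_out_of_distribution(obs, candidate_action_seqs, dynamics_model,
                             reward_model, novelty_fn, bonus_weight=1.0):
    best_score, best_first_action = float("-inf"), None
    for actions in candidate_action_seqs:  # e.g. sampled from a proposal distribution
        score, state = 0.0, obs
        for action in actions:
            score += reward_model(state, action)               # predicted reward
            score += bonus_weight * novelty_fn(state, action)  # OOD exploration bonus
            state = dynamics_model(state, action)              # predicted next state
        if score > best_score:
            best_score, best_first_action = score, actions[0]
    return best_first_action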
2022
Lukas Schäfer, Filippos Christianos, Amos Storkey, Stefano V. Albrecht
Learning Task Embeddings for Teamwork Adaptation in Multi-Agent Reinforcement Learning
arXiv:2207.02249, 2022
Abstract | BibTeX | arXiv
Tags: deep-rl, multi-agent-rl
Abstract:
Successful deployment of multi-agent reinforcement learning often requires agents to adapt their behaviour. In this work, we discuss the problem of teamwork adaptation in which a team of agents needs to adapt their policies to solve novel tasks with limited fine-tuning. Motivated by the intuition that agents need to be able to identify and distinguish tasks in order to adapt their behaviour to the current task, we propose to learn multi-agent task embeddings (MATE). These task embeddings are trained using an encoder-decoder architecture optimised for reconstruction of the transition and reward functions which uniquely identify tasks. We show that a team of agents is able to adapt to novel tasks when provided with task embeddings. We propose three MATE training paradigms: independent MATE, centralised MATE, and mixed MATE, which vary in the information used for the task encoding. We show that the embeddings learned by MATE identify tasks and provide useful information which agents leverage during adaptation to novel tasks.
@misc{schaefer2022mate,
  title={Learning Task Embeddings for Teamwork Adaptation in Multi-Agent Reinforcement Learning},
  author={Lukas Schäfer and Filippos Christianos and Amos Storkey and Stefano V. Albrecht},
  year={2022},
  eprint={2207.02249},
  archivePrefix={arXiv},
  primaryClass={cs.MA}
}