Publications
2024
Guy Azran, Mohamad H. Danesh, Stefano V. Albrecht, Sarah Keren
Contextual Pre-planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning
AAAI Conference on Artificial Intelligence, 2024
Abstract | BibTeX | arXiv | Code | Video
AAAI, deep-rl, causal
Abstract:
Recent studies show that deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor environment changes. To expedite learning when transferring to unseen tasks, we propose a novel approach to representing the current task using reward machines (RMs), state machine abstractions that induce subtasks based on the current task’s rewards and dynamics. Our method provides agents with symbolic representations of optimal transitions from their current abstract state and rewards them for achieving these transitions. These representations are shared across tasks, allowing agents to exploit knowledge of previously encountered symbols and transitions, thus enhancing transfer. Empirical results show that our representations improve sample efficiency and few-shot transfer in a variety of domains.
@inproceedings{azran2024contextual,
title={Contextual Pre-planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning},
author={Guy Azran and Mohamad H. Danesh and Stefano V. Albrecht and Sarah Keren},
booktitle={Proceedings of the 38th AAAI Conference on Artificial Intelligence},
year={2024}
}
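To make the reward machine abstraction described in the abstract above concrete: an RM is a finite state machine whose transitions fire on high-level propositional symbols and emit rewards, carving a task into subtasks. Below is a minimal sketch in Python; the names (RewardMachine, step) and the example task are illustrative and not taken from the paper's code release.

# Minimal reward machine sketch: abstract states, symbol-labelled
# transitions, and per-transition rewards. Illustrative only.
class RewardMachine:
    def __init__(self, initial_state, transitions):
        # transitions: {(rm_state, symbol): (next_rm_state, reward)}
        self.initial_state = initial_state
        self.transitions = transitions
        self.state = initial_state

    def step(self, symbol):
        # Advance the abstract state when a symbol is observed; return reward.
        if (self.state, symbol) in self.transitions:
            self.state, reward = self.transitions[(self.state, symbol)]
            return reward
        return 0.0  # no abstract transition triggered

    def reset(self):
        self.state = self.initial_state

# Example: "pick up the key, then open the door" as a two-subtask RM.
rm = RewardMachine(
    initial_state="u0",
    transitions={
        ("u0", "got_key"): ("u1", 0.0),  # subtask 1 complete
        ("u1", "at_door"): ("u2", 1.0),  # task complete
    },
)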
Guy Azran, Mohamad H. Danesh, Stefano V. Albrecht, Sarah Keren
Contextual Pre-planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning
ICAPS Workshop on Planning and Reinforcement Learning, 2024
Abstract | BibTeX | arXiv | Code | Video
ICAPS, deep-rl, causal
Abstract:
Recent studies show that deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor environment changes. To expedite learning when transferring to unseen tasks, we propose a novel approach to representing the current task using reward machines (RMs), state machine abstractions that induce subtasks based on the current task’s rewards and dynamics. Our method provides agents with symbolic representations of optimal transitions from their current abstract state and rewards them for achieving these transitions. These representations are shared across tasks, allowing agents to exploit knowledge of previously encountered symbols and transitions, thus enhancing transfer. Empirical results show that our representations improve sample efficiency and few-shot transfer in a variety of domains.
@inproceedings{azran2024contextualicaps,
title={Contextual Pre-planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning},
author={Azran, Guy and Danesh, Mohamad H. and Albrecht, Stefano V. and Keren, Sarah},
booktitle={ICAPS Workshop on Planning and Reinforcement Learning (https://prl-theworkshop.github.io/prl2024-icaps/)},
year={2024}
}
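The "contextual pre-planning" step in the abstract can be pictured as planning over the RM graph itself: value iteration over abstract states identifies which outgoing transition from the agent's current RM state lies on an optimal path, and that transition's symbol is the context the agent is given and rewarded for achieving. The sketch below follows that reading under stated assumptions, reusing the transitions dictionary format from the sketch above; the function names are hypothetical, not the authors' implementation.

def rm_values(transitions, terminal, gamma=0.9, iters=100):
    # Value iteration over RM states, using RM transition rewards.
    states = {u for (u, _) in transitions} | {v for (v, _) in transitions.values()}
    V = {u: 0.0 for u in states}
    for _ in range(iters):
        for u in states:
            if u == terminal:
                continue
            candidates = [r + gamma * V[v]
                          for (s, _), (v, r) in transitions.items() if s == u]
            if candidates:
                V[u] = max(candidates)
    return V

def optimal_symbol(u, transitions, V, gamma=0.9):
    # The symbol labelling the best outgoing RM transition from state u:
    # the contextual cue provided to (and rewarded in) the agent.
    scored = [(sym, r + gamma * V[v])
              for (s, sym), (v, r) in transitions.items() if s == u]
    return max(scored, key=lambda x: x[1])[0] if scored else None

# Usage on the key-and-door RM:
transitions = {
    ("u0", "got_key"): ("u1", 0.0),
    ("u1", "at_door"): ("u2", 1.0),
}
V = rm_values(transitions, terminal="u2")
print(optimal_symbol("u0", transitions, V))  # -> got_key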
2023
Guy Azran, Mohamad H. Danesh, Stefano V. Albrecht, Sarah Keren
Contextual Pre-Planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning
NeurIPS Workshop on Generalization in Planning, 2023
Abstract | BibTeX | arXiv
NeurIPS, deep-rl, causal
Abstract:
Recent studies show that deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor environment changes. To expedite learning when transferring to unseen tasks, we propose a novel approach to representing the current task using reward machines (RMs), state machine abstractions that induce subtasks based on the current task’s rewards and dynamics. Our method provides agents with symbolic representations of optimal transitions from their current abstract state and rewards them for achieving these transitions. These representations are shared across tasks, allowing agents to exploit knowledge of previously encountered symbols and transitions, thus enhancing transfer. Our empirical evaluation shows that our representations improve sample efficiency and few-shot transfer in a variety of domains.
@inproceedings{azran2023contextual,
title={Contextual Pre-Planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning},
author={Guy Azran and Mohamad H. Danesh and Stefano V. Albrecht and Sarah Keren},
booktitle={NeurIPS Workshop on Generalization in Planning},
year={2023}
}
Guy Azran, Mohamad H. Danesh, Stefano V. Albrecht, Sarah Keren
Contextual Pre-Planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning
IJCAI Workshop on Planning and Reinforcement Learning, 2023
Abstract | BibTeX | arXiv
IJCAI, deep-rl, causal
Abstract:
Recent studies show that deep reinforcement learning (DRL) agents tend to overfit to the task on which they were trained and fail to adapt to minor environment changes. To expedite learning when transferring to unseen tasks, we propose a novel approach to representing the current task using reward machines (RMs), state machine abstractions that induce subtasks based on the current task’s rewards and dynamics. Our method provides agents with symbolic representations of optimal transitions from their current abstract state and rewards them for achieving these transitions. These representations are shared across tasks, allowing agents to exploit knowledge of previously encountered symbols and transitions, thus enhancing transfer. Our empirical evaluation shows that our representations improve sample efficiency and few-shot transfer in a variety of domains.
@inproceedings{azran2023contextualijcai,
title={Contextual Pre-Planning on Reward Machine Abstractions for Enhanced Transfer in Deep Reinforcement Learning},
author={Guy Azran and Mohamad H. Danesh and Stefano V. Albrecht and Sarah Keren},
booktitle={IJCAI Workshop on Planning and Reinforcement Learning (https://prl-theworkshop.github.io/)},
year={2023}
}
2022
Guy Azran, Mohamad Hosein Danesh, Stefano V. Albrecht, Sarah Keren
Enhancing Transfer of Reinforcement Learning Agents with Abstract Contextual Embeddings
NeurIPS Workshop on Neuro Causal and Symbolic AI, 2022
Abstract | BibTeX
NeurIPS, deep-rl, causal
Abstract:
Deep reinforcement learning (DRL) algorithms have seen great success in performing a plethora of tasks, but often have trouble adapting to changes in the environment. We address this issue by using reward machines (RMs), graph-based abstractions of the underlying task, to represent the current setting or context. Using a graph neural network (GNN), we embed the RMs into deep latent vector representations and provide them to the agent to enhance its ability to adapt to new contexts. To the best of our knowledge, this is the first work to embed contextual abstractions and let the agent decide how to use them. Our preliminary empirical evaluation demonstrates improved sample efficiency of our approach upon context transfer on a set of grid navigation tasks.
@inproceedings{Azran2022enhancing,
title={Enhancing Transfer of Reinforcement Learning Agents with Abstract Contextual Embeddings},
author={Guy Azran and Mohamad Hosein Danesh and Stefano V. Albrecht and Sarah Keren},
booktitle={NeurIPS Workshop on Neuro Causal and Symbolic AI (https://ncsi.cause-lab.net)},
year={2022}
}
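The GNN embedding of RMs described in the 2022 abstract could look roughly like the following, assuming PyTorch Geometric; the layer choices (GCNConv, mean pooling) and dimensions are placeholders for illustration, not details taken from the paper.

import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class RMEncoder(torch.nn.Module):
    # Embeds an RM graph (nodes = abstract states, edges = transitions)
    # into one latent vector the agent can condition on.
    def __init__(self, node_dim, hidden_dim=64, embed_dim=32):
        super().__init__()
        self.conv1 = GCNConv(node_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, embed_dim)

    def forward(self, x, edge_index, batch):
        # x: node features (N x node_dim); edge_index: transitions (2 x E);
        # batch: graph id per node, so multiple RMs can be pooled at once.
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index)
        return global_mean_pool(h, batch)  # one embedding per RM graph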