Publications
All topic tags: survey, deep-rl, multi-agent-rl, agent-modelling, ad-hoc-teamwork, autonomous-driving, goal-recognition, explainable-ai, causal, generalisation, security, emergent-communication, iterated-learning, intrinsic-reward, simulator, state-estimation, deep-learning, transfer-learning
Selected tags: Subramanian-Ramamoorthy, Stefano-V.-Albrecht, agent-modelling
2016
Stefano V. Albrecht, Jacob W. Crandall, Subramanian Ramamoorthy
Belief and Truth in Hypothesised Behaviours
Artificial Intelligence, 2016
Abstract | BibTeX | arXiv | Publisher
AIJ, agent-modelling, ad-hoc-teamwork
Abstract:
There is a long history in game theory on the topic of Bayesian or “rational” learning, in which each player maintains beliefs over a set of alternative behaviours, or types, for the other players. This idea has gained increasing interest in the artificial intelligence (AI) community, where it is used as a method to control a single agent in a system composed of multiple agents with unknown behaviours. The idea is to hypothesise a set of types, each specifying a possible behaviour for the other agents, and to plan our own actions with respect to those types which we believe are most likely, given the observed actions of the agents. The game theory literature studies this idea primarily in the context of equilibrium attainment. In contrast, many AI applications have a focus on task completion and payoff maximisation. With this perspective in mind, we identify and address a spectrum of questions pertaining to belief and truth in hypothesised types. We formulate three basic ways to incorporate evidence into posterior beliefs and show when the resulting beliefs are correct, and when they may fail to be correct. Moreover, we demonstrate that prior beliefs can have a significant impact on our ability to maximise payoffs in the long term, and that they can be computed automatically with consistent performance effects. Furthermore, we analyse the conditions under which we are able to complete our task optimally, despite inaccuracies in the hypothesised types. Finally, we show how the correctness of hypothesised types can be ascertained during the interaction via an automated statistical analysis.
@article{ albrecht2016belief,
title = {Belief and Truth in Hypothesised Behaviours},
author = {Stefano V. Albrecht and Jacob W. Crandall and Subramanian Ramamoorthy},
journal = {Artificial Intelligence},
volume = {235},
pages = {63--94},
year = {2016},
publisher = {Elsevier},
doi = {10.1016/j.artint.2016.02.004}
}
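
The standard Bayesian ("product") posterior over a hypothesised type set, one of the evidence-incorporation schemes studied in the paper, can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' code: the policy interface, type set, and function names below are assumptions made for the example.

def product_posterior(prior, type_policies, observed_actions):
    """Product (standard Bayesian) posterior over hypothesised types.

    prior            : dict {type_id: prior probability}
    type_policies    : dict {type_id: policy}; policy(history, action) gives
                       the probability the type assigns to `action` after `history`
    observed_actions : the other agent's actions, oldest first
    """
    belief = dict(prior)
    for t, action in enumerate(observed_actions):
        history = observed_actions[:t]
        for theta in belief:
            # Multiply in the likelihood of the observed action under each type
            belief[theta] *= type_policies[theta](history, action)
    total = sum(belief.values())
    if total == 0.0:
        # No hypothesised type explains the data; fall back to the prior
        return dict(prior)
    return {theta: b / total for theta, b in belief.items()}

# Example: two hypothesised types for an agent with actions "a" and "b"
always_a = lambda history, action: 1.0 if action == "a" else 0.0
uniform = lambda history, action: 0.5
belief = product_posterior(
    {"always_a": 0.5, "uniform": 0.5},
    {"always_a": always_a, "uniform": uniform},
    ["a", "a", "a"],
)
# belief concentrates on "always_a": 0.5 vs 0.5 * 0.5**3, normalised to ~0.89/0.11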
2015
Stefano V. Albrecht, Subramanian Ramamoorthy
Are You Doing What I Think You Are Doing? Criticising Uncertain Agent Models
Conference on Uncertainty in Artificial Intelligence, 2015
Abstract | BibTeX | arXiv
UAI, agent-modelling
Abstract:
The key to effective interaction in many multiagent applications is to reason explicitly about the behaviour of other agents, in the form of a hypothesised behaviour. While there exist several methods for the construction of a behavioural hypothesis, there is currently no universal theory which would allow an agent to contemplate the correctness of a hypothesis. In this work, we present a novel algorithm which decides this question in the form of a frequentist hypothesis test. The algorithm allows for multiple metrics in the construction of the test statistic and learns its distribution during the interaction process, with asymptotic correctness guarantees. We present results from a comprehensive set of experiments, demonstrating that the algorithm achieves high accuracy and scalability at low computational costs.
@inproceedings{ albrecht2015criticising,
title = {Are You Doing What {I} Think You Are Doing? Criticising Uncertain Agent Models},
author = {Stefano V. Albrecht and Subramanian Ramamoorthy},
booktitle = {Proceedings of the 31st Conference on Uncertainty in Artificial Intelligence},
pages = {52--61},
year = {2015}
}
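
The underlying idea of such a frequentist model-criticism test can be sketched roughly as follows; this is a simplification, not the paper's algorithm (the single score function, the interface, and the Monte Carlo resampling below are assumptions): score how well the hypothesised model predicts the observed actions, estimate the null distribution of that score by sampling action sequences from the model itself, and reject the hypothesis when the observed score is extreme.

import random

def model_criticism_pvalue(model, observed_actions, score,
                           n_samples=1000, rng=random):
    """Frequentist criticism of a hypothesised behaviour model (sketch).

    model            : callable(history) -> dict {action: probability},
                       the hypothesised behaviour of the other agent
    observed_actions : the actions the agent actually took, oldest first
    score            : callable(probs, action) -> float, higher = better
                       predicted (e.g. lambda probs, a: probs.get(a, 0.0))
    Returns an empirical one-sided p-value: the fraction of action
    sequences sampled from the model itself that score no higher than the
    sequence we actually observed. A tiny p-value suggests the model does
    not explain the observed behaviour.
    """
    def total_score(actions):
        # Condition the model on the *observed* history prefix at each step
        return sum(score(model(observed_actions[:t]), a)
                   for t, a in enumerate(actions))

    observed_score = total_score(observed_actions)
    below = 0
    for _ in range(n_samples):
        sampled = []
        for t in range(len(observed_actions)):
            probs = model(observed_actions[:t])
            acts, weights = zip(*probs.items())
            sampled.append(rng.choices(acts, weights=weights)[0])
        if total_score(sampled) <= observed_score:
            below += 1
    return below / n_samples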
Stefano V. Albrecht, Jacob W. Crandall, Subramanian Ramamoorthy
An Empirical Study on the Practical Impact of Prior Beliefs over Policy Types
AAAI Conference on Artificial Intelligence, 2015
Abstract | BibTeX | arXiv | Appendix
AAAI, agent-modelling, ad-hoc-teamwork
Abstract:
Many multiagent applications require an agent to learn quickly how to interact with previously unknown other agents. To address this problem, researchers have studied learning algorithms which compute posterior beliefs over a hypothesised set of policies, based on the observed actions of the other agents. The posterior belief is complemented by the prior belief, which specifies the subjective likelihood of policies before any actions are observed. In this paper, we present the first comprehensive empirical study on the practical impact of prior beliefs over policies in repeated interactions. We show that prior beliefs can have a significant impact on the long-term performance of such methods, and that the magnitude of the impact depends on the depth of the planning horizon. Moreover, our results demonstrate that automatic methods can be used to compute prior beliefs with consistent performance effects. This indicates that prior beliefs could be eliminated as a manual parameter and instead be computed automatically.
@inproceedings{ albrecht2015empirical,
title = {An Empirical Study on the Practical Impact of Prior Beliefs over Policy Types},
author = {Stefano V. Albrecht and Jacob W. Crandall and Subramanian Ramamoorthy},
booktitle = {Proceedings of the 29th AAAI Conference on Artificial Intelligence},
pages = {1988--1994},
year = {2015}
}
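
For illustration, here are two priors of the kind such a study might compare: the uniform baseline and an automatically computed prior proportional to the payoff predicted for each type. The utility-proportional scheme is one plausible instance of an automatic method, not necessarily the paper's exact formulation.

def uniform_prior(type_ids):
    """Baseline: all hypothesised policy types equally likely a priori."""
    return {theta: 1.0 / len(type_ids) for theta in type_ids}

def value_prior(predicted_payoffs):
    """Prior proportional to the payoff predicted from best-responding to
    each type (an illustrative automatic scheme; payoffs assumed positive).

    predicted_payoffs : dict {type_id: estimated payoff against that type}
    """
    total = sum(predicted_payoffs.values())
    return {theta: v / total for theta, v in predicted_payoffs.items()}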
Stefano V. Albrecht, Jacob W. Crandall, Subramanian Ramamoorthy
E-HBA: Using Action Policies for Expert Advice and Agent Typification
AAAI Workshop on Multiagent Interaction without Prior Coordination, 2015
Abstract | BibTeX | arXiv | Appendix
AAAI, agent-modelling, ad-hoc-teamwork
Abstract:
Past research has studied two approaches to utilise predefined policy sets in repeated interactions: as experts, to dictate our own actions, and as types, to characterise the behaviour of other agents. In this work, we bring these complementary views together in the form of a novel meta-algorithm, called Expert-HBA (E-HBA), which can be applied to any expert algorithm that considers the average (or total) payoff an expert has yielded in the past. E-HBA gradually mixes the past payoff with a predicted future payoff, which is computed using the type-based characterisation. We present results from a comprehensive set of repeated matrix games, comparing the performance of several well-known expert algorithms with and without the aid of E-HBA. Our results show that E-HBA has the potential to significantly improve the performance of expert algorithms.
@inproceedings{ albrecht2015ehba,
title = {{E-HBA}: Using Action Policies for Expert Advice and Agent Typification},
author = {Stefano V. Albrecht and Jacob W. Crandall and Subramanian Ramamoorthy},
booktitle = {AAAI Workshop on Multiagent Interaction without Prior Coordination},
address = {Austin, Texas, USA},
month = {January},
year = {2015}
}
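
At its core, E-HBA blends the payoff an expert has yielded in the past with the payoff the type-based model predicts it will yield. A minimal sketch of that mixing rule; in the paper the mixing is gradual and tied to how well the hypothesised types explain the observed actions, whereas here the weight is left as a free parameter.

def ehba_mixed_payoff(past_avg, predicted, weight):
    """E-HBA-style payoff mixing (illustrative sketch).

    past_avg  : average (or total) payoff the expert has yielded so far
    predicted : future payoff forecast for the expert from the type-based
                characterisation of the other agents
    weight    : in [0, 1]; trust placed in the model-based prediction
    """
    return (1.0 - weight) * past_avg + weight * predicted

Any expert algorithm that ranks experts by their average (or total) payoff can then be run unchanged on the mixed values.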
2014
Stefano V. Albrecht, Subramanian Ramamoorthy
On Convergence and Optimality of Best-Response Learning with Policy Types in Multiagent Systems
Conference on Uncertainty in Artificial Intelligence, 2014
Abstract | BibTeX | arXiv | Appendix
UAI, agent-modelling
Abstract:
While many multiagent algorithms are designed for homogeneous systems (i.e. all agents are identical), there are important applications which require an agent to coordinate its actions without knowing a priori how the other agents behave. One method to make this problem feasible is to assume that the other agents draw their latent policy (or type) from a specific set, and that a domain expert could provide a specification of this set, albeit only a partially correct one. Algorithms have been proposed by several researchers to compute posterior beliefs over such policy libraries, which can then be used to determine optimal actions. In this paper, we provide theoretical guidance on two central design parameters of this method: Firstly, it is important that the user choose a posterior which can learn the true distribution of latent types, as otherwise suboptimal actions may be chosen. We analyse convergence properties of two existing posterior formulations and propose a new posterior which can learn correlated distributions. Secondly, since the types are provided by an expert, they may be inaccurate in the sense that they do not predict the agents’ observed actions. We provide a novel characterisation of optimality which allows experts to use efficient model checking algorithms to verify optimality of types.
@inproceedings{ albrecht2014convergence,
title = {On Convergence and Optimality of Best-Response Learning with Policy Types in Multiagent Systems},
author = {Stefano V. Albrecht and Subramanian Ramamoorthy},
booktitle = {Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence},
pages = {12--21},
year = {2014}
}
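
In simplified notation, the two existing posterior formulations analysed in the paper can be restated as follows (a sketch; H^t denotes the interaction history up to time t, \theta_j a hypothesised type of agent j, and \pi_j the action probability that type assigns):

P_{\text{product}}(\theta_j \mid H^t) \propto P(\theta_j) \prod_{\tau=0}^{t-1} \pi_j(a_j^\tau \mid H^\tau, \theta_j)

P_{\text{sum}}(\theta_j \mid H^t) \propto P(\theta_j) \sum_{\tau=0}^{t-1} \pi_j(a_j^\tau \mid H^\tau, \theta_j)

The product posterior is the standard Bayesian update; the sum posterior aggregates evidence additively and is more forgiving of individual mispredicted actions. The paper's convergence analysis asks when such posteriors learn the true distribution over latent types.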
2013
Stefano V. Albrecht, Subramanian Ramamoorthy
A Game-Theoretic Model and Best-Response Learning Method for Ad Hoc Coordination in Multiagent Systems
International Conference on Autonomous Agents and Multiagent Systems, 2013
Abstract | BibTeX | arXiv (full technical report) | Extended Abstract
AAMAS, ad-hoc-teamwork, agent-modelling
Abstract:
The ad hoc coordination problem is to design an autonomous agent which is able to achieve optimal flexibility and efficiency in a multiagent system with no mechanisms for prior coordination. We conceptualise this problem formally using a game-theoretic model, called the stochastic Bayesian game, in which the behaviour of a player is determined by its private information, or type. Based on this model, we derive a solution, called Harsanyi-Bellman Ad Hoc Coordination (HBA), which utilises the concept of Bayesian Nash equilibrium in a planning procedure to find optimal actions in the sense of Bellman optimal control. We evaluate HBA in a multiagent logistics domain called level-based foraging, showing that it achieves higher flexibility and efficiency than several alternative algorithms. We also report on a human-machine experiment at a public science exhibition in which the human participants played repeated Prisoner's Dilemma and Rock-Paper-Scissors against HBA and alternative algorithms, showing that HBA achieves equal efficiency and a significantly higher welfare and winning rate.
@inproceedings{ albrecht2013game,
title = {A Game-Theoretic Model and Best-Response Learning Method for Ad Hoc Coordination in Multiagent Systems},
author = {Stefano V. Albrecht and Subramanian Ramamoorthy},
booktitle = {Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems},
address = {St. Paul, Minnesota, USA},
month = {May},
year = {2013}
}
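
HBA's action selection reduces to expected-payoff maximisation under the posterior belief over types. A minimal sketch, assuming the per-type payoffs come from some planning procedure (in the paper, a Bellman-style computation); the interface and names below are illustrative.

def hba_action(actions, belief, expected_payoff):
    """HBA-style action selection (illustrative sketch).

    actions         : iterable of our available actions
    belief          : dict {type_id: posterior probability} over the other
                      agents' hypothesised types
    expected_payoff : callable(action, type_id) -> expected long-term payoff
                      of `action` if the others follow that type
    """
    def value(a):
        # Expected payoff of `a`, weighting each type by the posterior belief
        return sum(p * expected_payoff(a, theta) for theta, p in belief.items())
    return max(actions, key=value)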