Research

Research in the group is organised into the following strands:

Decision Making and Modelling Other Agents

Our long-term goal is to create autonomous agents capable of robust goal-directed interaction with other agents, with a particular focus on problems of ad hoc teamwork, which require fast and effective adaptation without opportunities for prior coordination between agents. We develop algorithms that enable an agent to reason about the capabilities, behaviours, and composition of other agents from limited observations, and to combine such inference with reinforcement learning and planning techniques for effective decision making.

Recent publications:
Autonomous Agents Modelling Other Agents: A Comprehensive Survey and Open Problems
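The kind of reasoning described above can be illustrated with a minimal sketch of type-based agent modelling: the observing agent maintains a belief over a small set of hypothesised behaviour types for another agent and updates it from observed actions. The type names, prior, and likelihoods below are illustrative assumptions, not taken from the group's work.

```python
# Minimal sketch: Bayesian belief update over hypothesised agent types.
# All names and numbers here are illustrative assumptions.

def update_beliefs(beliefs, likelihoods):
    """Bayesian update: P(type | action) is proportional to P(action | type) * P(type)."""
    posterior = {t: beliefs[t] * likelihoods[t] for t in beliefs}
    total = sum(posterior.values())
    return {t: p / total for t, p in posterior.items()}

# Uniform prior over two hypothesised types of teammate behaviour.
beliefs = {"cooperative": 0.5, "selfish": 0.5}

# The observed action was far more likely under the "cooperative" type.
likelihoods = {"cooperative": 0.8, "selfish": 0.2}

beliefs = update_beliefs(beliefs, likelihoods)
print(beliefs)  # posterior concentrates on "cooperative"
```

The resulting posterior can then condition a planner or a learned policy on the most likely teammate types, which is the combination of inference and decision making referred to above.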

Autonomous Driving in Urban Environments

We develop algorithms for autonomous driving in challenging urban environments, enabling autonomous vehicles to make fast, robust, and safe decisions by reasoning about the actions and intent of other actors in the environment. Research questions include: how to perform complex state estimation; how to efficiently infer intent from limited data; and how to compute robust plans that meet specified safety requirements under dynamic, uncertain observations and a limited compute budget. This work is carried out in collaboration with FiveAI.
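To make the idea of planning under intent uncertainty concrete, here is a minimal sketch in which the other vehicle's intent is summarised by a probability over two manoeuvres and the ego vehicle picks the action with lowest expected cost. The manoeuvre names, probabilities, and costs are illustrative assumptions, not taken from the actual system.

```python
# Minimal sketch: expected-cost decision making under intent uncertainty.
# All names and numbers here are illustrative assumptions.

# P(intent) for the other vehicle, e.g. inferred from its recent trajectory.
intent_probs = {"yield": 0.7, "go": 0.3}

# cost[own_action][other_intent]: lower is better; unsafe outcomes are heavily penalised.
cost = {
    "proceed": {"yield": 1.0, "go": 100.0},  # proceeding is dangerous if the other vehicle goes
    "wait":    {"yield": 5.0, "go": 5.0},    # waiting is safe but slow
}

def expected_cost(action):
    """Expected cost of an ego action, marginalising over the other vehicle's intent."""
    return sum(p * cost[action][intent] for intent, p in intent_probs.items())

best = min(cost, key=expected_cost)
print(best, expected_cost(best))  # → wait 5.0
```

Even though "yield" is the more likely intent, the large penalty on the unsafe outcome makes waiting the robust choice; richer versions of this trade-off, over trajectories rather than single actions, are what the planning research above addresses.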

Multi-Agent Reinforcement Learning

Cooperative decentralised learning of optimally coordinated policies and communication is a long-standing open problem. We tackle it by developing algorithms for multi-agent deep reinforcement learning, in which multiple agents concurrently learn how to communicate and (inter-)act optimally to achieve a specified goal. While deep RL has enabled scaling to large state spaces, multi-agent RL aims to scale efficiently in the number of agents, where the joint decision space would otherwise be intractable for centralised approaches.

Recent publications:
Dealing with Non-Stationarity in Multi-Agent Deep Reinforcement Learning
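A minimal sketch of decentralised concurrent learning, assuming a one-shot cooperative matrix game: each agent runs independent Q-learning, observing only its own action and the shared reward. The game, payoffs, and hyperparameters are illustrative assumptions, not a specific method from the group's publications.

```python
# Minimal sketch: two independent Q-learners in a cooperative matrix game.
# The game and hyperparameters are illustrative assumptions.
import random

random.seed(0)

ACTIONS = [0, 1]
ALPHA, EPS = 0.1, 0.1  # learning rate and exploration rate

def reward(a1, a2):
    """Shared reward: the agents score only when their actions match."""
    return 1.0 if a1 == a2 else 0.0

def act(q):
    """Epsilon-greedy action selection from an agent's own value estimates."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(q, key=q.get)

q1 = {a: 0.0 for a in ACTIONS}  # agent 1's independent value estimates
q2 = {a: 0.0 for a in ACTIONS}  # agent 2's independent value estimates

for _ in range(2000):
    a1, a2 = act(q1), act(q2)
    r = reward(a1, a2)
    # Each agent updates as if the environment were stationary, even though
    # the other agent is learning too -- the non-stationarity challenge
    # that motivates dedicated multi-agent algorithms.
    q1[a1] += ALPHA * (r - q1[a1])
    q2[a2] += ALPHA * (r - q2[a2])

print(max(q1, key=q1.get), max(q2, key=q2.get))  # the two greedy actions should coincide
```

Because each learner treats the other as part of a fixed environment, convergence is not guaranteed in general; this toy example coordinates on a matching action, but the non-stationarity it glosses over is exactly the problem studied in the survey cited above.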

Quantum-Safe Authentication and Key Agreement

Classical protocols for authentication and key establishment that rely on public-key cryptography are known to be vulnerable to quantum computing. We are developing a novel, quantum-resistant approach to authentication and key agreement based on the complexity of interaction in multi-agent systems, supporting mutual and group authentication as well as forward secrecy. We leverage recent progress in generative adversarial training and multi-agent reinforcement learning to harden our system against intruders and modelling attacks.

Recent publications:
Stabilizing Generative Adversarial Network Training: A Survey