Decision Making and Modelling Other Agents
Our long-term goal is to create autonomous agents capable of robust goal-directed interaction with other agents, with a particular focus on problems of "ad hoc" teamwork, which require fast and effective adaptation without opportunities for prior coordination between agents. We develop algorithms that enable an agent to reason about the capabilities, behaviours, and composition of other agents from limited observations, and to use such inferences in combination with reinforcement learning and planning techniques for effective decision making.
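One common way to reason about other agents from limited observations is type-based inference: maintain a posterior over a set of hypothesised behaviour "types" and update it Bayesially from observed actions. The sketch below is purely illustrative; the type names and policies are assumptions, not part of any specific system described here.

```python
# A minimal sketch of type-based reasoning about another agent: keep a
# posterior over hypothesised behaviour types and update it from each
# observed action. Types and their policies are illustrative assumptions.
ACTIONS = ["cooperate", "defect"]

# Each hypothesised type is a fixed policy: a distribution over actions.
TYPES = {
    "cooperator": {"cooperate": 0.9, "defect": 0.1},
    "defector":   {"cooperate": 0.1, "defect": 0.9},
    "random":     {"cooperate": 0.5, "defect": 0.5},
}

def update_belief(belief, observed_action):
    """One Bayesian update: P(type | action) ∝ P(action | type) * P(type)."""
    posterior = {t: belief[t] * policy[observed_action]
                 for t, policy in TYPES.items()}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

# Start from a uniform prior, then observe three cooperative moves.
belief = {t: 1.0 / len(TYPES) for t in TYPES}
for a in ["cooperate", "cooperate", "cooperate"]:
    belief = update_belief(belief, a)

best = max(belief, key=belief.get)  # most probable type given the evidence
```

The resulting belief can then feed into planning, for example by computing a best response against the expected policy under the posterior.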
Autonomous Driving in Urban Environments
We develop algorithms for autonomous driving in challenging urban environments, enabling autonomous vehicles to make fast, robust, and safe decisions by reasoning about the actions and intent of other actors in the environment. Research questions include: how to perform complex state estimation; how to efficiently reason about intent from limited data; and how to compute robust plans that meet specified safety requirements under dynamic, uncertain observations and a limited compute budget. This work is in collaboration with Five AI.
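Goal-based intent inference is often framed as inverse planning: a candidate goal is more probable the more rational the observed trajectory looks for that goal. The sketch below uses a Boltzmann model over path costs; the goals, costs, and rationality parameter are illustrative assumptions rather than the method of any particular system.

```python
import math

def goal_posterior(extra_costs, beta=1.0):
    """P(goal | trajectory) ∝ exp(-beta * extra_cost).

    extra_costs[g]: how much costlier the observed partial trajectory is
    than the optimal path to goal g (0 = perfectly rational for g).
    beta: rationality parameter; higher beta assumes a more rational driver.
    """
    weights = {g: math.exp(-beta * c) for g, c in extra_costs.items()}
    z = sum(weights.values())
    return {g: w / z for g, w in weights.items()}

# Illustrative scenario: the observed vehicle has drifted into the
# right-turn lane, so its trajectory is near-optimal for "turn_right"
# but costly for the alternatives.
posterior = goal_posterior({"turn_right": 0.1, "straight": 2.0, "turn_left": 4.0})
likely_goal = max(posterior, key=posterior.get)
```

A planner can then condition its own predictions and plans on this goal distribution rather than on a single point estimate of the other vehicle's intent.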
Interpretable Goal-based Prediction and Planning for Autonomous Driving
GRIT: Fast, Interpretable, and Verifiable Goal Recognition with Learned Decision Trees for Autonomous Driving
Multi-Agent Reinforcement Learning
Decentralised learning of coordinated agent policies and inter-agent communication in multi-agent systems is a long-standing open problem. We tackle this problem by developing algorithms for multi-agent deep reinforcement learning, in which multiple agents learn how to communicate and (inter-)act optimally to achieve a specified goal. While deep RL has enabled scaling to large state spaces, multi-agent RL aims to scale efficiently in the number of agents, whose joint decision space would be intractable for centralised approaches.
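The core difficulty of decentralised learning can be illustrated with two independent Q-learners in a repeated coordination game: each agent sees only its own action and the shared reward, yet the pair must converge on one of two equally good joint actions. This is a minimal sketch with illustrative payoffs and hyperparameters, not any specific algorithm from our work.

```python
import random

# Two independent stateless Q-learners in a repeated coordination game.
# Both agents receive reward 1 only if they choose the same action.
random.seed(0)

PAYOFF = {("A", "A"): 1.0, ("B", "B"): 1.0,
          ("A", "B"): 0.0, ("B", "A"): 0.0}
ACTIONS = ["A", "B"]

q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]  # one Q-table per agent
alpha, epsilon = 0.1, 0.2  # learning rate, exploration rate (illustrative)

for step in range(2000):
    acts = []
    for i in range(2):
        if random.random() < epsilon:
            acts.append(random.choice(ACTIONS))  # explore
        else:
            acts.append(max(q[i], key=q[i].get))  # exploit own Q-values
    r = PAYOFF[(acts[0], acts[1])]
    for i in range(2):
        # Each agent updates only its own table from the shared reward.
        q[i][acts[i]] += alpha * (r - q[i][acts[i]])

# Greedy joint policy after learning: ideally a coordinated equilibrium.
joint = tuple(max(q[i], key=q[i].get) for i in range(2))
```

Even in this tiny game the environment is non-stationary from each learner's perspective, since the reward for an action depends on the other agent's evolving policy; this is the central challenge that multi-agent RL algorithms must address at scale.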
Quantum-Secure Authentication and Key Agreement
Classical protocols for authentication and key establishment that rely on public-key cryptography are known to be vulnerable to quantum computing. We develop a novel, quantum-resistant approach to authentication and key agreement based on the complexity of interaction in multi-agent systems, supporting mutual and group authentication as well as forward secrecy. We leverage recent progress in generative adversarial training and multi-agent reinforcement learning to maximise the security of our system against intruders and modelling attacks.