Dr. Stefano V. Albrecht

Lecturer (Assistant Professor) in Artificial Intelligence

Research Group Leader

Postdoctoral Researchers

Josiah Hanna

PhD Research Students

Muhammad Arrasy Rahman (entered Sep 2018)

MSc Data Science, University of Edinburgh, 2017; BSc Computer Science, Universitas Indonesia, 2015

Project: Ad Hoc Teamwork in Open Multi-Agent Systems using Graph Neural Networks

Many real-world problems require an agent to achieve specific goals by interacting with other agents without predefined coordination protocols. Prior work on ad hoc teamwork has focused on multi-agent systems in which the number of agents is assumed to be fixed. My project focuses on using Graph Neural Networks (GNNs) to handle interaction data from a varying number of agents. We explore combining GNNs with reinforcement learning techniques to implement agents that can perform well in teams with dynamic composition.
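
The core idea, encoding each teammate with shared weights and then pooling so the result does not depend on team size or ordering, can be sketched in a few lines. The function name, the single-layer encoders, and all dimensions below are illustrative assumptions, not the project's actual model:

```python
import numpy as np

def relational_embedding(own_obs, teammate_obs, w_node, w_agg):
    """Embed a variable-size set of teammates into a fixed-size vector.

    Each teammate observation is encoded independently with shared
    weights, then mean-pooled, so the result is invariant to the number
    and ordering of teammates. The fixed-size output can feed a standard
    RL policy network.
    """
    nodes = np.tanh(teammate_obs @ w_node)     # per-teammate encoding (shared weights)
    pooled = nodes.mean(axis=0)                # permutation-invariant pooling
    joint = np.concatenate([own_obs, pooled])  # fixed-size vector, any team size
    return np.tanh(joint @ w_agg)
```

Because the pooling is a mean over teammates, the same weights handle teams of three or thirty agents, which is what makes a GNN-style encoder attractive for open multi-agent systems.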

Filippos Christianos (entered Sep 2018)

Diploma in Electronic and Computer Engineering, Technical University of Crete, 2017

Project: Coordinated Exploration in Multi-Agent Deep Reinforcement Learning

In the increasingly large state spaces encountered in deep reinforcement learning, exploration plays a critical role by narrowing down the search for an optimal policy. In multi-agent settings, the joint action space also grows exponentially with the number of agents, further complicating the search. Using a partially centralised policy while exploring can coordinate the agents' exploration and more easily locate promising, even decentralised, policies. In this project, we investigate how coordinating agents during the exploration phase can improve the performance of deep reinforcement learning algorithms.
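
As a toy illustration of partially centralised exploration, agents could mix a shared centralised policy into their otherwise independent action selection. The function names and the epsilon-mixing scheme below are illustrative assumptions, not the project's algorithm:

```python
import numpy as np

def partially_centralised_step(rng, eps, central_policy, agent_policies, obs):
    """With probability eps, all agents act from one shared (centralised)
    exploration policy over the joint action space, producing a coordinated
    joint action; otherwise each agent samples independently from its own
    decentralised policy. A minimal sketch of the idea only."""
    if rng.random() < eps:
        return central_policy(rng, obs)                        # coordinated joint action
    return [pi(rng, o) for pi, o in zip(agent_policies, obs)]  # independent actions
```

The centralised policy sees all agents' observations at once, so during exploration it can steer the team toward jointly promising regions that independent random exploration would rarely reach.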

Georgios Papoudakis (entered Sep 2018)

Diploma in Electrical and Computer Engineering, Aristotle University of Thessaloniki, 2017

Project: Modelling in Multi-Agent Systems Using Representation Learning

Multi-agent systems in partially observable environments face many challenging problems which traditional reinforcement learning algorithms fail to address. Agents have to deal with the lack of information about the environment's state and the opponents' beliefs and goals. A promising research direction is to learn models of the other agents to better understand their interactions. This project will investigate representation learning for opponent modelling in order to improve learning in multi-agent systems.

Ibrahim Ahmed (entered Sep 2018)

MS in Computer Science, UC Davis, 2018; BS in Computer Science, UC Davis, 2016

Project: Quantum-Secure Authentication and Key Agreement via Abstract Multi-Agent Interaction

Authentication and key establishment are the foundation of secure communication over computer networks. However, modern protocols that rely on public-key cryptography for secure communication are vulnerable to attacks based on quantum technology. My project studies a novel quantum-safe method for authentication and key establishment based on abstract multi-agent interaction, introducing multi-agent techniques for optimisation and rational decision-making to these fields.

Cillian Brewitt (entered Jan 2019)

MSc Artificial Intelligence, University of Edinburgh, 2017; BE Electrical and Electronic Engineering, University College Cork, 2016

Project: Interpretable Planning and Prediction for Autonomous Vehicles

Accurately predicting the intentions and actions of other road users, and then using this information during motion planning, is an important task in the field of autonomous driving. It is desirable for planning and prediction methods to be fast, accurate, interpretable, and verifiable; however, current methods fail to achieve all of these objectives. This project will investigate novel methods for prediction and planning that satisfy these objectives. My current focus is on how decision trees can be used for vehicle goal recognition.
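
A toy example shows why decision trees are attractive here: a goal-recognition rule set over hand-crafted trajectory features can be read and verified by a human. The features and goal labels below are illustrative assumptions, not the project's actual model:

```python
def recognise_goal(indicating_left, in_left_lane, decelerating):
    """Toy decision tree mapping features of an observed vehicle's
    trajectory to a goal label. Every prediction is justified by the
    chain of rules it followed, which is what makes tree-based goal
    recognition interpretable and amenable to verification."""
    if indicating_left:
        return "turn-left" if decelerating else "change-lane-left"
    if in_left_lane and decelerating:
        return "turn-left"
    return "continue"
```

Contrast this with a neural network: the tree's prediction for any input can be explained as a short conjunction of human-readable conditions.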

Elliot Fosong (entered Sep 2019)

BA & MEng in Engineering, University of Cambridge, 2019

Project: Model Criticism in Multi-Agent Systems

Agents operating in multi-agent systems often need to predict and reason about the behaviour of other agents, and candidate models of this behaviour are informed by observations. It is desirable to give agents a way to assess the truth and usefulness of such models, which may dictate how confidently an agent should act or inform exploration strategies during learning. This project will develop a principled model criticism framework and examine the theoretical guarantees it provides.

Lukas Schäfer (entered Dec 2019)

MSc Informatics, University of Edinburgh, 2019; BSc Computer Science, Saarland University, 2018

Project: Collaborative Exploration in Multi-Agent Reinforcement Learning using Intrinsic Curiosity

The challenge of multi-agent reinforcement learning is largely defined by the non-stationarity and credit assignment problems introduced by multiple agents acting concurrently. To learn effective behaviour in such environments, efficient exploration techniques beyond simple randomised policies are required. This project will investigate novel methods with a particular focus on intrinsic rewards as exploration incentives. Such self-assigned rewards serve as additional feedback to motivate guided exploration, which could enable collaborative behaviour in multi-agent systems.
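
One common form of such a self-assigned reward is a prediction-error bonus: the agent is rewarded for visiting transitions that its own forward model predicts poorly, so the bonus decays as parts of the environment become familiar. A minimal single-agent sketch follows; the linear forward model, the class name, and the parameters are illustrative assumptions, not the project's method:

```python
import numpy as np

class PredictionErrorCuriosity:
    """Sketch of an intrinsic reward: a simple linear forward model
    predicts the next observation, and the agent receives the squared
    prediction error as a curiosity bonus on top of the extrinsic reward.
    The bonus is large in unfamiliar regions and shrinks with experience."""

    def __init__(self, obs_dim, lr=0.01, beta=0.1):
        self.w = np.zeros((obs_dim, obs_dim))  # linear forward model
        self.lr = lr                           # model learning rate
        self.beta = beta                       # weight of the intrinsic bonus

    def reward(self, obs, next_obs, extrinsic):
        pred = obs @ self.w
        error = next_obs - pred
        intrinsic = float(np.mean(error ** 2))    # curiosity bonus
        self.w += self.lr * np.outer(obs, error)  # online gradient step on the model
        return extrinsic + self.beta * intrinsic
```

Repeatedly visiting the same transition drives the bonus toward zero, so the shaped reward converges back to the extrinsic reward once a region is well explored.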

Mhairi Dunion (entered Sep 2020)

BSc (Hons) Mathematics, University of Edinburgh, 2013

Project: Causality in Deep Reinforcement Learning

A challenge of deep reinforcement learning is that agents often fail to generalise to unseen tasks with the same underlying dynamics because they overfit to the training task. In practice, it is common to train with random initialisations of all environment variables to maximise the variety of tasks seen during training, which is neither pragmatic nor sample efficient. This project will investigate novel methods that combine causal inference techniques with deep reinforcement learning to improve generalisation to unseen tasks, since causal relationships remain invariant when the task changes.

Shangmin (Shawn) Guo (entered Sep 2020)

MSc Data Science, University of Edinburgh, 2019; BE Computer Science, China National University of Defense Technology, 2014

Project: Emergent Languages in Multi-Agent Systems via Reinforcement Learning and Task Transfer

This project aims to use deep reinforcement learning and iterated learning methods to explore how languages that emerge in multi-agent systems can help transfer knowledge across different environments. The goal is to improve the sample efficiency of state-of-the-art DRL models across environments and to explore the effect of multi-task training on the evolution of languages. The project could facilitate the emergence of communication protocols that drastically improve the collaboration performance and efficiency of multi-agent systems in tasks where individual agents encounter different environments.

Samuel Garcin (entered Sep 2021)

MEng in Aeronautical Engineering, Imperial College London, 2018

Project: Cooperative Multi-Agent Reinforcement Learning for Sequential Hierarchical Tasks

Future robots are expected to collaborate to solve complex tasks. In the real world, these tasks are often composed of sequences and hierarchies of smaller tasks. A key challenge for multi-agent reinforcement learning frameworks addressing this class of problems is credit assignment, as potentially delayed rewards need to be attributed to the correct agent. This project will investigate how task representation learning can be used by agents to correctly predict and attribute rewards during training, and how it may enable agents to form long-term collaborative plans that take into account the tasks being pursued by other agents.

Bálint Gyevnár (entered Sep 2021)

MInf Informatics, University of Edinburgh, 2021

Project: Explainable Decision-Making for Autonomous Vehicles through Natural Language Conversations

Popular modern learning algorithms usually produce black-box models whose decisions can be hard to trust, difficult to debug, tedious to justify, and hard to evaluate comprehensively, and which may encode biases. These properties can be prohibitive in safety-critical applications such as autonomous driving. Instead, we could use explainable systems that balance the social expectations and cognitive models of their users against the completeness and soundness of their explanations of behaviour, with the goal of improving understanding of, and trust in, the systems themselves. This project aims to achieve this for autonomous vehicles by integrating interpretable models of decision-making, such as IGP2, with natural language processing methods and cognitive models, creating an intelligent, conversational vehicle that provides clear and transparent explanations of its driving behaviour to a human user.