Our Policy for Reproducibility & Transparency in Research

Author: Stefano Albrecht

Date: 2024-03-18

The ability to reproduce research results and transparency about research data are crucial pillars of science. Yet scientists often point to limited reproducibility and transparency in their research areas, and AI and machine learning (ML) research is no exception, as a number of recent headlines attest.

In response to issues with reproducibility in ML research, there have even been dedicated "ML Reproducibility Challenges" running for multiple years, where researchers could submit reports documenting their attempts to reproduce research published at top ML conferences and journals.

To avoid being part of the problem, our research group has adopted an action policy to ensure reproducibility and transparency in our research. In summary, every published paper must:

  1. detail the experimental setup to enable reproduction of experiments and reported results (e.g. algorithm and architecture hyper-parameter values, environment parameters, dependencies, etc.);
  2. provide access to implementation code used for the experiments along with documentation and instructions for running experiments;
  3. provide access to datasets and results data reported in the paper along with documentation for how to use the data.

Besides ensuring reproducibility, making implementation code available has important additional benefits. If the code is readily available, other researchers can more easily use your algorithms as comparison baselines or as the basis for new algorithms, and are therefore more likely to do so, which increases the impact of your work. It also gives a level of "protection": other researchers will use a correct implementation of your algorithm rather than a buggy re-implementation of their own that could lead to incorrect results. Lastly, some readers may find it easier to understand your methods by reading your code alongside your paper.

Releasing the results data allows other researchers to run hypothesis tests to verify your claims and to analyse your data further. This also includes any static datasets you used to train your models (such as for imitation learning or offline RL). However, uploading such data can be tricky if the files are very large, which is often the case in ML/RL research. Some universities provide a data-sharing service through which affiliated researchers can create a data repository and upload large amounts of data (e.g. hundreds of gigabytes). For example, we use the DataShare service provided by the University of Edinburgh. This service "freezes" each data repository and gives it a permanent public URL, so users can be certain that the data remain in their original state and cannot be changed at a later time. There are also other free data-sharing services on the web, such as UK Data Service, OpenML, Hugging Face, and others.
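As an illustration of the kind of verification that released results data enables, the sketch below compares the final returns of two algorithms with a Welch's t-test. The data here are synthesised so the snippet runs on its own; in practice the two arrays would be loaded from the released files. All names and numbers are illustrative assumptions, not part of our policy.

```python
# Minimal sketch: check a reported performance difference using released
# results data. In practice, load the arrays from the released files
# (e.g. np.loadtxt("returns_ours.csv")); here they are synthesised so the
# snippet is self-contained.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ours = rng.normal(loc=105.0, scale=10.0, size=10)      # final return per seed
baseline = rng.normal(loc=100.0, scale=10.0, size=10)  # final return per seed

# Welch's t-test does not assume equal variances across the two groups.
t_stat, p_value = stats.ttest_ind(ours, baseline, equal_var=False)

print(f"mean (ours):     {ours.mean():.2f}")
print(f"mean (baseline): {baseline.mean():.2f}")
print(f"Welch's t-test:  t={t_stat:.3f}, p={p_value:.4f}")
```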

The action steps in our reproducibility policy are listed below in the form of a checklist that can be used by researchers in the field.

Give full details of the experimental setup
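One lightweight way to carry out this step is to dump the full configuration of every run to a file stored alongside the results. The sketch below is a minimal example; the field names, values, and file name are illustrative assumptions, not a prescribed format.

```python
# Minimal sketch: dump the full experimental configuration of a run to disk.
# Field names and values are illustrative only.
import json
import platform
import sys

config = {
    "algorithm": "dqn",
    "seed": 42,
    "learning_rate": 3e-4,
    "batch_size": 64,
    "network": {"hidden_layers": [256, 256], "activation": "relu"},
    "environment": {"name": "CartPole-v1", "max_episode_steps": 500},
    # Record the software environment so others can recreate it exactly.
    "python_version": sys.version,
    "platform": platform.platform(),
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```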

Upload the code
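Alongside the code itself, it helps to provide a single documented entry point so that the exact command for each experiment can be quoted in the instructions. A minimal sketch, with hypothetical script and argument names:

```python
# Minimal sketch of a documented entry point, so that a single command such as
#   python run_experiment.py --algorithm dqn --seed 42
# can be quoted in the instructions to reproduce a run. Names are illustrative.
import argparse
import random


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Run one experiment with a fixed random seed."
    )
    parser.add_argument("--algorithm", default="dqn", help="algorithm to run")
    parser.add_argument("--seed", type=int, default=0, help="random seed")
    args = parser.parse_args()

    # Fixing all random seeds is a prerequisite for reproducible runs.
    random.seed(args.seed)

    print(f"Running {args.algorithm} with seed {args.seed} ...")
    # ... training loop would go here ...


if __name__ == "__main__":
    main()
```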

Upload the data
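When uploading results data, it helps if the files describe themselves. The sketch below stores a results array together with a short description of its layout; the array names, shapes, and file name are assumptions for illustration, not a prescribed format.

```python
# Minimal sketch: save results together with a description of what each
# array contains, so the uploaded data is self-documenting. Names and
# shapes are illustrative examples only.
import numpy as np

# Stand-in for real results: mean episodic return per seed per evaluation.
returns = np.random.default_rng(0).normal(loc=100.0, scale=10.0, size=(5, 200))

np.savez_compressed(
    "results.npz",
    returns=returns,  # shape: (num_seeds, num_evaluations)
    readme=np.array(
        "returns[i, j] = mean episodic return of seed i at evaluation j; "
        "evaluations every 10k environment steps."
    ),
)

# Loading it back:
data = np.load("results.npz")
print(str(data["readme"]))
print(data["returns"].shape)
```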

Further reading

  1. Odd Erik Gundersen, Yolanda Gil, David W. Aha: On Reproducible AI: Towards Reproducible Research, Open Science, and Digital Scholarship in AI Publications. AI Magazine, 2018.