About me

I am a PhD student in the PCOG group at the University of Luxembourg. My current research interests are Multi-Objective Optimization, Reinforcement Learning, and Swarm Applications. I am working on the ADARS project under the supervision of Grégoire Danoy. Because I believe in open-source AI, I also contribute to the Farama Foundation, a non-profit organization that facilitates the development of open-source tools for Reinforcement Learning.

I am convinced that specifying AI problems as single-objective is often not enough, since real-life decisions usually involve compromises between conflicting goals. Thus, I dedicate a substantial part of my time to research on Multi-Objective Reinforcement Learning (MORL).

The consequences of learning multiple behaviours based on different trade-offs between the objectives.

Before my PhD, I specialized in combinatorial optimization techniques such as constraint programming, local search, and meta-heuristic algorithms. I worked for a few years in industry on supply chain optimization and planning problems, and was exposed to various software engineering challenges.

Aside from the cool things mentioned above, I am into cinema, cycling, and beers (yes, I am Belgian).

Left: the Epuck robots, some of our toys in ADARS. Right: the Crazyflie robots in action.

Most significant publications

  • Multi-Objective Reinforcement Learning Based on Decomposition: A Taxonomy and Framework: JAIR
  • Toolkit for MORL: NeurIPS 2023

Open source

A toolkit for reliable research in MORL

We built a few repositories that help researchers reproduce results of existing MORL algorithms and that facilitate the whole research process by providing clean implementations and examples. By making these public, we hope to attract even more people to the MORL field and remove boilerplate from the research process. A paper describing the whole toolkit was published at NeurIPS 2023.

MO-Gymnasium

MO-Gymnasium is a library containing multiple multi-objective RL environments. These environments all follow a standardized API, allowing you to test your algorithms on multiple benchmarks without changing your code. Since 2023, MO-Gymnasium has been part of the Farama Foundation suite, alongside other RL projects such as Gymnasium and PettingZoo.
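
As a minimal sketch of what this looks like in practice (the loop mirrors the standard Gymnasium API; the only difference is that the reward comes back as a vector, with one entry per objective):

```python
import mo_gymnasium as mo_gym

# Deep Sea Treasure: a classic two-objective benchmark (treasure value vs. time).
env = mo_gym.make("deep-sea-treasure-v0")

obs, info = env.reset(seed=42)
for _ in range(10):
    action = env.action_space.sample()
    # vector_reward is an array with one entry per objective,
    # instead of the single scalar returned by Gymnasium.
    obs, vector_reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```

Because the API mirrors Gymnasium, existing single-objective agents can still be reused by scalarizing the reward vector, for instance with the linear-utility wrapper shipped in the library.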

MORL-baselines

MORL-Baselines is a repository containing multiple MORL algorithms built on top of MO-Gymnasium. We aim to provide clean, reliable, and validated implementations, as well as tools to help in the development of such algorithms. Features include automated experiment tracking for reproducibility, hyperparameter optimization, multi-objective metrics, and more.
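
To give an idea of how the pieces fit together, here is a rough sketch of training one of the multi-policy algorithms on an MO-Gymnasium environment. The import path, class name, and train() arguments below are assumptions for illustration; the repository's documentation and examples show the exact API.

```python
import numpy as np
import mo_gymnasium as mo_gym

# Assumed import path and class name, for illustration only.
from morl_baselines.multi_policy.envelope.envelope import Envelope

env = mo_gym.make("minecart-v0")
eval_env = mo_gym.make("minecart-v0")

agent = Envelope(env)  # a multi-policy MORL algorithm from the repository
agent.train(
    total_timesteps=100_000,                 # argument names are assumptions
    eval_env=eval_env,
    ref_point=np.array([0.0, 0.0, -200.0]),  # reference point used for hypervolume
)
```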

Open RL Benchmark

OpenRLBenchmark is a comprehensive collection of tracked experiments for RL. It aims to make it easier for RL practitioners to pull and compare all kinds of metrics from reputable RL libraries like Stable-Baselines3, Tianshou, CleanRL, and, of course, MORL-Baselines 😎.
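
Since the tracked runs live on Weights & Biases, one way to pull metrics programmatically is through the wandb public API rather than OpenRLBenchmark's own tooling. A hedged sketch; the entity/project path and metric name below are assumptions, the project's README lists the real ones along with its dedicated CLI:

```python
import wandb

# Query tracked runs via the Weights & Biases public API.
# The "openrlbenchmark/cleanrl" path and metric name are assumptions for illustration.
api = wandb.Api()
runs = api.runs("openrlbenchmark/cleanrl", filters={"config.env_id": "Hopper-v4"})

for run in runs:
    history = run.history(keys=["charts/episodic_return"])
    print(run.name, history.tail(1))
```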

MOMAland

MOMAland is a standard API and suite of environments for multi-objective multi-agent RL (MOMARL). It is basically a multi-agent version of MO-Gymnasium, or a multi-objective version of PettingZoo 🙂. It is also integrated into the Farama suite.


A paper describing MOMAland is in progress.
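
The interaction loop follows PettingZoo's parallel API, except that each agent receives a reward vector instead of a scalar. A minimal sketch, where the environment module and its constructor are assumptions for illustration:

```python
# The environment module and constructor are assumptions for illustration;
# the actual environment names are listed in the MOMAland documentation.
from momaland.envs.beach import mobeach_v0

env = mobeach_v0.parallel_env()
observations, infos = env.reset(seed=42)

while env.agents:
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
    # rewards[agent] is a vector with one entry per objective,
    # rather than the scalar returned by PettingZoo.
```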

CrazyRL

CrazyRL is a MOMARL library built on a multi-objective extension of the PettingZoo API. It allows learning swarm behaviours in a variety of environments, such as the one shown on the left. We also implemented the full MOMARL loop on GPU using JAX, allowing agents to be trained about 2000x faster than when the environment runs on the CPU. Check out the video on YouTube.
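
The speed-up comes from keeping the whole loop on the GPU: when the environment step is written as a pure JAX function, it can be jit-compiled and vmapped over thousands of environment copies. The toy step function below is not CrazyRL's actual code, just a sketch of that principle:

```python
import jax
import jax.numpy as jnp

def step(state, action):
    # Hypothetical point-mass dynamics with a two-objective reward.
    new_state = state + 0.1 * action
    reward = jnp.stack([
        -jnp.linalg.norm(new_state),  # objective 1: get close to the origin
        -jnp.linalg.norm(action),     # objective 2: save energy
    ])
    return new_state, reward

# Compile one step over a whole batch of environments at once.
batched_step = jax.jit(jax.vmap(step))

key = jax.random.PRNGKey(0)
states = jax.random.normal(key, (4096, 2))       # 4096 parallel environments
actions = jax.random.normal(key, (4096, 2))
states, rewards = batched_step(states, actions)  # rewards: (4096, 2), one vector per env
```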