Large distributed infrastructures are pervasive in our society. Numerical simulations form the basis of computational sciences, and high performance computing infrastructures have become scientific instruments with roles similar to those of test tubes or telescopes. Cloud infrastructures are used by companies so intensely that even the shortest outage quickly incurs losses of several million dollars. But every citizen also relies on (and interacts with) such infrastructures via complex wireless mobile embedded devices whose nature is constantly evolving. In this way, the advent of digital miniaturization and interconnection has enabled our homes, power stations, cars and bikes to evolve into smart grids and smart transportation systems that should be optimized to fulfill societal expectations.
Our dependence on, and intense usage of, such gigantic systems obviously leads to very high expectations in terms of performance. Indeed, we strive for low-cost and energy-efficient systems that seamlessly adapt to changing environments that can only be accessed through uncertain measurements. Such digital systems also have to take into account both the users' profiles and expectations to efficiently and fairly share resources in an online way. Analyzing, designing and provisioning such systems has thus become a real challenge.
Such systems are characterized by their ever-growing size, their intrinsic heterogeneity and distributedness, their user-driven requirements, and an unpredictable variability that renders them essentially stochastic.
In such contexts, many of the former design and analysis
hypotheses (homogeneity, limited hierarchy, omniscient view,
optimization carried out by a single entity, open-loop
optimization, user outside of the picture) have become obsolete, which
calls for radically new approaches. Properly studying such systems
requires a drastic rethinking of fundamental aspects regarding the system's
observation (measure, trace, methodology, design of experiments),
analysis (modeling, simulation, trace analysis and visualization),
and optimization (distributed, online, stochastic).
The goal of the POLARIS project is to contribute to the understanding of the performance of very large scale
distributed systems by applying ideas from diverse research fields and application domains.
We believe that studying all these different aspects at once, without restricting ourselves to specific systems, is the key to pushing forward our understanding of such challenges and to proposing innovative solutions.
This is why we intend to investigate problems arising from application
domains as varied as large computing systems, wireless networks, smart
grids and transportation systems.
The members of the POLARIS project cover a very wide spectrum of expertise and have worked extensively on performance evaluation and modeling, distributed optimization, and the analysis of HPC middleware.
AI and machine learning are now everywhere. Let us clarify how our research activities are positioned with respect to this trend.
A first line of research in POLARIS is devoted to the use of statistical learning techniques (Bayesian inference) to model the expected performance of distributed systems, in order to build aggregated performance views, to feed simulators of such systems, or to detect anomalous behaviours.
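As a minimal illustration of this approach (a sketch with made-up data and variable names, not the team's actual tooling), one can place a conjugate Gaussian prior on the parameters of a simple linear performance model and update it as measurements arrive; the posterior predictive variance is then available to feed a simulator or to flag anomalous measurements:

```python
import numpy as np

# Hypothetical example: predict the duration of a computation kernel as an
# affine function of its input size, with a conjugate Gaussian prior on the
# coefficients and known observation noise. This is only a sketch of the
# kind of Bayesian performance model mentioned above.

def bayesian_update(X, y, noise_var=0.01, prior_var=10.0):
    """Posterior mean/covariance of w for y ~ N(X w, noise_var * I)."""
    d = X.shape[1]
    prior_prec = np.eye(d) / prior_var
    post_cov = np.linalg.inv(prior_prec + X.T @ X / noise_var)
    post_mean = post_cov @ (X.T @ y) / noise_var
    return post_mean, post_cov

sizes = np.array([1e6, 2e6, 4e6, 8e6])           # input sizes (bytes)
durations = np.array([0.11, 0.19, 0.42, 0.78])   # measured times (s)
X = np.column_stack([np.ones_like(sizes), sizes / 1e6])
w_mean, w_cov = bayesian_update(X, durations)

# Predictive mean and variance for a new input size: the variance is what a
# simulator or anomaly detector can use to assess unexpected measurements.
x_new = np.array([1.0, 16.0])
pred_mean = x_new @ w_mean
pred_var = x_new @ w_cov @ x_new + 0.01
print(pred_mean, pred_var)
```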
In a distributed context, it is also essential to design systems that can seamlessly adapt to the workload and to the evolving behaviour of their components (users, resources, network). Obtaining faithful information on the dynamics of the system can be particularly difficult, which is why it is generally more efficient to design systems that dynamically learn the best actions to play through trial and error. A key characteristic of the work in the POLARIS project is to regularly leverage game-theoretic modeling to handle situations where resources or decisions are distributed among several agents, or even situations where a centralised decision maker has to adapt to strategic users.
An important research direction in POLARIS is thus centered on reinforcement learning (Multi-armed bandits, Q-learning, online learning) and active learning in environments with one or several of the following features:
- Limited feedback (e.g., gradients or even stochastic gradients are not available, which requires, for example, resorting to stochastic approximation);
- Multi-agent settings where each agent learns, possibly not in a synchronised way (i.e., decisions may be taken asynchronously, which raises convergence issues);
- Delayed feedback (where one must avoid oscillations and quantify the degradation of convergence);
- Non-stochastic (e.g., adversarial) or non-stationary workloads (e.g., in the presence of shocks);
- Systems composed of a very large number of entities, which we study through mean field approximation (mean field games and mean field control).
As a side effect, many of the insights gained in this line of work can often be used to dramatically improve the scalability and performance of more standard machine or deep learning implementations on supercomputers.
The POLARIS members are thus particularly interested in the design and analysis of adaptive learning algorithms for multi-agent systems, i.e., agents that seek to progressively improve their performance on a specific task (see Figure). The resulting algorithms should not only learn an efficient (Nash) equilibrium, but they should also be capable of doing so quickly (low regret), even when facing the difficulties associated with a distributed context (lack of coordination, uncertain world, information delays, limited feedback, ...).
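To make the kind of limited-feedback learner discussed above concrete, here is a minimal sketch of an exponential-weights bandit algorithm (EXP3-style); the arm set, reward model and parameter values are purely illustrative and not tied to any specific POLARIS result:

```python
import math
import random

def exp3(reward_of_arm, n_arms, horizon, gamma=0.1, seed=0):
    """EXP3-style learner: only the reward of the arm actually played is
    observed (bandit feedback); rewards are assumed to lie in [0, 1]."""
    rng = random.Random(seed)
    weights = [1.0] * n_arms
    total = 0.0
    for t in range(horizon):
        w_sum = sum(weights)
        probs = [(1 - gamma) * w / w_sum + gamma / n_arms for w in weights]
        arm = rng.choices(range(n_arms), weights=probs)[0]
        reward = reward_of_arm(arm, t)        # only this value is revealed
        total += reward
        estimate = reward / probs[arm]        # importance-weighted estimate
        weights[arm] *= math.exp(gamma * estimate / n_arms)
    return total

# Toy usage: 3 servers with unknown, noisy response quality; the learner
# gradually concentrates its dispatch decisions on the best one.
qualities = [0.3, 0.5, 0.8]
avg = exp3(lambda a, t: 1.0 if random.random() < qualities[a] else 0.0,
           n_arms=3, horizon=10000) / 10000
print(avg)
```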
In the rest of this document, we describe in detail our new results in the above areas.
Experiments in large-scale distributed systems are costly, difficult to control and therefore difficult to reproduce. Although many of these digital systems have been built by humans, they have reached such a level of complexity that we are no longer able to study them like simple artificial systems, and we have to deal with the same kind of experimental issues as the natural sciences. The development of a sound experimental methodology for the evaluation of resource management solutions is among the most important ways to cope with the growing complexity of computing environments. Although computing environments come with their own specific challenges, we believe such general observation problems should be addressed by borrowing good practices and techniques developed in many other domains of science.
This research theme builds on a transverse activity on Open science
and reproducible research and is organized into the following two
directions: (1) Experimental design (2) Smart monitoring and
tracing. As we will explain in more detail hereafter, this transverse
activity and these research directions span several research areas, and our
goal within the POLARIS project is foremost to transfer original ideas
from other domains of science to the distributed and high performance
computing community.
As explained in the previous section, the first difficulty encountered
when modeling large scale computer systems is to observe these systems
and extract information on the behavior of both the architecture, the
middleware, the applications, and the users. The second difficulty is
to visualize and analyze such multi-level traces to understand how the
performance of the application can be improved. While a lot of effort
is put into visualizing scientific data, comparatively little effort
has gone into developing techniques specifically tailored for
understanding the behavior of distributed systems. Many visualization
tools have been developed by renowned HPC groups over the past decades (e.g.,
BSC 100, Jülich and TU
Dresden 99, 71,
UIUC 88, 103, 91 and
ANL 116, Inria
Bordeaux 76 and
Grenoble 118, ...) but most of these tools
build on the classical information visualization
mantra 108 that consists in always first
presenting an overview of the data, possibly by plotting everything if
computing power allows, and then allowing users to zoom and filter,
providing details on demand. However in our context, the amount of
data comprised in such traces is several orders of magnitude larger
than the number of pixels on a screen and displaying even a small
fraction of the trace leads to harmful visualization
artifacts 95. Such traces are typically
made of events that occur at very different time and space scales,
which unfortunately hinders classical approaches. Such visualization
tools have focused on easing interaction and navigation in the trace
(through Gantt charts, intuitive filters, pie charts and Kiviat diagrams),
but they are very difficult to maintain and evolve, and they require
significant experience to identify performance bottlenecks.
Therefore, many groups have more recently proposed, in combination with these tools, techniques that help identify the structure of the application or regions of interest (applicative, spatial or temporal). For example, researchers from the SDSC 98 propose segment-matching techniques based on clustering (with Euclidean or Manhattan distances) of the start and end dates of the segments, which reduces the amount of information to display. Researchers from the BSC use clustering, linear regression and Kriging techniques 107, 94, 87 to identify and characterize (in terms of performance and resource usage) application phases and to present aggregated representations of the trace 106. Researchers from Jülich and TU Darmstadt have proposed techniques to identify specific communication patterns that incur wait states 113, 63.
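As a simple illustration of this family of techniques (a sketch with made-up trace data, not the cited tools themselves), segments can be clustered by their start and end dates so that a visualization only needs to draw one representative, or an aggregate, per cluster:

```python
import numpy as np

# Hypothetical trace: each row is the (start, end) of a computation segment,
# in seconds. Segments with similar dates are grouped by a basic k-means
# (Euclidean distance), mirroring the segment-matching idea described above.

def kmeans(points, k=3, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Euclidean distance from every segment to every cluster center
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

segments = np.array([[0.0, 1.1], [0.1, 1.0], [5.0, 6.2],
                     [5.1, 6.0], [9.8, 12.0], [10.0, 11.9]])
labels, centers = kmeans(segments, k=3)
print(labels)    # segments sharing a label can be rendered as one block
print(centers)   # representative (start, end) per cluster
```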
Evaluating the scalability, robustness, energy consumption and performance of large infrastructures such as exascale platforms and clouds raises severe methodological challenges. The complexity of such platforms mandates empirical evaluation, but direct experimentation via an application deployment on a real-world testbed is often limited by the few platforms available at hand and is sometimes even impossible (cost, access, early stages of the infrastructure design, ...). Unlike direct experimentation via an application deployment on a real-world testbed, simulation enables fully repeatable and configurable experiments that can often be conducted quickly for arbitrary hypothetical scenarios. In spite of these promises, current simulation practice is often not conducive to obtaining scientifically sound results. To date, most simulation results in the parallel and distributed computing literature are obtained with simulators that are ad hoc, unavailable, undocumented, and/or no longer maintained. For instance, Naicken et al. 62 point out that out of 125 recent papers they surveyed that study peer-to-peer systems, 52% use simulation and mention a simulator, but 72% of them use a custom simulator. As a result, most published simulation results build on throw-away (short-lived and non-validated) simulators that are specifically designed for a particular study, which prevents other researchers from building upon them. There is thus a strong need for recognized simulation frameworks by which simulation results can be reproduced, further analyzed and improved.
The SimGrid simulation toolkit 74,
whose development is partially supported by POLARIS, is specifically
designed for studying large scale distributed computing systems. It
has already been successfully used for the simulation of grid, volunteer
computing, HPC, and cloud infrastructures, and we have constantly invested
in software quality, scalability 66,
and the validity of the underlying network
models 64, 111. Many simulators
of MPI applications have been developed by renowned HPC groups (e.g.,
at SDSC 109, BSC 60,
UIUC 117, Sandia Nat. Lab. 112,
ORNL 67 or ETH Zürich 89 for
the most prominent ones). Yet, to scale, most of them build on
restrictive network and application modeling assumptions that make
them difficult to extend to more complex architectures and to
applications that do not solely build on the MPI API. Furthermore,
simplistic modeling assumptions generally prevent faithful predictions
of execution times, which limits the use of simulation to the
indication of gross trends at best. Our goal is to improve the quality
of SimGrid to the point where it can be used effectively on a daily
basis by practitioners to reproduce the dynamics of real HPC
systems.
We also develop another simulator, PSI (Perfect
SImulator) 78, 72, dedicated to
the simulation of very large systems that can be modeled as Markov
chains. PSI provides a set of simulation kernels for Markov chains
specified by events. It allows one to sample stationary distributions
through the Perfect Sampling method (pioneered by Propp and
Wilson 101) or simply to generate trajectories with
a forward Monte-Carlo simulation leveraging time parallel simulation
(pioneered by Fujimoto 82, Lin and
Lazowska 93). One of the strengths of
the PSI framework is its expressiveness, which allows us to
easily study networks with finite and infinite capacity
queues 73. Although PSI already allows the simulation
of very large and complex systems, our main objective is to push
its scalability even further and improve its capabilities by one or
several orders of magnitude.
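To illustrate the perfect sampling idea underlying PSI, here is a minimal sketch of monotone coupling from the past (Propp and Wilson) for a toy birth-death chain; the model, names and parameters are illustrative only and do not reflect PSI's actual event-based API:

```python
import random

def cftp_birth_death(K, p_arrival=0.4, seed=0):
    """Exact (perfect) sample from the stationary distribution of a simple
    birth-death chain on {0, ..., K}, via monotone coupling from the past."""
    rng = random.Random(seed)
    events = []    # common random innovations; events[k] drives the
                   # transition from time -(k+1) to time -k
    horizon = 1
    while True:
        while len(events) < horizon:           # extend further into the past
            events.append(rng.random())
        lo, hi = 0, K                          # extreme states at time -horizon
        for u in reversed(events[:horizon]):   # replay the same events forward
            if u < p_arrival:
                lo, hi = min(lo + 1, K), min(hi + 1, K)
            else:
                lo, hi = max(lo - 1, 0), max(hi - 1, 0)
            # the update is monotone, so lo <= hi is preserved
        if lo == hi:                           # coalescence at time 0:
            return lo                          # exact stationary sample
        horizon *= 2                           # otherwise, restart further back

# Draw a few exact samples from the stationary distribution.
samples = [cftp_birth_death(K=10, p_arrival=0.4, seed=s) for s in range(5)]
print(samples)
```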
Many systems can be effectively described by stochastic population models. These systems are composed of a set of entities interacting with one another, and their exact analysis suffers from the curse of dimensionality: the state space grows exponentially with the number of entities. This results in the need for approximation techniques. Mean field analysis offers a viable, and often very accurate, solution for large populations. Within the POLARIS project, we will continue developing both the theory behind these approximation techniques and their applications. Typically, these techniques require a homogeneous population of objects where the dynamics of the entities depend only on their state (the state space of each object must not scale with the number of objects).
Game theory is a thriving interdisciplinary field that studies the interactions between competing optimizing agents, be they humans, firms, bacteria, or computers. As such, game-theoretic models have met with remarkable success when applied to complex systems consisting of interdependent components with vastly different (and often conflicting) objectives – ranging from latency minimization in packet-switched networks to throughput maximization and power control in mobile wireless networks.
In the context of large-scale, decentralized systems (the core focus of the POLARIS project), it is more relevant to take an inductive, “bottom-up” approach to game theory, because the components of a large system cannot be assumed to perform the numerical calculations required to solve a very-large-scale optimization problem.
In view of this, POLARIS' overarching objective in this area is to develop novel algorithmic frameworks that offer robust performance guarantees when employed by all interacting decision-makers.
A key challenge here is that most of the literature on learning in games has focused on static games with a finite number of actions per player 81, 104.
While relatively tractable, such games are ill-suited to practical applications where players pick an action from a continuous space or when their payoff functions evolve over time – this being typically the case in our target applications (e.g., routing in
packet-switched networks or energy-efficient throughput maximization in wireless).
On the other hand, the framework of online convex optimization typically provides worst-case performance bounds on the learner's regret that the agents can attain irrespective of how their environment varies over time.
However, if the agents' environment is determined chiefly by their interactions, these bounds are fairly loose, so more sophisticated convergence criteria should be applied.
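To fix notation, the regret criterion alluded to above can be written (in a minimal, standard form; notation ours) as
\[
\operatorname{Reg}(T) \;=\; \max_{x \in \mathcal{X}} \sum_{t=1}^{T} \big[\, u_t(x) - u_t(x_t) \,\big],
\]
where $\mathcal{X}$ is the learner's convex action set, $x_t$ is the action chosen at stage $t$, and $u_t$ is the (concave) payoff function encountered at that stage. No-regret algorithms such as online gradient ascent guarantee $\operatorname{Reg}(T) = O(\sqrt{T})$ against arbitrary payoff sequences; however, this worst-case guarantee says little about whether the joint play of several such learners actually converges to a Nash equilibrium.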
From an algorithmic standpoint, a further challenge occurs when players can only observe their own payoffs (or a perturbed version thereof). In this bandit-like setting, regret-matching or trial-and-error procedures guarantee convergence to an equilibrium in a weak sense in certain classes of games. However, these results apply exclusively to static, finite games: learning in games with continuous action spaces and/or nonlinear payoff functions cannot be studied within this framework. Furthermore, even in the case of finite games, the complexity of the algorithms described above is not known, so it is impossible to decide a priori which algorithmic scheme can be applied to which application.
Supercomputers typically comprise thousands to millions of multi-core
CPUs with GPU accelerators interconnected by complex interconnection
networks that are typically structured as an intricate hierarchy of
network switches. Capacity planning and management of such systems
raises challenges not only in terms of computing efficiency but also in
terms of energy consumption. Most legacy (SPMD) applications struggle
to benefit from such infrastructure since the slightest failure or
load imbalance immediately causes the whole program to stop or at best
to waste resources. To scale and handle the stochastic nature of
resources, these applications have to rely on dynamic runtimes that
schedule computations and communications in an opportunistic way. Such
evolution raises challenges not only in terms of programming but also
in terms of observation (complexity and dynamicity prevent experiment
reproducibility, intrusiveness hinders large-scale data collection,
...) and analysis (dynamic and flexible application structures make
classical visualization and simulation techniques totally ineffective
and require building on ad hoc information about the application
structure).
Considerable interest has arisen from the seminal prediction that the use of multiple-input, multiple-output (MIMO) technologies can lead to substantial gains in information throughput in wireless communications, especially when used at a massive level. In particular, by employing multiple inexpensive service antennas, it is possible to exploit spatial multiplexing in the transmission and reception of radio signals, the only physical limit being the number of antennas that can be deployed on a portable device. As a result, the wireless medium can accommodate greater volumes of data traffic without requiring the reallocation (and subsequent re-regulation) of additional frequency bands. In this context, throughput maximization in the presence of interference by neighboring transmitters leads to games with convex action sets (covariance matrices with trace constraints) and individually concave utility functions (each user's Shannon throughput); developing efficient and distributed optimization protocols for such systems is one of the core objectives of Theme 5.
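For concreteness, one standard formulation of this game (notation ours, under the usual Gaussian signal model assumptions) is that each user $k$ selects an input signal covariance matrix $\mathbf{Q}_k$ in the convex set $\mathcal{Q}_k = \{\mathbf{Q} \succeq 0 : \operatorname{tr}(\mathbf{Q}) \le P_k\}$ so as to maximize its Shannon throughput
\[
u_k(\mathbf{Q}_k; \mathbf{Q}_{-k}) \;=\; \log\det\!\big(\mathbf{I} + \mathbf{H}_k^{\dagger}\,\mathbf{W}_{-k}^{-1}\,\mathbf{H}_k\,\mathbf{Q}_k\big),
\]
where $\mathbf{H}_k$ is the channel matrix of user $k$ and $\mathbf{W}_{-k}$ is the noise-plus-interference covariance induced by the other users' choices $\mathbf{Q}_{-k}$. Each $u_k$ is concave in $\mathbf{Q}_k$ over a compact convex set, which is precisely the game-theoretic structure that the distributed optimization protocols mentioned above are designed to exploit.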
Another major challenge that occurs here is due to the fact that the efficient physical layer optimization of wireless networks relies on perfect (or close to perfect) channel state information (CSI), on both the uplink and the downlink. Due to the vastly increased computational overhead of this feedback – especially in decentralized, small-cell environments – the ongoing transition to fifth generation (5G) wireless networks is expected to go hand-in-hand with distributed learning and optimization methods that can operate reliably in feedback-starved environments. Accordingly, one of POLARIS' application-driven goals will be to leverage the algorithmic output of Theme 5 into a highly adaptive resource allocation framework for next-generation wireless systems that can effectively "learn in the dark", without requiring crippling amounts of feedback.
Smart urban transport systems and smart grids are two examples of collective adaptive systems. They consist of a large number of heterogeneous entities with decentralised control and varying degrees of complex autonomous behaviour. We develop analysis tools to help reason about such systems. Our work relies on tools from fluid and mean-field approximation to build decentralized algorithms that solve complex optimization problems. We focus on two problems: the decentralized control of electric grids, and capacity planning in vehicle-sharing systems to improve load balancing.
Social computing systems are online digital systems that rely on their users' personal data to deliver personalized services directly to them. They are omnipresent and include, for instance, recommendation systems, social networks, online media, daily apps, etc. Despite their interest and utility for users, these systems pose critical challenges in terms of privacy, security, transparency, and the respect of certain ethical constraints such as fairness. Solving these challenges involves a mix of measurement and/or auditing to understand and assess the issues, and of modeling and optimization to propose and calibrate solutions.
P. Mertikopoulos is a CNRS bronze medal finalist: https://
SimGrid is a toolkit that provides core functionalities for the simulation of distributed applications in heterogeneous distributed environments. The simulation engine uses algorithmic and implementation techniques toward the fast simulation of large systems on a single machine. The models are theoretically grounded and experimentally validated. The results are reproducible, enabling better scientific practices.
Its models of networks, CPUs and disks are adapted to (Data)Grids, P2P, Clouds, Clusters and HPC, allowing multi-domain studies. It can be used either to simulate algorithms and prototypes of applications, or to emulate real MPI applications through the virtualization of their communication, or to formally assess algorithms and applications that can run in the framework.
The formal verification module explores all possible message interleavings in the application, searching for states violating the provided properties. We recently added the ability to assess liveness properties over arbitrary and legacy codes, thanks to a system-level introspection tool that provides a finely detailed view of the running application to the model checker. This can for example be leveraged to verify both safety and liveness properties, on arbitrary MPI code written in C/C++/Fortran.
Traditional processors have reached architectural limits that heterogeneous multicore designs and hardware specialization (e.g., coprocessors, accelerators, ...) intend to address. However, exploiting such machines introduces numerous challenging issues at all levels, ranging from programming models and compilers to the design of scalable hardware solutions. The design of efficient runtime systems for these architectures is a critical issue. StarPU typically makes it much easier for high performance libraries or compiler environments to exploit heterogeneous multicore machines possibly equipped with GPGPUs or Cell processors: rather than handling low-level issues, programmers may concentrate on algorithmic concerns.

Portability is obtained by means of a unified abstraction of the machine. StarPU offers a unified offloadable task abstraction named "codelet". Rather than rewriting the entire code, programmers can encapsulate existing functions within codelets. In case a codelet may run on heterogeneous architectures, it is possible to specify one function for each architecture (e.g., one function for CUDA and one function for CPUs). StarPU takes care of scheduling and executing those codelets as efficiently as possible over the entire machine. In order to relieve programmers from the burden of explicit data transfers, a high-level data management library enforces memory coherency over the machine: before a codelet starts (e.g., on an accelerator), all of its data are transparently made available on the compute resource.

Given its expressive interface and portable scheduling policies, StarPU obtains portable performance by efficiently (and easily) using all computing resources at the same time. StarPU also takes advantage of the heterogeneous nature of a machine, for instance by using scheduling strategies based on auto-tuned performance models.
StarPU is a task programming library for hybrid architectures
The application provides algorithms and constraints:
- CPU/GPU implementations of tasks
- A graph of tasks, using either StarPU's high-level GCC plugin pragmas or StarPU's rich C API
StarPU handles run-time concerns:
- Task dependencies
- Optimized heterogeneous scheduling
- Optimized data transfers and replication between main memory and discrete memories
- Optimized cluster communications
Rather than handling low-level scheduling and optimizing issues, programmers can concentrate on algorithmic concerns!
marmoteCore is a C++ environment for modeling with Markov chains. It consists of a reduced set of high-level abstractions for constructing state spaces, transition structures and Markov chains (discrete-time and continuous-time). It provides the ability to construct hierarchies of Markov models, from the most general to the most particular, and to equip each level with specifically optimized solution methods.
This software was started within the ANR MARMOTE project: ANR-12-MONU-00019.
The new results produced by the team in 2020 can be grouped into the following categories; for each new result, see the corresponding reference for further details.
Patrick Loiseau has a Cifre contract with Naver labs (2020-2023) on "Fairness in multi-stakeholder recommendation platforms", which supports the PhD student Till Kletti.
Bary Pradelski (PI), P. Mertikopoulos and P. Loiseau obtained funding from the ANR for the project ALIAS (Adaptive Learning for Interactive Agents and Systems). This is a bilateral PRCI (collaboration internationale) project joint with Singapore University of Technology and Design (SUTD). The Singapore team consists of G. Piliouras and G. Panageas.
ORACLESS (2016–2021) is an ANR starting grant (JCJC) coordinated by Panayotis Mertikopoulos. The goal of the project is to develop highly adaptive resource allocation methods for wireless communication networks that are provably capable of adapting to unpredictable changes in the network. In particular, the project will focus on the application of online optimization and online learning methodologies to multi-antenna systems and cognitive radio networks.
Nicolas Gast obtained funding from the ANR for the JCJC project REFINO (Refined Mean Field Optimization). The main objective of this project is to leverage our expertise on mean field and refined mean field approximation to solve distributed optimization problems.
Patrick Loiseau obtained funding from the ANR for FairPlay, a starting grant (JCJC) awarded in September 2020 (covering the period 2021-2025). The goal of the project is to develop fair algorithms via game theory and sequential learning techniques, in particular for problems of auctions and of matching.
Patrick Loiseau and Panayotis Mertikopoulos have a grant from the DGA (2018-2021) that complements the funding of a PhD student (Benjamin Roussillon) working on game-theoretic models for adversarial classification.
DISCMAN project (UGA IRS project). DISCMAN (Distributed Control for Multi-Agent systems and Networks) is a joint IRS project funded by the IDEX Université Grenoble Alpes. Its main objective is to develop distributed equilibrium convergence algorithms for large-scale control and optimization problems, both offline and online. It is coordinated by P. Mertikopoulos (POLARIS) and involves a joint team of researchers from the LIG and LJK laboratories in Grenoble.
A. Legrand was scientific chair of the "Performance and Power Modeling, Prediction and Evaluation" track for the EuroPar 2020 conference.
P. Mertikopoulos: Area chair at NeurIPS 2020; Area chair at ICLR 2021 (paper selection in 2020, conference taking place in 2021)
J. Anselmi: IFIP Performance
N. Gast: SIGMETRICS, ICML, ICLR
B. Gaujal: SIGMETRICS
A. Legrand: EuroPar, PRECS
P. Loiseau: NeurIPS, ICML, AAAI, IJCAI, PETS, NetEcon
All members of the team are active reviewers for several international conferences.
P. Mertikopoulos is associate editor for JDG (Journal of Dynamics and Games).
P. Mertikopoulos is associate editor for MCAP (Methodology and Computing in Applied Probability).
P. Mertikopoulos is associate editor for RAIRO Operations Research
N. Gast is associate editor for PEVA (Performance Evaluation) and for Stochastic Models.
P. Loiseau is an associate editor at ACM Transactions on Internet Technology (TOIT)
P. Loiseau is an associate editor at IEEE Transactions on Big Data (TBD)
All members of the team are active reviewers for several international journals.
A. Legrand: Two invited talks at the JDEV (http://
P. Mertikopoulos:
N. Gast is co-responsible for the doctoral school "MSTII" (mathematics and computer science)
B. Gaujal is a member of the scientific committee of GDR-IM and a member of the council of ‘pole MSTIC’ Grenoble
A. Legrand is responsible for the SRCPR ("Systèmes Répartis, Calcul Parallèle et Réseaux") research axis of the LIG.
A. Legrand is leading the HAC-SPECIS ("High-performance Application and Computers, Studying PErformance and Correctness In Simulation") Inria Project Laboratory.
P. Mertikopoulos is responsible for the "Noeud Est" of the GDR Jeux (RT 2932)
P. Mertikopoulos is the working group coordinator, core group member and management committee (MC) representative for France in the European Network for Game Theory (GAMENET).
P. Loiseau is chair of the steering committee of NetEcon (since 2013)
P. Loiseau is the co-holder (with Marie-Christine Rousset from LIG) of a chair of the 3IA institute MIAI at Grenoble Alpes on “Explainable and Responsible AI”.
Supervision of PhD students and postdocs:
Supervision of M2 Students:
J.-M. Vincent is coordinating all the "Médiation Scientifique" (science outreach) activities for Inria Grenoble Rhône-Alpes.
Bary Pradelski has been particularly active during the COVID-19 pandemic by promoting the Green zoning strategy to exit lockdown.
"Aiming for zero Covid-19: Europe needs to take action" with collective of ca. 30 academics, published in deVolkskrant, El Pais, la Rebubblica, Le Monde, Rzeczpospolita, Sueddeutsche Zeitung.
"Vacunación: igualdad, fraternidad… y eficacia" with Miquel Oliu-Barton, El Mundo (15 December 2020).
"Covid-19 : « Qui vacciner en priorité ? Selon quels critères ? Comment hiérarchiser tout cela ? »" with Miquel Oliu-Barton, Le Monde (21 November 2020).
"Covid-19 : sanctuarisons les « zones vertes » !" with Miquel Oliu-Barton, Les Echos (14 October 2020).
"Más allá de las fronteras nacionales" with Miquel Oliu-Barton, El Pais (17 September 2020).
"Coronavirus : il faut « un plan de reconfinements ciblés réaliste, intelligible et commun »" with Miquel Oliu-Barton, Le Monde (26 August 2020).
"Sauver la saison touristique européenne" with Miquel Oliu-Barton, Le Monde (9 May 2020).
"Conectando las ‘zonas verdes’ de Europa: una propuesta para salvar el turismo" with Miquel Oliu-Barton, El Mundo (6 May 2020).
"Il faut une méthode de déconfinement efficace et sécurisée" with Miquel Oliu-Barton and Luc Attia, Le Monde (27 April 2020).
"Green zones: a mathematical proposal for how to exit from the COVID-19 lockdown" with Miquel Oliu-Barton, The Conversation (17 April 2020).