
2024 Activity Report: Project-Team ARGO

RNSR: 202324449E
  • Research center: Inria Paris Centre
  • In partnership with: École normale supérieure de Paris
  • Team name: Learning, graphs and distributed optimization
  • Domain: Applied Mathematics, Computation and Simulation
  • Theme: Optimization, machine learning and statistical methods

Keywords

Computer Science and Digital Science

  • A3.4. Machine learning and statistics
  • A3.4.3. Reinforcement learning
  • A3.4.8. Deep learning
  • A3.5. Social networks
  • A3.5.1. Analysis of large graphs
  • A6.1.2. Stochastic Modeling
  • A6.2.3. Probabilistic methods
  • A6.2.6. Optimization
  • A6.4.2. Stochastic control
  • A7.1. Algorithms
  • A7.1.3. Graph algorithms
  • A8.1. Discrete mathematics, combinatorics
  • A8.2. Optimization
  • A8.7. Graph theory
  • A8.8. Network science
  • A8.9. Performance evaluation
  • A9. Artificial intelligence

1 Team members, visitors, external collaborators

Research Scientists

  • Ana Busic [Team leader, INRIA, Researcher, HDR]
  • Elisabetta Cornacchia [INRIA, Starting Research Position, from Sep 2024]
  • Marc Lelarge [INRIA, Senior Researcher, HDR]
  • Laurent Massoulié [INRIA, Senior Researcher, HDR]
  • Sean Meyn [INRIA, Chair, from May 2024 until Jun 2024]
  • Kevin Scaman [INRIA, ISFP]
  • Laurent Viennot [INRIA, Senior Researcher, HDR]

Faculty Members

  • Louise Budzynski [ENS PARIS, Associate Professor, from Sep 2024]
  • Jean-Michel Fourneau [UVSQ, Professor, Delegation]

Post-Doctoral Fellows

  • Ashok Krishnan Komalan Sindhu [INRIA, Post-Doctoral Fellow, from Oct 2024]
  • Batiste Le Bars [INRIA, Post-Doctoral Fellow, until Jul 2024]
  • Constantin Philippenko [INRIA, Post-Doctoral Fellow]
  • Sushil Mahavir Varma [INRIA, Post-Doctoral Fellow, from Sep 2024]

PhD Students

  • Killian Bakong Epoune [INRIA]
  • Claire Bizon Monroc [INRIA]
  • Matthieu Blanke [INRIA, until Oct 2024]
  • Baptiste Corban [IFPEN, from Nov 2024]
  • Romain Cosson [INRIA]
  • Mathieu Even [INRIA, until Aug 2024]
  • Jean Adrien Lagesse [ENS PARIS]
  • Thomas Le Corre [INRIA, from Nov 2024]
  • Thomas Le Corre [ENS PARIS, until Nov 2024]
  • Shu Li [INRIA]
  • Jakob Maier [INRIA]
  • David Robin [INRIA]
  • Jules Sintes [INRIA, from Nov 2024]
  • Amaury Triboulin [INRIA, until Feb 2024]
  • Martin Van Waerebeke [INRIA]
  • Lucas Weber [DGA]

Interns and Apprentices

  • Pierre-Gabriel Berlureau [ENS PARIS, from Oct 2024]
  • Baptiste Corban [IFPEN, Intern, from Apr 2024 until Sep 2024]
  • Louann Coste [INRIA, Intern, from Feb 2024 until Jul 2024]
  • Emile Dailly [INRIA, Intern, from Jul 2024 until Aug 2024]
  • Vianey Darsel [INRIA, Intern, until Jan 2024]
  • Maxime Muhlethaler [ENPC, Intern, from Jun 2024 until Sep 2024]
  • Ganesha Srithar [INRIA, Intern, from Apr 2024 until Sep 2024]
  • Louis Vassaux [ENS PARIS, Intern, from Apr 2024]

Administrative Assistant

  • Marina Kovacic [INRIA]

Visiting Scientists

  • Yu-Zhen Chen [University of Massachusetts Amherst, until Feb 2024]
  • Pierluigi Crescenzi [GSSI, until Jul 2024]

2 Overall objectives

The research activity of ARGO focuses on learning, optimization and control methods for graphs and networks. The challenges we aim to address are:

Determine efficient polynomial-time algorithms for fundamental graph processing tasks such as clustering and graph alignment; advance understanding of the "hard phase" for such graph problems, which consists of problem instances for which no polynomial-time algorithm is known, while non-polynomial-time algorithms are known to solve them.

Develop new deep learning architectures. We plan to use graph theory and algorithms to better understand and improve neural networks, either by reducing their size or by enhancing their structure. Message passing is the dominant paradigm in Graph Neural Networks (GNNs), but it has fundamental limitations due to its equivalence to the Weisfeiler-Lehman isomorphism test. We will investigate architectures that break away from message-passing schemes in order to develop more expressive GNN architectures.

Develop distributed algorithms for federated learning, achieving optimal performance for: supervised learning of a common model, under constraints of privacy and energy consumption; personalized learning of individual models; unsupervised learning of clustering and mixture models.

Advance the theory of reinforcement learning by investigating convergence properties and connections with control theory. We also plan to develop new reinforcement learning algorithms for distributed systems.

ARGO is a spin-off of the DYOGENE project-team. DYOGENE was created in 2013 jointly with Département d'Informatique de l'École Normale Supérieure (DIENS).

3 Research program

The research activity of ARGO is structured around three methodological axes:

3.1 Learning and algorithms on graphs

Participants: Louise Budzynski, Ana Bušić, Jean-Michel Fourneau, Marc Lelarge, Laurent Massoulié, Laurent Viennot.

Information-theoretic versus computational limits:

The two basic lines of inquiry in statistical inference have long been: (i) to determine fundamental statistical (i.e., information-theoretic) limits; and (ii) to find efficient algorithms achieving these limits. However, for many structured inference problems, it is not clear if statistical optimality is compatible with efficient computation. Statistically optimal estimators often entail an infeasible exhaustive search. Conversely, for many settings the computationally efficient algorithms we know are statistically suboptimal, requiring higher signal strength or more data than is information-theoretically necessary. This phenomenon suggests that the information-theoretic limit on the signal-to-noise ratio (or the amount of data) for these problems, as studied since the beginning of mathematical statistics, is not the practically relevant benchmark for modern high-dimensional settings. Instead, the practically relevant benchmark is the fundamental statistical limit for computationally efficient algorithms. By now dozens of fundamental high-dimensional statistical estimation problems are conjectured to have different computational and statistical limits. These problems (for example, sparse linear regression or sparse phase retrieval) are ubiquitous in practice and well-studied theoretically, yet the central mysteries remain: What are the fundamental data limits for computationally efficient algorithms? How do we find optimal efficient algorithms? At a more basic level, are these statistical-computational gaps in various problems appearing for a common reason? Is there hope of building a widely applicable theory describing and explaining statistical-computational trade-offs?

Algorithmic approaches and their computational limits:

Of particular interest to us are specific problems of learning on graphs which arise in many situations and hence are strongly motivated by various application scenarios. Identification of new algorithms and characterization of their computational limits can thus have important consequences on various application areas and at the same time advance our global understanding of the above-mentioned discrepancy between statistical and computational limits.

Two examples of graph inference problems important for the team are:

  • Graph clustering, also known as community detection, with a plethora of applications from recommender systems to inference of protein function in protein-protein interaction networks.
  • Graph alignment is an important generic task, relevant for social network data de-anonymization, registration of medical imaging data, automatic machine translation, ...

Our long-term objective is to augment our algorithmic toolkit while deepening our understanding of the so-called hard phase that arises when computational and information-theoretic limits do not match. The algorithms that we shall develop and analyze will, at least initially, be versions or variations of: message-passing methods akin to belief propagation; spectral methods; semi-definite programming; Markov chain Monte Carlo simulation.

Graph algorithms for neural networks:

Neural networks can be represented as weighted graphs. We propose to study their analysis and optimization through the lens of graph algorithms. An important aspect of neural network optimization concerns the reduction of their size to save resources in terms of memory and energy. In particular, a large body of literature concerns pruning techniques, which consist in updating a network through the removal of links. It appears that classical network architectures can often be greatly reduced in size with such techniques while preserving accuracy. However, starting from a rather large network seems necessary for training. Pruning is reminiscent of graph sparsification, which consists in summarizing a graph by a smaller one while preserving similar properties. For example, a spanner is a subgraph in which distances are preserved up to some stretch factor. Similarly, a cut sparsifier is a sparser weighted graph in which the weights of cuts are approximately preserved. Such techniques could inspire new pruning approaches for neural networks, with a tunable trade-off between size reduction and approximation quality.
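As an illustration of the kind of size reduction pruning performs, the sketch below (plain NumPy, purely illustrative: the magnitude criterion and the 90% sparsity level are arbitrary baseline choices, not a method studied by the team) removes the smallest-magnitude entries of a dense weight matrix.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that a fraction
    `sparsity` of the weights is removed (global magnitude pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)            # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold       # keep strictly larger entries
    return weights * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))              # a dense layer's weight matrix
W_sparse = magnitude_prune(W, sparsity=0.9)
print("kept fraction:", (W_sparse != 0).mean())   # ~0.1
```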

Temporal graphs:

Time has an important role in a large number of practical graphs ranging from social to transportation networks. Most prominently, this concerns graphs that are the result of interactions of people or objects such as customer-product purchase graphs. However, their time aspect is often ignored for solving tasks such as community detection or recommendation. In its simplest form, a temporal graph is a multigraph where each edge is labeled with a date.

Over the past decades numerous works have revisited various graph problems and concepts in this temporal context. The most central one is temporal connectivity which is based on defining a temporal path as a path forming a sequence of edges with increasing date labels. Clustering appears as a natural problem to revisit in this context. There are also graphs where the date labels can be chosen while respecting some scheduling constraints. This is the case for public transport networks where edges along a bus trip have to be scheduled one after another but where the starting time of each bus can be tuned for global optimization purposes. This inverse problem of defining a temporal graph from a graph raises interesting structural questions. For example, characterizing the graphs that can be transformed into a fully temporally connected temporal graph while having a minimum number of edges was a challenging question raised by the problem of all-to-all gossiping. Other problems such as maximizing flows or finding temporal matchings could lead to new interesting structural characterizations.
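To fix ideas, the sketch below (illustrative; the edge list, the undirectedness assumption and the strict-increase convention are choices of this example) computes, for a temporal graph given as a list of dated edges, the earliest date at which each vertex can be reached from a source along a temporal path with strictly increasing date labels.

```python
from collections import defaultdict
import math

def earliest_arrival(temporal_edges, source):
    """temporal_edges: list of (u, v, t) meaning edge {u, v} is present at date t.
    Returns, for every reached vertex, the earliest date at which it can be
    reached from `source` by a temporal path with strictly increasing dates."""
    arrival = defaultdict(lambda: math.inf)
    arrival[source] = -math.inf              # the source is reached "before time"
    for u, v, t in sorted(temporal_edges, key=lambda e: e[2]):
        # an edge at date t can extend a path arriving strictly before t
        if arrival[u] < t and t < arrival[v]:
            arrival[v] = t
        if arrival[v] < t and t < arrival[u]:    # edges are undirected here
            arrival[u] = t
    return dict(arrival)

edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 5), ("c", "d", 3)]
print(earliest_arrival(edges, "a"))   # {'a': -inf, 'b': 1, 'c': 2, 'd': 3}
```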

Our current efforts in this area are outlined in Section 7.1.

3.2 Deep Learning on structured data and new architectures for learning

Participants: Marc Lelarge, Kevin Scaman.

Typical machine learning algorithms operating on structured data require representations of the often symbolic data as numerical vectors. Vector representations of the data range from handcrafted feature vectors via automatically constructed graph kernels to learned representations, either computed by dedicated embedding algorithms or implicitly computed by learning architectures like graph neural networks. The performance of machine learning methods crucially depends on the quality of the vector representations. Given the importance of the topic, there is surprisingly little theoretical work on vector embeddings, especially when it comes to representing structural information that goes beyond metric information (that is, distances in a graph). The objects we want to embed either are graphs, possibly labelled or weighted, or more generally relational structures, or they are nodes of a (presumably large) graph or more generally elements or tuples appearing in a relational structure. When we embed entire graphs or structures, we speak of graph embeddings or relational structure embeddings; when we embed only nodes or elements we speak of node embeddings. The key theoretical questions we will ask about vector embeddings of objects are the following:

Expressivity: Which properties of objects are represented by the embedding? What is the meaning of the induced distance measure? Are there geometric properties of the latent space that represent meaningful relations on the objects?

Complexity: What is the computational cost of computing the vector embedding? What are efficient embedding algorithms? How can we efficiently retrieve semantic information from the embedded data, for example to answer queries or to solve combinatorial problems?

Geometric Deep Learning:

Recently, geometric deep learning emerged as an attempt at a geometric unification of a broad class of machine learning problems from the perspectives of symmetry and invariance. These principles not only underlie the breakthrough performance of convolutional neural networks and the recent success of graph neural networks, but also provide a principled way to construct new types of problem-specific inductive biases.

Deep learning has brought in the past decade a revolution in data science and made possible many tasks previously thought to be beyond reach, whether in computer vision, speech recognition, natural language translation, or playing Go. But we now have a zoo of different neural network architectures for different kinds of data, and it is difficult to understand the relations between different methods. Geometric Deep Learning serves two purposes: first, to provide a common mathematical framework and unifying principles from which to derive the most successful neural network architectures, and second, to give a constructive procedure to build future architectures in a principled way.

We consider high-dimensional problems with an additional structure that comes from the geometry of the input signal and explore ways to incorporate this geometric structure into the learning algorithms.

Optimization for Geometric Deep Learning:

Optimization plays a key role in the training of neural networks, and the success or failure of a given architecture is in a large part driven by its amenability to gradient-based optimization methods. In this respect, structured data such as graphs or spatio-temporal time series pose new challenges, as the complexity and structure of the dataset impacts the loss landscape of the training objective in non-trivial, and sometimes detrimental, ways. Graphs are indeed high-dimensional objects, and possess a large variety of characteristics, ranging from local ones (such as node degree, neighborhood sizes or the number of triangles) to global ones (such as connectivity or the presence of clusters and communities). Efficient neural network architectures should be able to correctly identify the impact of such characteristics on the desired output, and thus be sufficiently expressive to encode all these characteristics. As a result, the parameters of the deep learning architecture are supposed to encode these patterns, structures and invariances, and the optimization algorithm used during training should be able to detect them in the data. Understanding when gradient descent can or cannot find a given pattern or invariance in the dataset can thus help deep learning practitioners know which architectures to use in which circumstances, and alleviate some of the issues related to the lack of transparency of neural networks by better describing the patterns that a given architecture is able to learn.

Our objective will be to better understand the expressive capabilities of structured deep learning architectures such as graph neural networks by investigating the relationships between their structure and their optimization.

Our current efforts in this area are outlined in Section 7.2.

3.3 Distributed optimisation and control

Participants: Ana Bušić, Sean Meyn, Jean-Michel Fourneau, Thomas Le Corre, Kevin Scaman, Laurent Massoulié.

An important trend is to consider distributed optimization and control settings, fuelled, among others, by the growth of cloud computing, the proliferation of mobile devices, and the increasing integration of distributed energy resources in power grids.

Federated Learning and Distributed Optimization:

Federated learning is relevant to situations where learning tasks are to be performed on a dataset that cannot be centralized at one location, either for reasons of storage/communication resources, or for privacy reasons.

Many distinct learning scenarios can be envisioned in this framework, featuring a variety of objectives and constraints. The team has obtained state-of-the-art results for the supervised learning scenario where all agents involved seek to train a common model on the union of their individual datasets (Scaman et al. 2017, Scaman et al. 2018). Besides supervised learning of a common model, we will also tackle so-called personalized learning, whereby individual agents seek a model tailored to their individual dataset yet expect to benefit from collaboration with one another, as well as unsupervised learning.

Reinforcement learning:

Along with the sharp increase in visibility of the field, the rate at which new reinforcement learning algorithms are being proposed is at a new peak. While the surge in activity is creating excitement and opportunities, there is a gap in understanding of basic principles that these algorithms need to satisfy for successful application. We will concentrate our efforts on (i) algorithm design and convergence properties, (ii) exploiting partial knowledge of the system dynamics and/or optimal policy, (iii) connections between control and reinforcement learning.

Distributed control and multi-agent RL:

Motivated by the needs of modern power networks, we investigate distributed control approaches for large populations of agents, using controlled Markov decision processes and mean-field control.

In multi-agent reinforcement learning, our focus is on algorithms that take into account specific network dependence structure between agents.

Our current efforts in this area are outlined in Section 7.3.

4 Application domains

Our main applications are social networks, energy networks, and large language model based code assistants.

5 Highlights of the year

5.1 Plenary talks

L. Massoulié gave a plenary talk at the Allerton Conference.

6 New software, platforms, open data

6.1 New software

6.1.1 Farm2Python

  • Name:
    Interfacing tool for wind farm control research
  • Keywords:
    Wind farm, Python
  • Functional Description:
    Python interfacing tool to wind farm simulators. Makes evaluating and comparing wind farm control algorithms easier for engineers and researchers looking to optimize energy production.
  • Contact:
    Claire Bizon Monroc

6.1.2 WFCRL

  • Name:
    Wind Farm Control Reinforcement Learning
  • Keywords:
    Wind farm, Reinforcement learning, Benchmarking, Python
  • Functional Description:
    Python benchmark library to evaluate reinforcement learning algorithms on wind farm control. Makes evaluating and comparing wind farm control algorithms easier for engineers and researchers looking to optimize energy production.
  • Contact:
    Claire Bizon Monroc

7 New results

Participants: All ARGO.

7.1 Learning and algorithms on graphs

7.1.1 High-dimensional statistical inference

Correlation detection in trees for planted graph alignment.

In [5], motivated by alignment of correlated sparse random graphs, we introduce a hypothesis testing problem of deciding whether or not two random trees are correlated. We study the likelihood ratio test and obtain sufficient conditions under which this task is impossible or feasible. We propose MPAlign, a message-passing algorithm for graph alignment inspired by the tree correlation detection problem. We prove MPAlign to succeed in polynomial time at partial alignment whenever tree detection is feasible. As a result, our analysis of tree detection reveals new ranges of parameters for which partial alignment of sparse random graphs is feasible in polynomial time.

Statistical limits of correlation detection in trees.

In [6], we address the problem of testing whether two observed trees (t,t') are sampled either independently or from a joint distribution under which they are correlated. This problem, which we refer to as correlation detection in trees, plays a key role in the study of graph alignment for two correlated random graphs. Motivated by graph alignment, we investigate the conditions of existence of one-sided tests, i.e. tests which have vanishing type I error and non-vanishing power in the limit of large tree depth.

For the correlated Galton-Watson model with Poisson offspring of mean λ > 0 and correlation parameter s ∈ (0,1), we identify a phase transition in the limit of large degrees at s = √α, where α ≈ 0.3383 is Otter's constant. Namely, we prove that no such test exists for s ≤ √α, and that such a test exists whenever s > √α, for λ large enough. This result sheds new light on the graph alignment problem in the sparse regime (with O(1) average node degrees) and on the performance of the MPAlign method studied in [13, 20], proving in particular the conjecture of [20] that MPAlign succeeds in the partial recovery task for correlation parameter s > √α provided the average node degree λ is large enough. As a byproduct, we identify a new family of orthogonal polynomials for the Poisson-Galton-Watson measure which enjoy remarkable properties. These polynomials may be of independent interest for a variety of problems involving graphs, trees or branching processes, beyond the scope of graph alignment.

Aligning Embeddings and Geometric Random Graphs: Informational Results and Computational Approaches for the Procrustes-Wasserstein Problem.

The Procrustes-Wasserstein problem consists in matching two high-dimensional point clouds in an unsupervised setting, and has many applications in natural language processing and computer vision. In [20], we consider a planted model with two datasets X, Y that consist of n datapoints in R^d, where Y is a noisy version of X, up to an orthogonal transformation and a relabeling of the data points. This setting is related to the graph alignment problem in geometric models. In this work, we focus on the Euclidean transport cost between the point clouds as a measure of performance for the alignment. We first establish information-theoretic results in the high-dimensional (d >> log n) and low-dimensional (d << log n) regimes. We then study computational aspects and propose the 'Ping-Pong algorithm', which alternately estimates the orthogonal transformation and the relabeling, initialized via a Frank-Wolfe convex relaxation. We give sufficient conditions for the method to retrieve the planted signal after a single step. We provide experimental results comparing the proposed approach with the state-of-the-art method of Grave et al. [2019].
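A minimal sketch of the alternating scheme on synthetic data (illustrative only: the relabeling is initialized naively with the identity instead of the Frank-Wolfe convex relaxation used in [20], so recovery of the planted signal is not guaranteed here; the `ping_pong` function and the data generation are assumptions of this sketch, not the authors' code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ping_pong(X, Y, n_iter=20):
    """Alternately estimate an orthogonal matrix Q and a relabeling pi
    such that X[i] is close to (Y @ Q)[pi[i]]."""
    n, d = X.shape
    pi = np.arange(n)                          # naive initialization
    for _ in range(n_iter):
        # Procrustes step: best orthogonal Q for the current relabeling,
        # i.e. argmin_Q ||Y[pi] @ Q - X||_F over orthogonal matrices
        U, _, Vt = np.linalg.svd(Y[pi].T @ X)
        Q = U @ Vt
        # Assignment step: best relabeling for the current Q
        cost = ((X[:, None, :] - (Y @ Q)[None, :, :]) ** 2).sum(axis=2)
        _, pi = linear_sum_assignment(cost)    # pi[i] = row of Y matched to X[i]
    return Q, pi

# Planted instance: Y is a permuted, rotated and noisy copy of X
rng = np.random.default_rng(0)
n, d = 50, 5
X = rng.normal(size=(n, d))
Q_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
perm = rng.permutation(n)
Y = (X @ Q_true.T)[perm] + 0.01 * rng.normal(size=(n, d))

Q_hat, pi_hat = ping_pong(X, Y)
print("fraction of correctly matched points:", np.mean(perm[pi_hat] == np.arange(n)))
```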

7.1.2 Graph algorithms and temporal graphs

Collective Tree Exploration via Potential Function Method.

In [17], we study the problem of collective tree exploration (CTE) in which a team of k agents is tasked to traverse all the edges of an unknown tree as fast as possible, assuming complete communication between the agents [14]. In this paper, we present an algorithm performing collective tree exploration in 2n/k + O(kD) rounds, where n is the number of nodes in the tree, and D is the tree depth. This leads to a competitive ratio of O(√k), the first polynomial improvement over the O(k) ratio of depth-first search. Our analysis holds for an asynchronous generalization of collective tree exploration. It relies on a game with robots at the leaves of a continuously growing tree, extending the "tree-mining game" of [15] and resembling the "evolving tree game" of [Bubeck et al., 2022]. Another surprising consequence of our results is the existence of algorithms {A_k}, for k ∈ N, for layered tree traversal (LTT) with cost at most 2L/k + O(kD), where L is the sum of all edge lengths. For the case of layered trees of width w and unit edge lengths, our guarantee is thus in O(√w·D).

Unweighted Layered Graph Traversal: Passing a Crown via Entropy Maximization.

Introduced by Papadimitriou and Yannakakis in 1989, layered graph traversal is a central problem in online algorithms and mobile computing that has been studied for several decades, and which now is essentially resolved in its original formulation. In [11], we demonstrate that what appears to be an innocuous modification of the problem actually leads to a drastic (exponential) reduction of the competitive ratio. Specifically, we present an algorithm that is O(log^2 w)-competitive for traversing unweighted layered graphs of width w. Our algorithm chooses the agent's position simply according to the probability distribution over the current layer that maximizes the sum of entropies of the induced distributions in the preceding layers.

Certificates in P and Subquadratic-Time Computation of Radius, Diameter, and all Eccentricities in Graphs.

In the context of fine-grained complexity, we investigate in [19] the notion of certificate enabling faster polynomial-time algorithms. We specifically target radius (minimum eccentricity), diameter (maximum eccentricity), and all-eccentricity computations, for which quadratic-time lower bounds are known under plausible conjectures. In each case, we introduce a notion of certificate as a specific set of nodes from which appropriate bounds on all eccentricities can be derived in subquadratic time when this set has sublinear size. The existence of small certificates is a barrier against SETH-based lower bounds for these problems. We indeed prove that for graph classes with small certificates, there exist randomized subquadratic-time algorithms for computing the radius, the diameter, and all eccentricities respectively. Moreover, these notions of certificates are tightly related to algorithms probing the graph through one-to-all distance queries, and help explain the efficiency of practical radius and diameter algorithms from the literature. Our formalization enables a novel primal-dual analysis of a classical approach for diameter computation, which leads to algorithms for radius, diameter and all eccentricities with theoretical guarantees with respect to certain graph parameters. This is complemented by experimental results on various types of real-world graphs showing that these parameters appear to be low in practice. Finally, we obtain refined results for several graph classes.
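The bounding mechanism underlying such certificates can be illustrated with the classical one-to-all distance argument: for any vertices v and c, d(v,c) ≤ ecc(v) ≤ d(v,c) + ecc(c). The sketch below (NetworkX, illustrative only; it uses these textbook bounds and does not implement the specific certificate notions of [19]) derives lower and upper bounds on all eccentricities from a handful of BFS runs.

```python
import networkx as nx

def eccentricity_bounds(G, probes):
    """Bounds on every vertex's eccentricity from one BFS per probe node,
    using d(v, c) <= ecc(v) <= d(v, c) + ecc(c) for each probe c."""
    lower = {v: 0 for v in G}
    upper = {v: float("inf") for v in G}
    for c in probes:
        dist = nx.single_source_shortest_path_length(G, c)  # one-to-all BFS
        ecc_c = max(dist.values())
        for v, d in dist.items():
            lower[v] = max(lower[v], d)
            upper[v] = min(upper[v], d + ecc_c)
    return lower, upper

G = nx.barabasi_albert_graph(1000, 2, seed=0)       # a connected test graph
lower, upper = eccentricity_bounds(G, probes=list(G)[:5])
exact = {v for v in G if lower[v] == upper[v]}      # eccentricities already certified
print(f"{len(exact)} / {G.number_of_nodes()} eccentricities determined exactly")
print("diameter is between", max(lower.values()), "and", max(upper.values()))
```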

Temporalizing Digraphs via Linear-Size Balanced Bi-Trees.

In a directed graph D on vertex set {v_1, ..., v_n}, a forward arc is an arc v_i v_j with i < j. A pair (v_i, v_j) is forward connected if there is a directed path from v_i to v_j consisting of forward arcs. In the Forward Connected Pairs Problem (FCPP), the input is a strongly connected digraph D, and the output is the maximum number of forward connected pairs in some vertex enumeration of D. In [12], we show that FCPP is in APX, as one can efficiently enumerate the vertices of D in order to achieve a quadratic number of forward connected pairs. For this, we construct a linear-size balanced bi-tree T (an out-branching and an in-branching with the same size and the same root which are vertex-disjoint, in the sense that they share no vertex apart from their common root). The existence of such a T was left as an open problem (Brunelli, Crescenzi, Viennot, Networks 2023) motivated by the study of temporal paths in temporal networks. More precisely, T can be constructed in quadratic time (in the number of vertices) and has size at least n/3. The algorithm involves a particular depth-first search tree (Left-DFS) of independent interest, and shows that every strongly connected directed graph has a balanced separator which is a circuit. Remarkably, in the request version RFCPP of FCPP, where the input is a strong digraph D and a set of requests R consisting of pairs {x_i, y_i}, there is no constant c > 0 such that one can always find an enumeration realizing c·|R| forward connected pairs {x_i, y_i} (in either direction).

Practical Computation of Graph VC-Dimension.

For any set system H = (V, R), with R ⊆ 2^V, a subset S ⊆ V is called shattered if every S' ⊆ S results from the intersection of S with some set in R. The VC-dimension of H is the size of a largest shattered set in V. In [18], we focus on the problem of computing the VC-dimension of graphs. In particular, given a graph G = (V, E), the VC-dimension of G is defined as the VC-dimension of (V, N), where N contains each subset of V that can be obtained as the closed neighborhood of some vertex v ∈ V in G. Our main contribution is an algorithm for computing the VC-dimension of any graph, whose effectiveness is shown through experiments on various types of practical graphs, including graphs with millions of vertices. A key aspect of its efficiency resides in the fact that practical graphs have small VC-dimension, up to 8 in our experiments. As a side-product, we present several new bounds relating the graph VC-dimension to other classical graph-theoretical notions. We also establish the W[1]-hardness of the graph VC-dimension problem by extending a previous result for arbitrary set systems.
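For small graphs, the definition can be checked directly by brute force; the exponential-time sketch below is for illustration only and is not the algorithm of [18]. It relies on the fact that shattering is closed under taking subsets, so the search can stop at the first size with no shattered set.

```python
from itertools import combinations
import networkx as nx

def graph_vc_dimension(G):
    """Brute-force VC-dimension of the set system of closed neighborhoods."""
    neighborhoods = [frozenset(G[v]) | {v} for v in G]   # closed neighborhoods
    vertices = list(G)

    def shattered(S):
        traces = {frozenset(S & N) for N in neighborhoods}
        return len(traces) == 2 ** len(S)     # every subset of S is realized

    vc = 0
    for k in range(1, len(vertices) + 1):
        if any(shattered(set(S)) for S in combinations(vertices, k)):
            vc = k
        else:
            break      # no shattered set of size k => none of size k+1 either
    return vc

print(graph_vc_dimension(nx.petersen_graph()))
# Returns 0: all closed neighborhoods equal V, so no singleton is shattered.
print(graph_vc_dimension(nx.complete_graph(6)))
```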

Forbidden Patterns in Temporal Graphs Resulting from Encounters in a Corridor.

In [4], we study temporal graphs arising from mobility models, where vertices correspond to agents moving in space and edges appear each time two agents meet. We propose a rather natural one-dimensional model. If each pair of agents meets exactly once, we get a simple temporal clique where the edges are ordered according to meeting times. In order to characterize which temporal cliques can be obtained as such 'mobility graphs', we introduce the notion of forbidden patterns in temporal graphs. Furthermore, using a classical result in combinatorics, we count the number of such mobility cliques for a given number of agents, and show that not every temporal clique resulting from the 1D model can be realized with agents moving with different constant speeds. For the analogous circular problem, where agents move along a circle, we provide a characterization via circular forbidden patterns. Our characterization in terms of forbidden patterns can be extended to the case where each edge appears at most once. We also study the problem where pairs of agents are allowed to cross each other several times, using an approach from automata theory. We observe that in this case, there is no finite set of forbidden patterns characterizing such temporal graphs, and we nevertheless give a linear-time algorithm to recognize temporal graphs arising from this model.

Making Temporal Betweenness Computation Faster and Restless.

Buß et al. [KDD 2020] recently proved that the problem of computing the betweenness of all nodes of a temporal graph is computationally hard in the case of foremost and fastest paths, while it is solvable in time O(n^3 T^2) in the case of shortest and shortest foremost paths, where n is the number of nodes and T is the number of distinct time steps. A new algorithm for temporal betweenness computation is introduced in [14]. In the case of shortest and shortest foremost paths, it requires O(n + M) space and runs in time O(nM) = O(n^3 T), where M is the number of temporal edges, thus significantly improving the algorithm of Buß et al. in terms of time complexity (note that T is usually large). Experimental evidence is provided that our algorithm performs between twice and almost 250 times better than the algorithm of Buß et al. Moreover, we were able to compute the exact temporal betweenness values of several large temporal graphs with over a million temporal edges. For graphs of that size, only approximate computation was previously possible, using the algorithm of Santoro and Sarpe [WWW 2022]. Maybe more importantly, our algorithm extends to the case of restless walks (that is, walks with waiting constraints in each node), thus providing a polynomial-time algorithm (with complexity O(nM)) for computing the temporal betweenness under several different optimality criteria. Such restless computation was previously known only for the shortest criterion (Rymar et al. [JGAA 2023]), with complexity O(n^2 M T^2). We performed an extensive experimental validation by comparing different waiting constraints and different optimisation criteria. Moreover, as a case study, we investigate six public transit networks including Berlin, Rome, and Paris. Overall, we find a general consistency between the different variants of betweenness centrality. However, we do measure a noticeable influence of waiting constraints, and note some cases of low correlation for certain pairs of criteria in some networks.

On the Complexity of Computing a Fastest Temporal Path in Interval Temporal Graphs.

Temporal graphs arise when modeling interactions that evolve over time. They usually come in several flavors, depending on the number of parameters used to describe the temporal aspects of the interactions: time of appearance, duration, delay of transmission. In the point model, edges appear at specific points in time, while in the more general interval model, edges can be present over multiple time intervals. In both models, the delay for traversing an edge can change with each edge appearance. When time is discrete, the two models are equivalent in the sense that the presence of an edge during an interval is equivalent to a sequence of point-in-time occurrences of the edge. However, this transformation can drastically change the size of the input and raises complexity issues. Indeed, in [37], we show a gap between the two models with respect to the complexity of the classical problem of computing a fastest temporal path from a source vertex to a target vertex, i.e. a path where edges can be traversed one after another in time and such that the total duration from source to target is minimized. It can be solved in near-linear time in the point model, while we show that the interval model requires quadratic time under classical assumptions of fine-grained complexity. With respect to linear time, our lower bound implies an extra factor of the number of vertices, while the best known algorithm has an extra factor of the number of underlying edges. Interestingly, we show that near-linear time is possible in the interval model when all delays are zero, i.e. when traversing an edge is instantaneous.

7.1.3 Stochastic matching and queueing networks

On the sub-additivity of stochastic matching.

In [7], we consider a stochastic matching model with a general compatibility graph. We prove that most common matching policies (including FCFM, priorities and random) satisfy a particular sub-additive property, which we exploit to show, in many cases, coupling-from-the-past convergence to the steady state using a backwards scheme à la Loynes. We then use these results to explicitly construct perfect bi-infinite matchings, and to build a perfect simulation algorithm in the case where the buffer of the system is finite.

Performance Paradox of Dynamic Matching Models under Greedy Policies.

In [1], we consider the stochastic matching model on a non-bipartite compatibility graph and analyze the impact of adding an edge on the expected number of items in the system. One may see adding an edge as increasing the flexibility of the system, for example asking a family registering for social housing to list fewer requirements in order to be compatible with more housing units. It may therefore seem natural to expect that adding edges to the compatibility graph will decrease the expected number of items in the system and the waiting time to be assigned. In our previous work, we proved this is not always true for the First Come First Matched discipline and provided sufficient conditions for the existence of the performance paradox: despite a new edge in the compatibility graph, the expected total number of items can increase. These sufficient conditions are related to heavy-traffic assumptions in queueing systems. The intuition behind this is that the performance paradox occurs when the added edge in the compatibility graph disrupts the draining of a bottleneck. In this paper, we generalize this performance paradox result to a family of so-called greedy matching policies and explore the types of compatibility graphs where such a paradox occurs. Intuitively, a greedy matching policy never leaves compatible items unassigned, so the state space of the system consists of finite words of item classes that belong to an independent set of the compatibility graph. Examples of greedy matching policies include First Come First Matched, Match the Longest, Match the Shortest, Random, and Priority. We prove several results about the existence of performance paradoxes for greedy disciplines on some families of graphs. More precisely, we prove several results about the lifting of the paradox from one graph to another. For a certain family of graphs, we prove that the paradox exists for the whole family of greedy policies. Most of these results are based on strong aggregation of Markov chains and graph-theoretical properties.
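A toy simulation of the First Come First Matched dynamics is sketched below (illustrative only: the 5-cycle compatibility graph, the arrival probabilities and the horizon are arbitrary, and nothing is claimed here about where the paradox does or does not occur).

```python
import random
from collections import deque

def simulate_fcfm(compat, arrival_probs, horizon, seed=0):
    """FCFM stochastic matching: an arriving item is matched with the oldest
    compatible item in the buffer, if any; otherwise it joins the buffer.
    Returns the time-averaged number of unmatched items."""
    rng = random.Random(seed)
    classes = list(arrival_probs)
    weights = [arrival_probs[c] for c in classes]
    buffer = deque()                       # unmatched items, oldest first
    total = 0
    for _ in range(horizon):
        c = rng.choices(classes, weights)[0]
        for item in buffer:                # scan from oldest to newest
            if item in compat[c]:
                buffer.remove(item)        # match with the oldest compatible item
                break
        else:
            buffer.append(c)               # no compatible item: join the buffer
        total += len(buffer)
    return total / horizon

# Non-bipartite compatibility graph: a 5-cycle on classes {1, ..., 5}
compat = {1: {2, 5}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {1, 4}}
compat_extra = {k: set(v) for k, v in compat.items()}
compat_extra[1].add(3); compat_extra[3].add(1)        # add the chord {1, 3}
probs = {c: 0.2 for c in compat}

for name, g in [("5-cycle", compat), ("5-cycle + {1,3}", compat_extra)]:
    print(name, simulate_fcfm(g, probs, horizon=200_000))
```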

Strong Aggregation in the Stochastic Matching Model with Random Discipline.

In [21], we consider a stochastic matching model with a general compatibility graph with self-loops on every node and a random matching policy. We consider the discrete-time Markov chain associated with such a model, where arrivals of items are independent and identically distributed. Due to the self-loops in the compatibility graph, the states of this chain are exactly the independent sets of the graph. We prove that this chain is ordinarily lumpable if the automorphism group of the compatibility graph is non-trivial. Additionally, we demonstrate how to construct the partition associated with strong aggregation based on certain subgroups of the automorphism group. This approach can efficiently reduce the size of the state space, which could be as large as exponential in the number of nodes of the compatibility graph before aggregation. Finally, we illustrate this methodology with examples based on simple compatibility graphs, such as rings, and the group of rotations.

Dynamic load balancing in energy packet networks.

Energy Packet Networks (EPNs) model the interaction between renewable sources generating energy according to a random process and communication devices that consume energy. The network is formed by cells and, in each cell, there is a queue that handles energy packets and another queue that handles data packets. In [3], we assume Poisson arrivals of energy packets and of data packets to all the cells and exponential service times. We consider an EPN model with dynamic load balancing, where a cell without data packets can poll other cells to migrate jobs. This migration can only take place when there is enough energy in both interacting cells, in which case a batch of data packets is transferred and the required energy is consumed (i.e. it disappears). We also consider that data packets consume energy to be routed to the next station. Our main result shows that the steady-state distribution of jobs in the queues admits a product-form solution provided that a stable solution of a fixed-point equation exists. We prove sufficient conditions for irreducibility. Under these conditions, and when the fixed-point equation has a solution, the Markov chain is ergodic. We also provide sufficient conditions for the existence of a solution of the fixed-point equation. We then focus on layered networks and study the polling rates that must be set to achieve a fair load balancing, i.e., such that, within the same layer, the load of the queues handling data packets is the same. Our numerical experiments illustrate that dynamic load balancing satisfies several interesting properties such as performance improvement and fair load balancing.

7.2 Deep Learning on structured data and new architectures for learning

NLIR: Natural Language Intermediate Representation for Mechanized Theorem Proving.

Formal theorem proving is challenging for humans as well as for machines. Thanks to recent advances in LLM capabilities, we believe natural language can serve as a universal interface for reasoning about formal proofs. In the paper [30]: (1) we introduce Pétanque, a new lightweight environment to interact with the Coq theorem prover; (2) we present two interactive proof protocols leveraging natural language as an intermediate representation for designing proof steps; (3) we implement beam search over these interaction protocols, using natural language to rerank proof candidates; and (4) we use Pétanque to benchmark our search algorithms. Using our method with GPT-4o, we can successfully synthesize proofs for 58% of the first 100/260 lemmas from the newly published Busy Beaver proofs.

Neural Incremental Data Assimilation.

Data assimilation is a central problem in many geophysical applications, such as weather forecasting. It aims to estimate the state of a potentially large system, such as the atmosphere, from sparse observations, supplemented by prior physical knowledge. The size of the systems involved and the complexity of the underlying physical equations make it a challenging task from a computational point of view. Neural networks represent a promising method of emulating the physics at low cost, and therefore have the potential to considerably improve and accelerate data assimilation. In [38], we introduce a deep learning approach where the physical system is modeled as a sequence of coarse-to-fine Gaussian prior distributions parametrized by a neural network. This allows us to define an assimilation operator, which is trained in an end-to-end fashion to minimize the reconstruction error on a dataset with different observation processes. We illustrate our approach on chaotic dynamical physical systems with sparse observations, and compare it to traditional variational data assimilation methods.

Random Sparse Lifts: Construction, Analysis and Convergence of finite sparse networks.

In [27], we present a framework to define a large class of neural networks for which, by construction, training by gradient flow provably reaches arbitrarily low loss when the number of parameters grows. Distinct from the fixed-space global optimality of non-convex optimization, this new form of convergence, and the techniques introduced to prove such convergence, pave the way for a usable deep learning convergence theory in the near future, without overparameterization assumptions relating the number of parameters and training samples. We define these architectures from a simple computation graph and a mechanism to lift it, thus increasing the number of parameters, generalizing the idea of increasing the widths of multi-layer perceptrons. We show that architectures similar to most common deep learning models are present in this class, obtained by sparsifying the weight tensors of usual architectures at initialization. Leveraging tools of algebraic topology and random graph theory, we use the computation graph's geometry to propagate properties guaranteeing convergence to any precision for these large sparse models.

Interpretable Meta-Learning of Physical Systems

Machine learning methods can be a valuable aid in the scientific process, but they need to face challenging settings where data come from inhomogeneous experimental conditions. Recent meta-learning methods have made significant progress in multi-task learning, but they rely on black-box neural networks, resulting in high computational costs and limited interpretability. In [13], we introduce CAMEL, a new meta-learning architecture capable of learning efficiently from multiple environments, with an affine structure with respect to the learning task. We prove that CAMEL can identify the physical parameters of the system, enabling interpretable learning. We demonstrate the competitive generalization performance and the low computational cost of our method by comparing it to state-of-the-art algorithms on physical systems, ranging from toy models to complex, non-analytical systems. The interpretability of our method is illustrated with original applications to parameter identification and to adaptive control and system identification.

7.3 Distributed optimization and control

7.3.1 Federated learning and decentralized optimization

Improved Stability and Generalization Guarantees of the Decentralized SGD Algorithm.

The paper [23] presents a new generalization error analysis for Decentralized Stochastic Gradient Descent (D-SGD) based on algorithmic stability. The obtained results overhaul a series of recent works that suggested an increased instability due to decentralization and a detrimental impact of poorly-connected communication graphs on generalization. On the contrary, we show, for convex, strongly convex and non-convex functions, that D-SGD can always recover generalization bounds analogous to those of classical SGD, suggesting that the choice of graph does not matter. We then argue that this result is coming from a worst-case analysis, and we provide a refined optimization-dependent generalization bound for general convex functions. This new bound reveals that the choice of graph can in fact improve the worst-case bound in certain regimes, and that surprisingly, a poorly-connected graph can even be beneficial for generalization.
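For reference, a minimal D-SGD iteration on a toy least-squares problem is sketched below (the ring communication graph, Metropolis-style mixing weights and step size are illustrative choices, not the setting analyzed in [23]).

```python
import numpy as np

def dsgd(local_data, W, gamma=0.05, iters=500, seed=0):
    """Decentralized SGD: each node takes a stochastic gradient step on its
    local least-squares loss, then averages its iterate with its neighbors
    through the doubly stochastic mixing matrix W."""
    rng = np.random.default_rng(seed)
    n_nodes = len(local_data)
    d = local_data[0][0].shape[1]
    X = np.zeros((n_nodes, d))                     # one iterate per node
    for _ in range(iters):
        grads = np.zeros_like(X)
        for i, (A, b) in enumerate(local_data):
            j = rng.integers(len(b))               # sample one local data point
            grads[i] = (A[j] @ X[i] - b[j]) * A[j]
        X = W @ (X - gamma * grads)                # local step + gossip averaging
    return X

# Ring of 5 nodes: weight 1/3 on self and on each of the two neighbors
n = 5
W = np.zeros((n, n))
for i in range(n):
    for j in (i, (i - 1) % n, (i + 1) % n):
        W[i, j] = 1 / 3

rng = np.random.default_rng(1)
x_star = rng.normal(size=10)
data = []
for _ in range(n):
    A = rng.normal(size=(100, 10))
    b = A @ x_star + 0.1 * rng.normal(size=100)    # heterogeneous local datasets
    data.append((A, b))

X = dsgd(data, W)
print("max distance of node iterates to x*:", np.abs(X - x_star).max())
```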

SIFU: Sequential Informed Federated Unlearning for Efficient and Provable Client Unlearning in Federated Optimization.

Machine Unlearning (MU) is an increasingly important topic in machine learning safety, aiming at removing the contribution of a given data point from a training procedure. Federated Unlearning (FU) consists in extending MU to unlearn a given client's contribution from a federated training routine. While several FU methods have been proposed, we currently lack a general approach providing formal unlearning guarantees to the FedAvg routine, while ensuring scalability and generalization beyond the convex assumption on the clients' loss functions. In [22], we aim at filling this gap by proposing SIFU (Sequential Informed Federated Unlearning), a new FU method applying to both convex and non-convex optimization regimes. SIFU naturally applies to FedAvg without additional computational cost for the clients and provides formal guarantees on the quality of the unlearning task. We provide a theoretical analysis of the unlearning properties of SIFU, and practically demonstrate its effectiveness as compared to a panel of unlearning methods from the state-of-the-art.

In-depth Analysis of Low-rank Matrix Factorisation in a Federated Setting.

In [26], we analyze a distributed algorithm to compute a low-rank matrix factorization on N clients, each holding a local dataset S_i ∈ R^{n_i × d}. Mathematically, we seek to solve min over U_i ∈ R^{n_i × r} and V ∈ R^{d × r} of (1/2) Σ_{i=1}^N ||S_i - U_i V^T||_F^2. Considering a power initialization of V, we rewrite the previous smooth non-convex problem into a smooth strongly-convex problem that we solve using a parallel Nesterov gradient descent, potentially requiring a single step of communication at the initialization step. For any client i in {1, ..., N}, we obtain a global V in R^{d × r} common to all clients and a local variable U_i in R^{n_i × r}. We provide a linear rate of convergence of the excess loss which depends on σ_max/σ_r, where σ_r is the r-th singular value of the concatenation S of the matrices (S_i)_{i=1,...,N}. This result improves the rates of convergence given in the literature, which depend on σ_max^2/σ_min^2. We provide an upper bound on the Frobenius-norm error of reconstruction under the power initialization strategy. We complete our analysis with experiments on both synthetic and real data.
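A minimal sketch of the power-initialization idea (illustrative only: it performs plain federated subspace iteration followed by a local closed-form choice of U_i, not the parallel Nesterov scheme analyzed in [26]; the synthetic data generation is an assumption of this sketch):

```python
import numpy as np

def federated_low_rank(S_list, r, power_iters=10, seed=0):
    """Compute S_i ~ U_i V^T with a shared V (d x r) and local U_i (n_i x r).
    V is obtained by federated subspace iteration on sum_i S_i^T S_i:
    each client only communicates the d x r matrix S_i^T (S_i V)."""
    rng = np.random.default_rng(seed)
    d = S_list[0].shape[1]
    V, _ = np.linalg.qr(rng.normal(size=(d, r)))      # random orthonormal start
    for _ in range(power_iters):
        agg = sum(S.T @ (S @ V) for S in S_list)      # server aggregates
        V, _ = np.linalg.qr(agg)                      # re-orthonormalize
    U_list = [S @ V for S in S_list]                  # local closed-form U_i
    return U_list, V

# Synthetic clients sharing a common rank-r right factor
rng = np.random.default_rng(1)
d, r = 40, 3
V_true = np.linalg.qr(rng.normal(size=(d, r)))[0]
S_list = [rng.normal(size=(n_i, r)) @ V_true.T + 0.01 * rng.normal(size=(n_i, d))
          for n_i in (50, 80, 120)]

U_list, V = federated_low_rank(S_list, r)
err = sum(np.linalg.norm(S - U @ V.T, "fro") ** 2 for S, U in zip(S_list, U_list))
tot = sum(np.linalg.norm(S, "fro") ** 2 for S in S_list)
print("relative reconstruction error:", err / tot)
```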

Minimax Excess Risk of First-Order Methods for Statistical Learning with Data-Dependent Oracles.

In [28], our aim is to analyse the generalization capabilities of first-order methods for statistical learning in multiple, different yet related, scenarios including supervised learning, transfer learning, robust learning and federated learning. To do so, we provide sharp upper and lower bounds for the minimax excess risk of strongly convex and smooth statistical learning when the gradient is accessed through partial observations given by a data-dependent oracle. This novel class of oracles can query the gradient with any given data distribution, and is thus well suited to scenarios in which the training data distribution does not match the target (or test) distribution. In particular, our upper and lower bounds are proportional to the smallest mean square error achievable by gradient estimators, thus allowing us to easily derive multiple sharp bounds in the aforementioned scenarios using the extensive literature on parameter estimation.

Marginal and training-conditional guarantees in one-shot federated conformal prediction.

In [39], we study conformal prediction in the one-shot federated learning setting. The main goal is to compute marginally and training-conditionally valid prediction sets, at the server-level, in only one round of communication between the agents and the server. Using the quantile-of-quantiles family of estimators and split conformal prediction, we introduce a collection of computationally-efficient and distribution-free algorithms that satisfy the aforementioned requirements. Our approaches come from theoretical results related to order statistics and the analysis of the Beta-Beta distribution. We also prove upper bounds on the coverage of all proposed algorithms when the nonconformity scores are almost surely distinct. For algorithms with training-conditional guarantees, these bounds are of the same order of magnitude as those of the centralized case. Remarkably, this implies that the one-shot federated learning setting entails no significant loss compared to the centralized case. Our experiments confirm that our algorithms return prediction sets with coverage and length similar to those obtained in a centralized setting.
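The one-round mechanism can be sketched as follows (illustrative only: the precise local and global quantile levels that yield the marginal and training-conditional guarantees of [39] are part of the paper's analysis and are not reproduced here; the median aggregation below is a naive placeholder).

```python
import numpy as np

def local_quantile(scores, alpha):
    """Standard split-conformal quantile on one agent's calibration scores."""
    n = len(scores)
    k = int(np.ceil((1 - alpha) * (n + 1)))           # order statistic index
    return np.sort(scores)[min(k, n) - 1]

def one_shot_threshold(agent_scores, alpha):
    """One round of communication: each agent sends one local quantile,
    the server aggregates them with a quantile-of-quantiles (median here,
    a naive placeholder for the calibrated choice of [39])."""
    local_qs = [local_quantile(s, alpha) for s in agent_scores]
    return np.median(local_qs)

# Toy regression example: nonconformity score = |y - prediction|
rng = np.random.default_rng(0)
agent_scores = [np.abs(rng.normal(size=n)) for n in (200, 300, 150, 250)]
q = one_shot_threshold(agent_scores, alpha=0.1)
print("prediction set for a new point x: [f(x) - q, f(x) + q] with q =", round(q, 3))
```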

7.3.2 Online learning

Barely Random Algorithms and Collective Metrical Task Systems.

In [16], we consider metrical task systems on general metric spaces with n points, and show that any fully randomized algorithm can be turned into a randomized algorithm that uses only 2 log n random bits, and achieves the same competitive ratio up to a factor 2. This provides the first order-optimal barely random algorithms for metrical task systems, i.e., algorithms which use a number of random bits that does not depend on the number of requests addressed to the system. We discuss implications on various aspects of online decision-making such as distributed systems, advice complexity, and transaction costs, suggesting broad applicability. We put forward an equivalent view that we call collective metrical task systems, where k agents in a metrical task system team up and suffer the average cost paid by each agent. Our results imply that such a team can be O(log^2 n)-competitive as soon as k ≥ n^2. In comparison, a single agent is always Ω(n)-competitive.

7.3.3 Markov decision processes and reinforcement learning

Reinforcement learning and regret bounds for admission control.

The expected regret of any reinforcement learning algorithm is lower bounded by Ω(√(DXAT)) for undiscounted returns, where D is the diameter of the Markov decision process, X the size of the state space, A the size of the action space and T the number of time steps. However, this lower bound is general. A smaller regret can be obtained by taking into account some specific knowledge of the problem structure. In [31], we consider an admission control problem for an M/M/c/S queue with m job classes and class-dependent rewards and holding costs. Queueing systems often have a diameter that is exponential in the buffer size S, making the previous lower bound prohibitive for any practical use. We propose an algorithm inspired by UCRL2, and use the structure of the problem to upper bound the expected total regret by O(S log T + √(mT log T)) in the finite server case. In the infinite server case, we prove that the dependence of the regret on S disappears.

Finding the Optimal Policy to Provide Energy for an Off-Grid Telecommunication Operator.

In [10], we analyze a networking system powered by solar panels, where the harvested energy is stored in a battery that can also be sold when fully charged. The network operator thus faces dual objectives: maintaining the functionality of its infrastructure and selling (or supplying to other networks) the filled batteries. These two goals are contradictory, as selling the battery's energy may result in operational disruptions (e.g., packet delays) during certain periods. To address these challenges, we develop a Markov Decision Process (MDP) model that integrates positive rewards for battery release as well as penalties for energy packet loss and battery depletion. From this model, we derive the optimal policy that balances these conflicting objectives and maximizes an average reward function. We advocate that exploiting the particular structure of the MDP enhances the efficiency and precision of the numerical analysis. We provide numerical comparisons from small-scale to large-scale models and present a detailed analysis of agent behavior under the optimal policy.

7.3.4 Learning and control for energy networks

Wind farm control with cooperative multi-agent reinforcement learning.

Maximizing the energy production in wind farms requires mitigating wake effects, a phenomenon by which wind turbines create sub-optimal wind conditions for the turbines located downstream. Finding optimal control strategies is however challenging, as high-fidelity models predicting complex aerodynamics are not tractable for optimization. Good experimental results have been obtained by framing wind farm control as a cooperative multi-agent reinforcement learning problem. In particular, several experiments have used an independent learning approach, leading to a significant increase of power output in simulated farms. Despite this empirical success, the independent learning approach has no convergence guarantee due to non-stationarity. In [25, 40], we show that the wind farm control problem can be framed as an instance of a transition-independent Decentralized Partially Observable Markov Decision Process (Dec-POMDP) where the interdependence of the agents' dynamics can be represented by a directed acyclic graph (DAG). We show that for these problems, non-stationarity can be mitigated by a multi-scale approach, and that a multi-scale Q-learning algorithm (MQL), where agents update local Q-learning iterates at different timescales, is guaranteed to converge.

Towards fine tuning wake steering policies in the field: an imitation-based approach.

Yaw misalignment strategies can increase the power output of wind farms by mitigating wake effects, but finding optimal yaws requires overcoming both modeling errors and the growing complexity of the problem as the size of the farm grows. Recent works have therefore proposed decentralized multi-agent reinforcement learning (MARL) as a model-free, data-based alternative for learning online. These solutions have led to significant increases in total power production in experiments with both static and dynamic wind farm simulators. Yet experiments in dynamic simulations suggest that convergence time remains too long for online learning on real wind farms. As an improvement, baseline policies obtained by optimizing offline through steady-state models can be fed as inputs to an online reinforcement learning algorithm. This method, however, does not guarantee a smooth transfer of the policies to the real wind farm. This is aggravated when using function approximation approaches such as multi-layer neural networks to estimate policies and value functions. In [2], we propose an imitation approach, where learning a policy is first cast as a supervised learning problem with references derived from steady-state wind farm models, and then as an online reinforcement learning task for adaptation in the field. This approach leads to significant increases in the amount of energy produced over a lookup table (LUT) baseline in experiments with the mid-fidelity dynamic simulator FAST.Farm, under both static and varying wind conditions.

A decentralized algorithm for a Mean Field Control problem of Piecewise Deterministic Markov Processes.

The paper [8] provides a decentralized approach for the control of a population of N agents so as to minimize an aggregate cost. Each agent evolves independently according to Piecewise Deterministic Markov dynamics controlled via unbounded jump intensities. The N-agent high-dimensional stochastic control problem is approximated by the limiting mean field control problem. A Lagrangian approach is proposed. Although the mean field control problem is not convex, it is proved to achieve zero duality gap. A stochastic version of the Uzawa algorithm is shown to converge to the primal solution. At each dual iteration of the algorithm, each agent solves its own small-dimensional subproblem by means of the Dynamic Programming Principle, while the dual multiplier is updated according to the aggregate response of the agents. Finally, this algorithm is used in a numerical simulation to coordinate the charging of a large fleet of electric vehicles in order to track a target consumption profile.
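The coordination principle can be illustrated on a deterministic, finite-population caricature (a dual-decomposition/Uzawa iteration with quadratic local costs and box constraints, all of which are assumptions of this sketch rather than the stochastic mean-field algorithm of [8]).

```python
import numpy as np

def coordinate_charging(prefs, target, u_max, rho=0.005, iters=500):
    """Dual decomposition (Uzawa iteration): a price signal `lam` per time step
    is broadcast; each agent solves its own small problem in closed form; the
    price is updated from the aggregate response only."""
    n_agents, T = prefs.shape
    lam = np.zeros(T)                                   # dual multiplier / price
    for _ in range(iters):
        # local subproblems: argmin_u 0.5*||u - pref||^2 + lam.u over [0, u_max]^T
        U = np.clip(prefs - lam, 0.0, u_max)
        lam = lam + rho * (U.sum(axis=0) - target)      # Uzawa dual ascent
    return U, lam

rng = np.random.default_rng(0)
n_agents, T = 200, 24                                   # 200 vehicles, hourly slots
prefs = rng.uniform(0.0, 1.0, size=(n_agents, T))       # uncoordinated preferences
target = np.full(T, 0.4 * n_agents)                     # flat consumption profile
U, lam = coordinate_charging(prefs, target, u_max=1.0)
print("max deviation from target profile:", np.abs(U.sum(axis=0) - target).max())
```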

Forecast Trading as a Means to Reach Social Optimum on a Peer-to-Peer Market.

The paper 29 investigates the coupling between a peer-to-peer (P2P) electricity market and a forecast market, aimed at alleviating the uncertainty faced by prosumers regarding their renewable energy source (RES) generation. The work generalizes the analysis from Gaussian-distributed RES production to arbitrary distributions. P2P trading is modeled as a generalized Nash equilibrium problem in which prosumers trade energy in a decentralized manner. Each agent has the option to purchase a forecast on the forecast market before trading on the electricity market. We establish conditions on arbitrary probability density functions (pdfs) under which the prosumers have an incentive to purchase forecasts on the forecast market. Combined with previous results, this allows us to prove the economic efficiency of the P2P electricity market, i.e., that a social optimum can be reached among the prosumers.

7.4 Software and datasets

WFCRL: A Multi-Agent Reinforcement Learning Benchmark for Wind Farm Control.

The wind farm control problem is challenging, since conventional model-based control strategies require tractable models of the complex aerodynamic interactions between turbines and suffer from the curse of dimensionality as the number of turbines grows. Recently, model-free and multi-agent reinforcement learning approaches have been used to address this challenge. In 24, we introduce WFCRL (Wind Farm Control with Reinforcement Learning), the first open suite of multi-agent reinforcement learning environments for the wind farm control problem. WFCRL frames wind farm control as a cooperative Multi-Agent Reinforcement Learning (MARL) problem: each turbine is an agent and can learn to adjust its yaw, pitch or torque to maximize a common objective (e.g. the total power production of the farm). WFCRL also offers turbine load observations, which make it possible to optimize farm performance while limiting structural damage to the turbines. Interfaces with two state-of-the-art farm simulators are implemented in WFCRL: a static simulator (FLORIS) and a dynamic simulator (FAST.Farm). For each simulator, 10 wind layouts are provided, including 5 real wind farms. Two state-of-the-art online MARL algorithms are implemented to illustrate the scaling challenges. As learning online in FAST.Farm is highly time-consuming, WFCRL offers the possibility of designing transfer learning strategies from FLORIS to FAST.Farm.
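
The sketch below shows the kind of cooperative interaction loop such environments expose: a toy multi-turbine environment in which each agent observes the local wind, picks a yaw, and receives a shared power-like reward. All identifiers are hypothetical stand-ins and do not correspond to the actual WFCRL API.

    import numpy as np

    # Toy cooperative multi-turbine environment and interaction loop. This is a
    # hypothetical stand-in, not the WFCRL API: each agent observes the local wind,
    # chooses a yaw, and all agents share a common power-like reward.
    class ToyFarmEnv:
        def __init__(self, n_turbines=3, seed=0):
            self.n = n_turbines
            self.rng = np.random.default_rng(seed)

        def reset(self):
            self.wind = self.rng.uniform(0.0, 1.0)
            return [np.array([self.wind]) for _ in range(self.n)]

        def step(self, yaws):
            # Shared objective: every agent receives the same (toy) power signal.
            power = -sum((y - 0.4 * self.wind) ** 2 for y in yaws)
            return self.reset(), [power] * self.n, False

    rng = np.random.default_rng(1)
    env = ToyFarmEnv()
    obs = env.reset()
    for t in range(100):
        yaws = [0.4 * float(o[0]) + 0.01 * rng.standard_normal() for o in obs]  # toy decentralized policies
        obs, rewards, done = env.step(yaws)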

Kreyòl-MT: Building MT for Latin American, Caribbean and Colonial African Creole Languages.

Claire Bizon-Monroc is a co-author of a paper describing a dataset for Creole language MT 9. A majority of language technologies are tailored for a small number of high-resource languages, while relatively many low-resource languages are neglected. One such group, Creole languages, have long been marginalized in academic study, though their speakers could benefit from machine translation (MT). These languages are predominantly used in much of Latin America, Africa and the Caribbean. We present the largest cumulative dataset to date for Creole language MT, including 14.5M unique Creole sentences with parallel translations – 11.6M of which we release publicly, and the largest bitexts gathered to date for 41 languages – the first ever for 21. In addition, we provide MT models supporting all 41 Creole languages in 172 translation directions. Given our diverse dataset, we produce a model for Creole language MT exposed to more genre diversity than ever before, which outperforms a genre-specific Creole MT model on its own benchmark for 26 of 34 translation directions.

8 Bilateral contracts and grants with industry

8.1 Bilateral contracts with industry

Participants: Laurent Massoulié, Maxime Leiber.

CIFRE PhD thesis of Maxime Leiber with SAFRAN.

9 Partnerships and cooperations

9.1 International research visitors

9.1.1 Visits of international scientists

Inria International Chair

Participants: Sean Meyn, ARGO and SIERRA teams.

Prof. Sean Meyn (University of Florida) holds an Inria International Chair from 2019 to 2025.

9.2 National initiatives

9.2.1 Project REDEEM, PEPR IA

Project REDEEM aims to explore new distributed learning approaches that are resilient, robust to noise and adversarial attacks, and respectful of privacy. These distributed approaches should make it possible to go beyond current federated learning. From a theoretical point of view, REDEEM aims to provide a solid foundation for the proposed approaches, in particular in the presence of malicious participants in the learning phase, and with the overriding objective of ensuring data confidentiality as far as possible. In addition to new approaches to distributed learning, REDEEM also aims for efficient implementations, offering the community open-source code and tools.

9.2.2 Project AI-NRGY, PEPR TASE

The objective of AI-NRGY is to address the major constraints of tomorrow's energy networks (highly distributed, dynamic, heterogeneous, critical and sometimes volatile). The project contributes to distributed intelligence solutions leveraging different computing paradigms (edge, fog and cloud computing), and proposes a software architecture together with the methods, models and algorithms needed to implement distributed intelligence solutions that can accelerate the digitalization of energy networks.

9.2.3 Challenge Inria FedMalin

Members of ARGO participate in the FedMalin Inria défi on Federated Learning.

9.2.4 Challenge INRIA-EDF

Ashok Krishnan Komalan Sindhu is a post-doc within the INRIA-EDF challenge on managing tomorrow's power systems, collaborating with A. Bušić and Hélène Le Cadre (INOCS, INRIA Lille).

The challenge entitled “Managing tomorrow's power systems” aims to imagine and develop new tools (methods, regulatory approaches, algorithms, software) and new sources of information (meters, sensors at higher temporal and spatial resolutions, weather forecasts, electric vehicle charging stations, etc.) to support strategic and operational decisions for the economical, ecological and resilient management of new power systems in the context of the ecological transition and climate change.

More specifically, the operational implementation of the scenarios defined by RTE for the evolution of the French power system requires the development of mechanisms and tools to enable decisions to be taken from the long term to the short term.

9.2.5 Challenge LLM4CODE

M. Lelarge participates in the challenge LLM4CODE: Reliable and Productive Code Assistants Based on Large Language Models.

Generative AI, and in particular the recent Large Language Models (LLMs), shows great promise for software development. Specialized models are now able to perform an impressive variety of programming tasks: solving programming exercises, assisting software developers, or even generating mechanized proofs. Yet many challenges still need to be addressed to build reliable and productive LLM-based coding assistants: improving the quality of the generated code, increasing developers' confidence in the generated code, enabling interaction with other software development tools (verification, testing), and providing new capabilities (automated migration and evolution of software).

The goal of the Challenge Inria LLM4Code is to leverage LLM capabilities to build code assistants that can enhance both reliability and productivity.

9.2.6 Joint IFPEN - Inria laboratory

The joint laboratory IFPEN - Inria “Convergence HPC / AI / HPDA for the energy transition” is a strategic partnership between IFPEN and Inria, established as a “contrat-cadre” and started in June 2020 in continuation of previous collaborations. The goal of this laboratory without walls is to promote collaborations between the two institutions around the energy transition. The two organizations fund PhD and post-doctoral projects through an annual call for proposals.

Two co-supervised PhD theses within the IFPEN - Inria laboratory:

  • Claire Bizon Monroc, defended in November 2024, co-supervised by A. Bušić, Donatien Dubuc (IFPEN), and Jiamin Zhu (IFPEN).
  • Baptiste Corban, started in November 2024, co-supervised by A. Bušić, Donatien Dubuc (IFPEN), and Jiamin Zhu (IFPEN).

Patent application: 41.

9.2.7 GdR ROD

A. Bušić is coordinating, with E. Hyon (LIP 6), the working group COSMOS (Stochastic optimization and control, modeling and simulation) of the GdR-ROD (Recherche Opérationnelle et Décision).

9.3 Regional initiatives

9.3.1 PRAIRIE Institute

Participants: Marc Lelarge, Laurent Massoulié.

The Prairie Institute (PaRis AI Research InstitutE) is one of the four French Institutes for Interdisciplinary Artificial Intelligence Research (3IA). It brings together five academic partners (CNRS, Inria, Institut Pasteur, PSL University, and University of Paris) as well as 13 industrial partners.

10 Dissemination

Participants: All ARGO.

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

Member of the organizing committees

10.1.2 Invited talks

10.1.3 Research administration

M. Lelarge is a member of the Steering Committee of the Fondation Sciences Mathématiques de Paris (FSMP).

A. Bušić is a member of the laboratory board of DIENS.

L. Viennot is a member of the committee for health and working conditions (FSS) at the Inria Paris research center.

10.2 Teaching - Supervision - Juries

10.2.1 Teaching

Most of the permanent researchers teach and are responsible for courses proposed within DIENS at the L3/M1 level and in M2 master programs at PSL and/or Paris Saclay: MASH, MASEF, MPRI, MVA.

L. Budzynski is Maître de Conférences (MdC) at DIENS, École normale supérieure, PSL.

A. Bušić is Professeur Attaché (IA) at PSL University.

M. Lelarge is Professeur Attaché at ENS, PSL.

Marc Lelarge is responsible for the « cursus maths-info » at ENS.

Some courses in 2024:

  • L. Viennot teaches "Theory of practical graph algorithms" (12h), M2 level, MPRI master program.
  • A. Bušić and L. Massoulié teach "Foundations of network models" (24h), M2 level, MPRI
  • A. Bušić teaches "Reinforcement learning" (24h), M2 level MASH/MASEF
  • A. Bušić teaches "Modèles et algorithmes des réseaux" (24h), M1 level DI ENS
  • A. Bušić and L. Budzynski teach "Structures et algorithmes aléatoires" (24h), L3 level, DI ENS
  • A. Bušić teaches an introductory course on reinforcement learning (18h) within the PSL "Data science & AI for academics" training programme, aimed at a broad scientific audience (all disciplines within PSL).
  • Marc Lelarge teaches in the course « Préparation de l'agrégation d'informatique »
  • Marc Lelarge teaches a machine learning course in the ICFP master program

10.2.2 Supervision

PhD theses defended:

  • Claire Bizon Monroc, Multi-agent reinforcement learning for dynamic wind farm control 36, defended in November 2024, supervised by A. Bušić, Donatien Dubuc (IFPEN), and Jiamin Zhu (IFPEN).
  • Mathieu Even, Towards Decentralization, Asynchrony, Privacy and Personalization in Federated Learning 34, defended in June 2024, supervised by L. Massoulié.
  • Maxime Leiber, Adaptive time-frequency analysis and data normalization : contributions to monitoring under varying conditions 35, defended in February 2024, supervised by L. Massoulié.
  • Matthieu Blanke, Deep Learning for Physical Systems: Estimation, Adaptation and Exploration 32, defended in September 2024, supervised by M. Lelarge.

PhD theses in progress:

  • Romain Cosson, since Sep 2022, supervised by Laurent Massoulié
  • Thomas Le Corre, since Nov 2021, supervised by A. Bušić
  • Jakob Maier, since Oct 2022, supervised by Laurent Massoulié
  • David Robin, since Oct 2021, supervised by Laurent Massoulié
  • Lucas Weber, since Oct 2021, supervised by A. Bušić and J. Zhu (IFPEN)
  • Killian Bakong Epoune, since Sep 2023, supervised by L. Massoulié and K. Scaman
  • Baptiste Corban (IFPEN), since Nov 2024, supervised by A. Bušić, D. Dubuc (IFPEN) and J. Zhu (IFPEN)
  • Jean Adrien Lagesse, since Sep 2023, supervised by M. Lelarge
  • Shu Li, since Sep 2023, supervised by A. Bušić.
  • Jules Sintes, since Nov 2024, supervised by A. Bušić.
  • Martin Van Waerebeke, supervised by K. Scaman
  • Jules Viennot, since Sep 2024, supervised by M. Lelarge

Marc Lelarge supervised the internships of Maxime Muhlethaler and Pierre-Gabriel Berlureau on diffusions for graph neural networks.

10.2.3 Juries

Reviewer in HDR juries:

  • Marc Lelarge: reviewer for Nicolas Tremblay, "Graph signals, structures and sketches" (Université Grenoble Alpes).

Reviewer in PhD juries:

  • Ana Bušić: Pierre Clavier, "Robust Reinforcement Learning: Theory and Practice" (Institut polytechnique de Paris), advisors: Erwan Le Pennec, Stéphanie Allassonnière.
  • Marc Lelarge: Alexandre Duval, "Automatic learning on graphs: from explainability to climate action" (Université Paris-Saclay), advisor: Fragkiskos Malliaros.
  • Marc Lelarge: Gabriel Damay, "Dynamic Decision Trees and Community-based Graph Embeddings : towards Interpretable Machine Learning" (Institut polytechnique de Paris), advisor: Mauro Sozio.
  • Laurent Massoulié: Madeleine Kubasch, "Approximation of stochastic models for epidemics on large multi-level graphs" (Institut polytechnique de Paris), advisors: Vincent Bansaye, Elisabeta Vergu.
  • L. Viennot: Timothée Corsini, “Reachability in temporal graphs and related problems” (LaBRI, Université de Bordeaux), advisor: Arnaud Casteigts.

Examiner in HDR juries:

  • Marc Lelarge: Stéphane Caron (PSL).

Examiner in PhD juries:

  • J.-M. Fourneau: Mi Chen, "Sécurité et évaluation des performances de LoRaWAN pour l'Internet des objets" (LACL, Paris 12, Créteil), advisor: Lynda Mokdad.
  • Marc Lelarge: Ndèye Maguette Mbaye, "Multimodal learning to predict breast cancer prognosis" (PSL), advisor: Chloé-Agathe Azencott.
  • Marc Lelarge: Daniel Hesslow, "Limiting factors for the continued scaling of Large Language Models" (Bourgogne Franche-Comté), advisor: Daniel Brunner.
  • Marc Lelarge: Oumayma Bounou, "Learning dynamic models for robotic systems control from measurements" (PSL), advisor: Jean Ponce.

10.3 Popularization

10.3.1 Productions (articles, videos, podcasts, serious games, ...)

Martin Van Waerebeke published an article in "The Conversation France": "Apprendre à oublier : le nouveau défi de l'intelligence artificielle" (Learning to forget: the new challenge for artificial intelligence), Sep 2024.

Marc Lelarge recorded a podcast with Claire Mathieu about AI.

10.3.2 Participation in Live events

Marc Lelarge gave a general presentation about AI at the Ministère de l'Éducation (28/03).

Marc Lelarge gave a Masterclass on AI with Claire Mathieu at FranceTV (24/09).

11 Scientific production

11.1 Publications of the year

International journals

Invited conferences

  • 9 N. Robinson, R. Dabre, A. Shurtz, R. Dent, O. Onesi, C. Bizon Monroc, L. Grobol, H. Muhammad, A. Garg, N. Etori, V. M. Tiyyala, O. Samuel, M. D. Stutzman, B. B. Odoom, S. Khudanpur, S. Richardson and K. Murray. Kreyòl-MT: Building MT for Latin American, Caribbean and Colonial African Creole Languages. Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2024), Volume 1: Long Papers, Mexico City, Mexico, June 2024, 3083-3110. HAL

International peer-reviewed conferences

  • 10 Y. Ait El Mahjoub and J.-M. Fourneau. Finding the Optimal Policy to Provide Energy for an Off-Grid Telecommunication Operator. WiMob 2024 - 20th International Conference on Wireless and Mobile Computing, Networking and Communications, Paris, France, October 2024. HAL DOI
  • 11 X. Bai, C. Coester and R. Cosson. Unweighted Layered Graph Traversal: Passing a Crown via Entropy Maximization. SODA 2025 - Symposium on Discrete Algorithms, New Orleans (Louisiana), United States, Society for Industrial and Applied Mathematics, January 2025, 3884-3900. HAL DOI
  • 12 S. Bessy, S. Thomassé and L. Viennot. Temporalizing Digraphs via Linear-Size Balanced Bi-Trees. STACS 2024 - 41st International Symposium on Theoretical Aspects of Computer Science, Leibniz International Proceedings in Informatics (LIPIcs) vol. 289, Clermont-Ferrand, France, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024, 13:1-13:12. HAL DOI
  • 13 M. Blanke and M. Lelarge. Interpretable Meta-Learning of Physical Systems. ICLR 2024 - The Twelfth International Conference on Learning Representations, Vienna, Austria, May 2024. HAL
  • 14 F. Brunelli, P. Crescenzi and L. Viennot. Making Temporal Betweenness Computation Faster and Restless. KDD 2024 - 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Barcelona, Spain, ACM, August 2024, 163-174. HAL DOI
  • 15 R. Cosson. Breaking the k/log k Barrier in Collective Tree Exploration via Tree-Mining. SODA 2024 - ACM-SIAM Symposium on Discrete Algorithms, Alexandria, VA, United States, SIAM, January 2024, 4264-4282. HAL DOI
  • 16 R. Cosson and L. Massoulié. Barely Random Algorithms and Collective Metrical Task Systems. NeurIPS 2024 - Conference on Neural Information Processing Systems, Vancouver (BC), Canada, November 2024. HAL
  • 17 R. Cosson and L. Massoulié. Collective Tree Exploration via Potential Function Method. ITCS 2024 - 15th Innovations in Theoretical Computer Science Conference, Berkeley, CA, United States, 2024. HAL DOI
  • 18 D. Coudert, M. Csikós, G. Ducoffe and L. Viennot. Practical Computation of Graph VC-Dimension. SEA 2024 - Symposium on Experimental Algorithms, Leibniz International Proceedings in Informatics (LIPIcs) vol. 301, Vienna, Austria, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024, article 20. HAL DOI
  • 19 F. F. Dragan, G. Ducoffe, M. Habib and L. Viennot. Certificates in P and Subquadratic-Time Computation of Radius, Diameter, and all Eccentricities in Graphs. Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), New Orleans (LA), United States, October 2024, 2157-2193. HAL DOI
  • 20 M. Even, L. Ganassali, J. Maier and L. Massoulié. Aligning Embeddings and Geometric Random Graphs: Informational Results and Computational Approaches for the Procrustes-Wasserstein Problem. NeurIPS 2024 - 38th Conference on Neural Information Processing Systems, Vancouver (BC), Canada, December 2024. HAL
  • 21 J.-M. Fourneau and M. Yang. Strong Aggregation in the Stochastic Matching Model with Random Discipline. ASMTA 2024 - International Conference on Analytical & Stochastic Modelling Techniques and Applications, Lecture Notes in Computer Science vol. 14826, Venice, Italy, Springer Nature Switzerland, September 2024, 18-32. HAL DOI
  • 22 Y. Fraboni, M. van Waerebeke, R. Vidal, L. Kameni, K. Scaman and M. Lorenzi. SIFU: Sequential Informed Federated Unlearning for Efficient and Provable Client Unlearning in Federated Optimization. AISTATS 2024 - International Conference on Artificial Intelligence and Statistics, PMLR vol. 238, Valencia, Spain, May 2024. HAL
  • 23 B. Le Bars, A. Bellet, M. Tommasi, K. Scaman and G. Neglia. Improved Stability and Generalization Guarantees of the Decentralized SGD Algorithm. ICML 2024 - The Forty-first International Conference on Machine Learning, Vienna, Austria, July 2024. HAL
  • 24 C. Bizon Monroc, A. Bušić, D. Dubuc and J. Zhu. WFCRL: A Multi-Agent Reinforcement Learning Benchmark for Wind Farm Control. Thirty-eighth Conference on Neural Information Processing Systems, Datasets and Benchmarks Track, Vancouver, Canada, December 2024. HAL
  • 25 C. Bizon Monroc, A. Bušić, D. Dubuc and J. Zhu. Wind farm control with cooperative multi-agent reinforcement learning. ICML 2024 Workshop: Aligning Reinforcement Learning Experimentalists and Theorists (ARLET 2024), Vienna, Austria, July 2024. HAL
  • 26 C. Philippenko, K. Scaman and L. Massoulié. In-depth Analysis of Low-rank Matrix Factorisation in a Federated Setting. AAAI 2025 - 39th Annual AAAI Conference on Artificial Intelligence, Philadelphia, United States, February 2025. HAL
  • 27 D. A. R. Robin, K. Scaman and M. Lelarge. Random Sparse Lifts: Construction, Analysis and Convergence of finite sparse networks. ICLR 2024 - Twelfth International Conference on Learning Representations, Vienna, Austria, May 2024. HAL
  • 28 K. Scaman, M. Even, B. Le Bars and L. Massoulié. Minimax Excess Risk of First-Order Methods for Statistical Learning with Data-Dependent Oracles. AISTATS 2024 - International Conference on Artificial Intelligence and Statistics, Valencia, Spain, 2024. HAL
  • 29 I. Shilov, H. Le Cadre, A. Bušić, A. Sanjab and P. Pinson. Forecast Trading as a Means to Reach Social Optimum on a Peer-to-Peer Market. NETGCOOP 2024 - 11th International Conference on Network Games, Control and Optimization, Lille, France, October 2024. HAL
  • 30 L. Teodorescu, G. Baudart, E. J. Gallego Arias and M. Lelarge. NLIR: Natural Language Intermediate Representation for Mechanized Theorem Proving. MathAI@NeurIPS 2024 - 4th Workshop on Mathematical Reasoning and AI, Vancouver, Canada, December 2024. HAL
  • 31 L. Weber, A. Bušić and J. Zhu. Reinforcement learning and regret bounds for admission control. Proceedings of the 41st International Conference on Machine Learning (ICML 2024), Vienna, Austria, July 2024, 52403-5242. HAL DOI

Doctoral dissertations and habilitation theses

  • 32 M. Blanke. Deep learning for physical systems: estimation, adaptation and exploration. Université Paris sciences et lettres, September 2024. HAL
  • 33 A. Bušić. Decision and control in networks with stochastic demand and supply. ENS-PSL, May 2024. HAL
  • 34 M. Even. Towards decentralization, asynchrony, privacy and personalization in federated learning. Université Paris sciences et lettres, June 2024. HAL
  • 35 M. Leiber. Adaptive time-frequency analysis and data normalization: contributions to monitoring under varying conditions. École Normale Supérieure (ENS), February 2024. HAL
  • 36 C. Bizon Monroc. Multi-agent reinforcement learning for dynamic wind farm control. École normale supérieure - PSL, November 2024. HAL

Reports & preprints

Patents