Keywords
Computer Science and Digital Science
 A1.2. Networks
 A1.6. Green Computing
 A3.4.1. Supervised learning
 A3.4.4. Optimization and learning
 A3.4.6. Neural networks
 A3.4.8. Deep learning
 A3.5. Social networks
 A3.5.1. Analysis of large graphs
 A5.9. Signal processing
 A5.9.4. Signal processing over graphs
 A5.9.5. Sparsity-aware processing
 A5.9.6. Optimization tools
 A8.8. Network science
 A8.9. Performance evaluation
 A9.2. Machine learning
 A9.7. AI algorithmics
Other Research Topics and Application Domains
 B2.6. Biological and medical imaging
 B6.2. Network technologies
 B6.4. Internet of things
 B6.6. Embedded systems
 B7.2.1. Smart vehicles
 B9.5.1. Computer science
 B9.5.2. Mathematics
 B9.5.6. Data science
 B9.10. Privacy
1 Team members, visitors, external collaborators
Research Scientists
 Paulo Gonçalves [Team leader, Inria, Senior Researcher, HDR]
 Rémi Gribonval [Team leader, Inria, Senior Researcher, HDR]
 Mathurin Massias [Inria, Researcher, from Nov 2021]
 Philippe Nain [Inria, Senior Researcher, until Aug 2021, HDR]
Faculty Members
 Thomas Begin [Univ Claude Bernard, Associate Professor, HDR]
 Anthony Busson [Univ Claude Bernard, Associate Professor, HDR]
 Christophe Crespelle [Univ Claude Bernard, Associate Professor, until Sep 2021, HDR]
 Marion Foare [École supérieure de chimie physique électronique de Lyon, Associate Professor]
 Isabelle Guérin Lassous [Univ Claude Bernard, Professor, HDR]
 Elisa Riccietti [École Normale Supérieure de Lyon, Associate Professor, Chaire Inria]
Post-Doctoral Fellows
 Ayoub Belhadji [École Normale Supérieure de Lyon]
 Luc Giffon [École Normale Supérieure de Lyon, from Feb 2021]
 Vincent Schellekens [Inria, from Oct 2021 to Dec 2021]
 Marija Stojanova [Univ Claude Bernard, from Jul 2021 until Nov 2021]
 Titouan Vayer [École Normale Supérieure de Lyon]
PhD Students
 Lafdal Abdelwedoud [Government of Mauritania, until Aug 2021]
 Dominique Barbe [École Normale Supérieure de Lyon]
 Anthony Bardou [École Normale Supérieure de Lyon, until Nov 2021]
 Nour El Houda Bouzouita [École Normale Supérieure de Lyon, until Nov 2021]
 Israel Campero-Jurado [École Normale Supérieure de Lyon, until Mar 2021]
 Sicheng Dai [École Normale Supérieure de Lyon]
 Antoine Gonon [École Normale Supérieure de Lyon, from Sep 2021]
 Clement Lalanne [École Normale Supérieure de Lyon]
 Guillaume Lauga [Inria, from Nov 2021]
 Quoc Tung Le [École Normale Supérieure de Lyon]
 Samir Si-Mohammed [Stakeo, until Nov 2021]
 Pierre Stock [Facebook, CIFRE, until Mar 2021]
 Leon Zheng [VALEO, CIFRE, from May 2021]
Technical Staff
 Hakim Hadj-Djilani [École Normale Supérieure de Lyon, until Jul 2021, Development Engineer]
 Leon Zheng [École Normale Supérieure de Lyon, Engineer, until May 2021]
Interns and Apprentices
 Manon Billet [École Normale Supérieure de Lyon, from May 2021 until Jul 2021]
 Amel Chadda [Inria, until Jul 2021]
 Antoine Gonon [École Normale Supérieure de Lyon, from Mar 2021 until Jul 2021]
 Hugo Gouttenegre [Univ de Lyon, from May 2021 until Aug 2021]
 Federico Grillini [École Normale Supérieure de Lyon, from Apr 2021 until Jun 2021]
 Esther Guerin [École Normale Supérieure de Lyon, until Jul 2021]
 Malasri Janumporn [Univ Claude Bernard, from Jul 2021 until Aug 2021]
 Giovanni Seraghiti [École Normale Supérieure de Lyon, from Sep 2021 until Nov 2021]
Administrative Assistant
 Solene Audoux [Inria]
External Collaborators
 Yohann De Castro [École centrale de Lyon, Professor, HDR]
 Eric Guichard [École nationale supérieure des sciences de l'information et des bibliothèques, until Oct 2021, HDR]
 Márton Karsai [Central European University, Vienna, Austria, HDR]
2 Overall objectives
2.1 Evolution of the team and scope of this activity report
After more than nine years of DANTE's existence as a team focused on dynamic networks at large, and in a context of strengthening research activities in statistical machine learning and signal processing within DANTE (and more broadly on the Lyon Saint-Etienne academic site), it was decided to split the DANTE team into two new teams. All activities related to network communications (with a focus on wireless networks and performance evaluation) are now part of the new team HowNet and are not covered in this scientific report. The new scientific scope of DANTE is described below, and this activity report focuses on the machine learning and signal processing activities of DANTE in 2021.
2.2 New objectives of the team
Building on a culture at the interface of signal modeling, mathematical optimization and statistical machine learning, the global objective of DANTE is to develop computationally efficient and mathematically founded methods and models to process high-dimensional data. Our ambition is to develop frugal signal processing and machine learning methods able to exploit structured models, intrinsically associated to resource-efficient implementations, and endowed with solid statistical guarantees.
Challenge 1: Developing frugal methods with robust expressivity.
By frugal approaches we mean not only algorithms relying on a controlled use of computing resources, but also methods whose expressivity and flexibility provably rely on the versatile notion of sparsity. This is expected to avoid the current pitfalls of costly over-parameterizations and to make the approaches more robust to adversarial examples and overfitting. More specifically, it is essential to contribute to the understanding of methods based on neural networks, in order to improve their performance and, most of all, their efficiency in resource-limited environments.
Challenge 2: Integrating models in learning algorithms.
To make statistical machine learning both more frugal and more interpretable, it is important to develop techniques able to exploit not only high-dimensional data but also models in various forms, when available. When some partial knowledge is available about phenomena related to the processed data, e.g., in the form of a physical model such as a partial differential equation, or of a graph capturing local or non-local correlations, the goal is to use this knowledge as an inspiration to adapt machine learning algorithms. The main challenge is to flexibly articulate a priori knowledge and data-driven information, in order to achieve a controlled extrapolation of predicted phenomena well beyond the particular type of data on which they were observed, even in applications where training data is scarce.
Challenge 3: Guarantees on interpretability, explainability, and privacy.
The notion of sparsity and its structured avatars (notably via graphs) is known to play a fundamental role in ensuring the identifiability of decompositions in latent spaces, for example for high-dimensional inverse problems in signal processing. The team's ambition is to deploy these ideas to ensure not only frugality but also some level of explainability of decisions and interpretability of learned parameters, which is an important societal stake for the acceptability of “algorithmic decisions”. Learning in low-dimensional latent spaces is also a way to spare computing resources and, by limiting the public exposure of data, is expected to enable tunable and quantifiable trade-offs between the utility of the developed methods and their ability to preserve privacy.
3 Research program
This project is resolutely at the interface of signal modeling, mathematical optimization and statistical machine learning, and concentrates on scientific objectives that are both ambitious (as they are difficult and subject to strong international competition) and realistic, thanks to the richness and complementarity of the skills they mobilize in the team.
Sparsity constitutes a backbone for this project, not only as a target to ensure resource-efficiency and privacy, but also as prior knowledge to be exploited to ensure the identifiability of parameters and the interpretability of results. Graphs are its necessary alter ego, to flexibly model and exploit relations between variables, signals, and phenomena, whether these relations are known a priori or are to be inferred from data. Lastly, advanced large-scale optimization is a key tool to handle, in a statistically controlled and algorithmically efficient way, the dynamic and incremental aspects of learning in varying environments.
The scientific activity of the project is articulated around the three axes described below. A common endeavor to these three axes consists in designing structured low-dimensional models, algorithms of bounded complexity to adjust these models to data through learning mechanisms, and a control of the performance of these algorithms to exploit these models on tasks ranging from low-level signal processing to the extraction of high-level information.
3.1 Axis 1: Sparsity for high-dimensional learning.
As now widely documented, the fact that a signal admits a sparse representation in some signal dictionary [62] is an enabling factor not only to address a variety of inverse problems with high-dimensional signals and images, such as denoising, deconvolution, or declipping, but also to speed up or decrease the cost of the acquisition of analog signals in certain scenarios compatible with compressive sensing [63, 56]. The flexibility of the models, which can incorporate learned dictionaries [73], as well as structured and/or low-rank variants of the now-classical sparse modeling paradigm [66], has been a key factor in the success of these approaches. Another important factor is the existence of algorithms of bounded complexity with provable performance, often associated with convex regularization and proximal strategies [55, 59], making it possible to identify latent sparse signal representations from low-dimensional indirect observations.
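As an illustration of these proximal strategies, the following sketch (illustrative code, not the team's software) recovers a sparse vector from a few random linear measurements with the classical iterative soft-thresholding algorithm (ISTA):

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """Iterative soft-thresholding (proximal gradient) for
    min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the smooth data-fit term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # proximal step
    return x

# Recover a 3-sparse vector in dimension 100 from 50 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[3, 47, 90]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
```

The prox of the l1 penalty is the soft-thresholding operator, which is what produces exactly sparse iterates.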
While now well-mastered (and in the core field of expertise of the team), these tools are typically constrained to relatively rigid settings where the unknown is described either as a sparse vector or as a low-rank matrix or tensor in high (but finite) dimension. Moreover, the algorithms hardly scale to the dimensions needed to handle inverse problems arising from the discretization of physical models (e.g., for 3D wavefield reconstruction). A major challenge is to establish a comprehensive algorithmic and theoretical toolset to handle continuous notions of sparsity [57], which have been identified as a way to potentially circumvent these bottlenecks. The other main challenge is to extend the sparse modeling paradigm to resource-efficient and interpretable statistical machine learning. The methodological and conceptual output of this axis provides tools for Axes 2 and 3, which in return fuel the questions investigated in this axis.

 1.1 Versatile and efficient sparse modeling. The goal is to propose flexible and resource-efficient sparse models, possibly leveraging classical notions of dictionaries and structured factorizations, but also the notion of sparsity in continuous domains (e.g. for sketched clustering, mixture model estimation, or image super-resolution), low-rank tensor representations, and neural networks with sparse connection patterns.
Besides the empirical validation of these models and of the related algorithms on a diversity of targeted applications, the challenge is to determine conditions under which their success can be mathematically controlled, and to determine the fundamental trade-offs between the expressivity of these models and their complexity.
 1.2 Sparse optimization. The main objectives are: a) to define cost functions and regularization penalties that integrate not only the targeted learning tasks, but also a priori knowledge, for example in the form of conservation laws or of relation graphs (cf. Axis 2); b) to design efficient and scalable algorithms [4, 9] to optimize these cost functions in a controlled manner in a large-scale setting. To ensure the resource-efficiency of these algorithms, while avoiding pitfalls related to the discretization of high-dimensional problems (aka the curse of dimensionality), we investigate the notions of “continuous” sparsity (i.e., with sparse measures), of hierarchies (along the ideas of multilevel methods), and of reduced precision (cf. also Axis 3). The non-convexity and non-smoothness of the problems are key challenges [2], and the exploitation of proximal algorithms and/or convexifications in the space of Borelian measures are privileged approaches.
 1.3 Identifiability of latent sparse representations. To provide solid guarantees on the interpretability of sparse models obtained via learning, one needs to ensure the identifiability of the latent variables associated to their parameters. This is particularly important when these parameters bear some meaning due to the underlying physics. Vice versa, physical knowledge can guide the choice of which latent parameters to estimate. By leveraging the team's know-how in the fields of inverse problems, compressive sensing and source separation in signal processing, we aim at establishing theoretical guarantees on the uniqueness (modulo some equivalence classes to be characterized) of the solutions of the considered optimization problems, on their stability in the presence of random or adversarial noise, and on the convergence and stability of the algorithms.
3.2 Axis 2: Learning on graphs and learning of graphs.
Graphs provide synthetic and sparse representations of the interactions between potentially high-dimensional data, whether in terms of proximity, statistical correlation, functional similarity, or simple affinities. One central task in this domain is to infer such discrete structures from the observations, in a way that best accounts for the ties between data without becoming too complex due to spurious relationships. The graphical lasso [64] is among the most popular and successful algorithms to build a sparse representation of the relations between time series (observed at each node) that unveils relevant patterns of the data. Recent works (e.g. [67]) strived to emphasize the clustered structure of the data by imposing spectral constraints on the Laplacian of the sought graphs, with the aim of improving the performance of spectral approaches to unsupervised classification. In this direction, several challenges remain, such as the transposition of the framework to graph-based semi-supervised learning [1], where natural models are stochastic block models rather than strictly multi-component graphs (e.g. Gaussian mixture models). As is done in [77], the standard $\ell_1$-norm penalization term of the graphical lasso could be questioned in this case. On another level, when low-rank (precision) matrices and/or the preservation of privacy are important stakes, one could draw inspiration from the sketching techniques developed in [65] and [58] to work out a sketched graphical lasso. There are other situations where the graph is known a priori and does not need to be inferred from the data. This is for instance the case when the data naturally lie on a graph (e.g. social networks or geographical graphs), and one then has to combine this data structure with the attributes (or measures) carried by the nodes or the edges of these graphs.
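As an off-the-shelf illustration of this inference principle (a sketch using scikit-learn's generic graphical lasso, not the team's code), one can recover the edges of a chain graph from samples by looking at the non-zero off-diagonal entries of the estimated precision matrix:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Ground-truth sparse precision matrix: a chain graph on 5 variables
P = np.eye(5) + 0.4 * (np.eye(5, k=1) + np.eye(5, k=-1))
C = np.linalg.inv(P)                           # true (dense) covariance
X = rng.multivariate_normal(np.zeros(5), C, size=2000)

# l1-penalized maximum-likelihood estimation of the precision matrix
model = GraphicalLasso(alpha=0.05).fit(X)
# Edges of the inferred graph = non-zero off-diagonal precision entries
edges = np.abs(model.precision_) > 1e-3
```

Although the true covariance is dense, the sparsity-promoting penalty recovers the sparse conditional-independence structure (the chain).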
Graph signal processing (GSP) [70, 10], which has undergone methodological developments at a very rapid pace in recent years, is precisely an approach to jointly exploit these structures and attributes algebraically, either by filtering them, by reorganizing them, or by reducing them to principal components. However, as tends increasingly to be the case, data collection processes yield very large data sets with high-dimensional graphs. In contrast to standard digital signal processing, which relies on regular graph structures (cycle graph or Cartesian grid), treating complex structured data in a global form is not an easily scalable task [5]. Hence, the notion of distributed GSP [60, 61] has naturally emerged. Yet, very little has been done on graph signals supported on dynamical graphs that undergo vertex/edge edits.
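The basic GSP operation alluded to here, filtering a signal in the graph Fourier domain, can be sketched in a few lines of numpy (an illustrative toy example on a ring graph, not the distributed setting):

```python
import numpy as np

# Ring graph on n nodes: adjacency and combinatorial Laplacian
n = 64
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis = eigenvectors of the Laplacian
lam, U = np.linalg.eigh(L)

# A smooth signal on the graph, corrupted by noise
rng = np.random.default_rng(0)
clean = np.cos(2 * np.pi * np.arange(n) / n)
x = clean + 0.3 * rng.standard_normal(n)

# Ideal low-pass graph filter: keep spectral components with small eigenvalue
h = (lam < 0.5).astype(float)          # frequency response
x_filt = U @ (h * (U.T @ x))           # analysis, filtering, synthesis
```

On the ring graph this reduces to classical low-pass filtering of a periodic signal, which is precisely why GSP is viewed as the extension of DSP to irregular structures.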
 2.1 Learning of graphs. When the graphical structure of the data is not known a priori, one needs to explore how to build or infer it. In the case of partially known graphs, this raises several questions in terms of relevance with respect to sparse learning. For example, a challenge is to determine which edges should be kept, whether they should be oriented, and how attributes on the graph could be taken into account (in particular when considering time series on graphs) to better infer the nature and structure of the unobserved interactions. We strive to adapt known approaches such as the graphical lasso to estimate the covariance under a sparsity constraint (also integrating temporal priors), and investigate diffusion approaches to study the identifiability of the graphs. In connection with Axis 1.2, a particular challenge is to incorporate a priori knowledge coming from physical models that offer concise and interpretable descriptions of the data and their interactions.

2.2 Distributed and adaptive learning on graphs. The availability of a known graph structure underlying training data offers many opportunities to develop distributed approaches, and opens perspectives where graph signal processing and machine learning can mutually fertilize each other.
Some classifiers can be formalized as solutions of a constrained optimization problem, and an important objective is then to reduce their global complexity by developing distributed versions of these algorithms. Compared to costly centralized solutions, distributing the operations by restricting them to local node neighborhoods will enable solutions that are both more frugal and more privacy-friendly. In the case of dynamic graphs, the idea is to draw inspiration from adaptive processing techniques to make the algorithms able to track the temporal evolution of data, both in terms of structural evolution and of temporal variations of the attributes. This aspect finds a natural continuation in the objectives of Axis 3.
3.3 Axis 3: Dynamic and frugal learning.
With the resurgence of neural network approaches in machine learning, training times of the order of days, weeks, or even months are common. Mainstream research in deep learning applies it to an increasingly large class of problems and follows the general wisdom of improving the models' prediction accuracy by “stacking more layers”, making the approach ever more resource-hungry. The underpinning theory of which resources are needed for a network architecture to achieve a given accuracy is still in its infancy. Efficient scaling of such techniques to massive sample sizes or dimensions in a resource-restricted environment remains a challenge and is a particularly active field of academic and industrial R&D, with recent interest in techniques such as sketching, dimension reduction, and approximate optimization.
A central challenge is to develop novel approximate techniques with reduced computational and memory footprints. For certain unsupervised learning tasks such as PCA, unsupervised clustering, or parametric density estimation, random features (e.g. random Fourier features [68]) make it possible to compute aggregated sketches guaranteed to preserve the information needed to learn, and no more: this has led to the compressive learning framework, which is endowed with statistical learning guarantees [65] as well as privacy preservation guarantees [58]. A sketch can be seen as an embedding of the empirical probability distribution of the dataset by a particular form of kernel mean embedding [71]. Yet, designing random features for a given learning task remains something of an art, and a major challenge is to design provably good end-to-end sketching pipelines with controlled complexity for supervised classification, structured matrix factorization, and deep learning.
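A minimal sketch of this mechanism (illustrative code; the helper name rff_sketch is ours, not a library API) averages random Fourier features over a dataset, collapsing it into a fixed-size vector that approximates a Gaussian kernel mean embedding:

```python
import numpy as np

def rff_sketch(X, D, sigma, seed=0):
    """Average of D random Fourier features over the rows of X: a compact
    sketch of the empirical distribution of X, approximating the Gaussian
    kernel mean embedding with bandwidth sigma."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, D)) / sigma       # random frequencies
    b = rng.uniform(0, 2 * np.pi, D)              # random phases
    Phi = np.sqrt(2.0 / D) * np.cos(X @ W + b)    # features of each sample
    return Phi.mean(axis=0)                       # dataset collapsed to D numbers

rng = np.random.default_rng(1)
X = rng.standard_normal((10_000, 2))              # 10k samples in 2-D
s = rff_sketch(X, D=500, sigma=1.0)               # sketch of size 500
```

The inner product of the feature maps of two points approximates the Gaussian kernel between them, which is what makes such sketches information-preserving for the associated learning tasks.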
Another crucial direction is the use of dynamical learning methods, capable of wisely exploiting multiple representations of the problem at hand at different scales. For instance, many low- and mixed-precision variants of gradient-based methods have been proposed recently [75, 74]; they are however based on a static reduced-precision policy, while a dynamic approach can lead to much improved energy efficiency. Also, despite their massive success, gradient-based training methods still possess many weaknesses (low convergence rate, dependence on the tuning of the learning parameters, vanishing and exploding gradients), and the use of dynamical information promises to allow for the development of alternative methods, such as second-order or multilevel methods, which are as scalable as first-order methods but come with faster convergence guarantees [69, 76].
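As a toy illustration of the static reduced-precision policy that such dynamic approaches aim to improve on, gradients can be evaluated in float16 while the iterate is accumulated in float32 (the helper gd_mixed is hypothetical illustrative code, not a method from the cited works):

```python
import numpy as np

def gd_mixed(A, b, steps=200, lr=0.1, grad_dtype=np.float16):
    """Gradient descent on 0.5*||Ax - b||^2 with the gradient evaluated in
    reduced precision (static policy) and the iterate kept in float32."""
    x = np.zeros(A.shape[1], dtype=np.float32)
    for _ in range(steps):
        r = (A.astype(grad_dtype) @ x.astype(grad_dtype)
             - b.astype(grad_dtype))              # low-precision residual
        g = A.astype(grad_dtype).T @ r            # low-precision gradient
        x -= lr * g.astype(np.float32)            # high-precision update
    return x

# Well-conditioned toy problem: the solution is simply b
A = np.eye(4, dtype=np.float32)
b = np.array([1.0, -0.5, 0.25, 2.0], dtype=np.float32)
x = gd_mixed(A, b)
```

The final accuracy is limited by the float16 rounding of the gradient, which is precisely the kind of floor a dynamic precision policy seeks to push back only when and where needed.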
The overall objective in this axis is to adapt in a controlled manner the information that is extracted from datasets or data streams and to dynamically use such information in learning, in order to optimize the trade-offs between statistical significance, resource-efficiency, privacy preservation and integration of a priori knowledge.
 3.1 Compressive and privacy-preserving learning. The goal is to compress training datasets as early as possible in the processing workflow, before even starting to learn. In the spirit of compressive sensing, this is desirable not only to ensure the frugal use of resources (memory and computation), but also to preserve privacy by limiting the diffusion of raw datasets and controlling the information that can actually be extracted from the targeted compressed representations, called sketches, obtained by well-chosen nonlinear random projections. We aim to build on a compressive learning framework developed by the team, with the viewpoint that sketches provide an embedding of the data distribution which should preserve some metrics, either associated to the specific learning task or to more generic optimal transport formulations. Besides ensuring the identifiability of the task-specific information from a sketch (cf. Axis 1.3), an objective is to efficiently extract this information from a sketch, for example via algorithms related to avatars of continuous sparsity as studied in Axis 1.2. A particular challenge, connected with Axis 2.1 when inferring dynamic graphs from correlations of non-stationary time series, and with Axis 3.2 below, is to dynamically adapt the sketching mechanism to the analyzed data stream.
 3.2 Sequential sparse learning. Whether aiming at dynamically learning on data streams (cf. Axes 2.1 and 2.2), at integrating a priori physical knowledge when learning, or at ensuring domain adaptation for transfer learning, the objective is to achieve a statistically near-optimal update of a model from a sequence of observations whose content can also dynamically vary. When considering time series on graphs, to preserve resource-efficiency and increase robustness, the algorithms further need to update the current models by dynamically integrating the data stream.
 3.3 Dynamic-precision learning. The goal is to propose new optimization algorithms that overcome the cost of solving large-scale learning problems by dynamically adapting the precision of the data. The main idea is to exploit multiple representations of the problem at hand at different scales. We explore in particular two directions to build these scales of problems: a) exploiting ideas from multilevel optimization to propose dynamical hierarchical approaches based on representations of the problem of progressively reduced dimension; b) leveraging recent advances in hardware and the possibility they provide of representing data at multiple precision levels. We aim at improving over state-of-the-art training strategies by investigating the design of scalable multilevel and mixed-precision second-order optimization and quantization methods, possibly derivative-free.
4 Application domains
The primary objectives of this project, which is rooted in Signal Processing and Machine Learning methodology, are to develop flexible methods, endowed with solid mathematical foundations and efficient algorithmic implementations, that can be adapted to numerous application domains. We are nevertheless convinced that such methods are best developed in strong and regular connection with concrete applications, which are not only necessary to validate the approaches but also to fuel the methodological investigations with relevant and fruitful ideas. The following application domains are primarily investigated in partnership with research groups with the relevant expertise.
4.1 Frugal AI on embedded devices
There is a strong need to drastically compress signal processing and machine learning models (typically, but not only, deep neural networks) to fit them on embedded devices. For example, on autonomous vehicles, due to strong constraints (reliability, energy consumption, production costs), the memory and computing resources of dedicated high-end image-analysis hardware are two orders of magnitude more limited than what is typically required to run state-of-the-art deep network models in real time. The research conducted in the DANTE project finds direct applications in these areas, including: compressing deep neural networks to obtain low-bandwidth video codecs that can run on smartphones with limited memory resources; sketched learning and sparse networks for autonomous vehicles; or sketching algorithms tailored to exploit optical processing units for energy-efficient large-scale learning.
4.2 Imaging in physics and medicine
Many problems in imaging involve the reconstruction of large-scale data from limited and noise-corrupted measurements. In this context, the research conducted in DANTE pays special attention to modeling domain knowledge such as physical constraints or prior medical knowledge. This finds applications from physics to medical imaging, including: multiphase flow image characterization; near-infrared polarization imaging of circumstellar environments; compressive sensing for joint segmentation and high-resolution 3D MRI; or graph signal processing for radio astronomy imaging with the Square Kilometre Array (SKA).
4.3 Interactions with computational social sciences
Based on collaborations with the relevant experts, the team also regularly investigates applications in computational social science. For example, modeling infectious disease epidemics requires efficient methods to reduce the complexity of large networked datasets while preserving the ability to feed effective and realistic data-driven models of spreading phenomena. In another area, estimating the vote transfer matrices between two elections is an ill-posed problem that requires the design of adapted regularization schemes together with the associated optimization algorithms.
5 Social and environmental responsibility
5.1 Contribution to the monitoring of the Covid-19 pandemic
Robust prediction of the spatiotemporal evolution of the reproduction number $R(t)$ of the Covid-19 pandemic from open data (Santé Publique France and the European Centre for Disease Prevention and Control).
Following our work of last year [54], in which an algorithm exploiting sparsity and convex optimization was developed and dynamic maps were proposed, we identified robustness to outliers as a critical issue.
This issue is addressed in a paper submitted for journal publication, using convex regularization [45].
6 Highlights of the year
P. Gonçalves was appointed Deputy Scientific Director of the new Inria research center in Lyon.
R. Gribonval was a keynote speaker at the international conference EUSIPCO 2021 and an invited speaker at the national conference CAp21.
A survey paper on sketching for large-scale learning, summarizing in tutorial style a series of works of the team, was published in the September 2021 issue of the IEEE Signal Processing Magazine and was featured on its front cover [7].
7 New software and platforms
In an effort towards reproducible research, the default policy of the team is to release open-source code (typically Python or Matlab) associated with research papers that report experiments. When applicable and possible, more engineered software is developed and maintained over several years to provide more robust and consistent implementations of selected results.
7.1 New software
7.1.1 FAuST

Keywords:
Learning, Sparsity, Fast transform, Multilayer sparse factorisation

Scientific Description:
FAuST makes it possible to approximate a given dense matrix by a product of sparse matrices, with considerable potential gains in terms of storage and of speed-up for matrix-vector multiplications.
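The principle behind these gains can be sketched with generic scipy sparse matrices (this only illustrates the idea of multilayer sparse factors; it is not the FAuST API):

```python
import numpy as np
from scipy.sparse import random as sprand

# A dense n x n matrix costs n^2 storage and n^2 flops per matrix-vector
# product; a product of J sparse factors costs only the total number of
# non-zeros across the factors.
n, J, density = 1024, 3, 4 / 1024          # ~4 non-zeros per row per factor
rng = np.random.default_rng(0)
factors = [sprand(n, n, density=density, random_state=i, format="csr")
           for i in range(J)]
M = factors[0] @ factors[1] @ factors[2]   # the operator they represent

x = rng.standard_normal(n)
# Multiplying factor by factor never forms a dense matrix:
y = x
for S in reversed(factors):
    y = S @ y

dense_cost = n * n                          # flops for a dense mat-vec
sparse_cost = sum(S.nnz for S in factors)   # flops through the factors
```

FAuST addresses the converse problem, fitting such sparse factors to a given dense matrix, which is where the algorithmic difficulty lies.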

Functional Description:
FAuST is a C++ toolbox designed to decompose a given dense matrix into a product of sparse matrices in order to reduce its computational complexity (both for storage and manipulation).
Faust includes Matlab and Python wrappers and scripts to reproduce the experimental results of the following papers:
 Le Magoarou L., Gribonval R., "Flexible multilayer sparse approximations of matrices and applications", Journal of Selected Topics in Signal Processing, 2016.
 Le Magoarou L., Gribonval R., Tremblay N., "Approximate fast graph Fourier transforms via multilayer sparse approximations", IEEE Transactions on Signal and Information Processing over Networks, 2018.
 Quoc-Tung Le, Rémi Gribonval, "Structured Support Exploration for Multilayer Sparse Matrix Factorization", ICASSP 2021 – IEEE International Conference on Acoustics, Speech and Signal Processing, Jun 2021, Toronto, Ontario, Canada, pp. 1–5.
 Sibylle Marcotte, Amélie Barbe, Rémi Gribonval, Titouan Vayer, Marc Sebban, et al., "Fast Multiscale Diffusion on Graphs", 2021.

Release Contributions:
Faust 1.x contains Matlab routines to reproduce experiments of the PANAMA team on learned fast transforms.
Faust 2.x contains a C++ implementation with preliminary Matlab / Python wrappers.
Faust 3.x includes Python and Matlab wrappers around a C++ core with GPU acceleration, as well as new algorithms.

News of the Year:
In 2021, new algorithms bringing improved precision and/or accelerations were incorporated into Faust, GPU support was completed together with a systematic optimization of the code (including the ability to run it in float instead of double precision), and PIP packages were made available to ease the installation of Faust.
In 2020, major efforts were put into finalizing Python wrappers, producing tutorials using Jupyter notebooks and Matlab livescripts, as well as substantial refactoring of the code to optimize its efficiency and exploit GPUs.
In April 2018, a Software Development Initiative (ADT REVELATION) started for the maturation of FAuST. A first step was to complete and robustify the Matlab wrappers, to code Python wrappers with the same functionality, and to set up a continuous integration process. A second step was to simplify the parameterization of the main algorithms. The roadmap for the following year included showcasing examples and optimizing computational efficiency.
In 2017, new Matlab code for fast approximate graph Fourier transforms was included, based on the approach described in the following papers:
Luc Le Magoarou, Rémi Gribonval, "Are There Approximate Fast Fourier Transforms On Graphs?", ICASSP 2016.
Luc Le Magoarou, Rémi Gribonval, Nicolas Tremblay, "Approximate fast graph Fourier transforms via multilayer sparse approximations", IEEE Transactions on Signal and Information Processing over Networks, 2017.
 URL:
 Publications:

Contact:
Rémi Gribonval

Participants:
Luc Le Magoarou, Nicolas Tremblay, Rémi Gribonval, Nicolas Bellot, Adrien Leman, Hakim Hadj-Djilani
8 New results
8.1 Graph Signal Processing, Optimal Transport and Machine Learning on Graphs
8.1.1 Works on Gromov-Wasserstein: graph dictionary learning
Participants: Titouan Vayer.
Collaborations with Cédric Vincent-Cuaz (PhD student, MAASAI, Université Côte d'Azur), Rémi Flamary (CMAP, École Polytechnique), Marco Corneli (MAASAI, Université Côte d'Azur) and Nicolas Courty (IRISA, Université Bretagne Sud).
The Gromov-Wasserstein (GW) distance is derived from optimal transport (OT) theory. The interest of OT lies in its ability both to provide correspondences between sets of points and to define distances between probability distributions. By modeling graphs as probability distributions, GW has become an important tool in many ML tasks involving structured data. Using GW as a fidelity term, we proposed in [34] an efficient graph dictionary learning algorithm that describes graphs as a simple composition of smaller graphs (the atoms of the dictionary). We proposed a stochastic algorithm capable of learning a dictionary-like representation in the complex setting where the graphs in the dataset arrive progressively in time. We showed that these representations are particularly efficient for tasks such as change detection for structured data and clustering of graphs. We proposed an alternative approach in [48], whose goal is to learn a single graph of large size whose subgraphs best match (according to the GW criterion) the graphs of the dataset.
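For reference, the GW objective for a fixed coupling can be evaluated with matrix products instead of a four-fold loop; the helper below is an illustrative sketch (not the code of the cited works):

```python
import numpy as np

def gw_cost(C1, C2, T):
    """Gromov-Wasserstein objective for a fixed coupling T:
    sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 * T[i,j] * T[k,l]."""
    # Expand the square into C1^2, C2^2 and cross terms, each of which
    # reduces to matrix products with the coupling and its marginals.
    p, q = T.sum(axis=1), T.sum(axis=0)        # marginals of the coupling
    term1 = (C1 ** 2 @ p) @ p                  # sum C1[i,k]^2 p[i] p[k]
    term2 = (C2 ** 2 @ q) @ q                  # sum C2[j,l]^2 q[j] q[l]
    cross = np.trace(C1 @ T @ C2.T @ T.T)      # sum C1[i,k] C2[j,l] T[i,j] T[k,l]
    return term1 + term2 - 2 * cross

# Two 3-node graphs given by their shortest-path matrices, compared
# under the uniform product coupling
C1 = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], float)
C2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
T = np.full((3, 3), 1 / 9)
cost = gw_cost(C1, C2, T)
```

GW solvers minimize this objective over couplings T with prescribed marginals; the factorized evaluation above is what makes that optimization tractable beyond toy sizes.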
In another line of work, in collaboration with Clément Bonet (PhD student, IRISA, Université Bretagne-Sud), Nicolas Courty, François Septier (LMBA, Université de Bretagne Sud) and Lucas Drumetz (Lab-STICC OSE, IMT Atlantique), we proposed an extension of the GW framework for shape matching problems 11. It consists in finding an optimal plan between the measures projected on a wisely chosen subspace, and then completing it into a nearly optimal transport plan on the whole space. The advantage is to lower the computational complexity of the GW distance.
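For readers unfamiliar with the OT toolbox underlying these works: with entropic regularization, a transport plan between two discrete measures is cheap to approximate by Sinkhorn matrix scaling. The NumPy sketch below is purely illustrative (the cited works rely on dedicated GW solvers, not this code):

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.5, n_iter=500):
    """Entropic optimal transport between histograms a and b with cost matrix C,
    computed via Sinkhorn-Knopp matrix scaling."""
    K = np.exp(-C / reg)                    # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]      # the (approximate) transport plan

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 2))                 # two tiny point clouds ...
y = rng.normal(size=(6, 2)) + 1.0
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared Euclidean costs
a, b = np.full(5, 1 / 5), np.full(6, 1 / 6)         # ... with uniform weights
P = sinkhorn(a, b, C)                       # marginals of P match a and b
```

The plan `P` describes which mass of `x` goes where in `y`; GW replaces the pairwise cost `C` by a comparison of intra-graph distance matrices, but the scaling machinery is of the same flavor.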
8.1.2 Diffused Wasserstein Distance for Optimal transport between attributed graphs
Participants: Paulo Gonçalves, Rémi Gribonval, Amélie Barbe, Titouan Vayer.
This work is a collaboration with Pierre Borgnat (CNRS) from the Physics Lab of ENS de Lyon, Marc Sebban, Professor at the LabHC of Université Jean Monnet, and Sibylle Marcotte (student at ENS de Rennes).
In a series of recent articles, we proposed the Diffusion Wasserstein (DW) distance, a generalization of the standard Wasserstein distance to undirected and connected graphs whose nodes are described by feature vectors. Using the heat diffusion equation, built on the exponential kernel of the graph Laplacian, we locally average the attributes of the nodes over a neighborhood controlled by the diffusion time. Like the fused Gromov-Wasserstein distance, this mixed distance makes it possible to compute an optimal transport plan that captures both the structural and the feature information of the graphs. A major advantage of the DW distance, however, is its computational cost, which remains significantly lower than that of the fused Gromov-Wasserstein distance. Moreover, applied to different domain adaptation tasks, we experimentally showed that in many difficult situations the DW distance outperforms the most recent competing methods.
To further reduce the computational cost of the DW distance, we proposed to use a Chebyshev approximation of the diffusion operator applied to the feature vectors. In the course of this work, we also tightened the theoretical approximation bounds, which in turn significantly improves the estimate of the polynomial order required for a prescribed error 31.
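The principle can be sketched as follows: interpolate s -> exp(-tau s) by a Chebyshev polynomial on [0, lmax] and evaluate it with the three-term recurrence, so that diffusing the features only requires matrix-vector products with the Laplacian. An illustrative NumPy sketch, unrelated to the actual implementation evaluated in 31:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def cheb_diffuse(L, X, tau, deg=25):
    """Approximate exp(-tau * L) @ X using only matrix-vector products with L,
    via a Chebyshev interpolant of s -> exp(-tau * s) on [0, lmax]."""
    lmax = np.linalg.eigvalsh(L).max()          # in practice a cheap upper bound suffices
    c = Chebyshev.interpolate(lambda s: np.exp(-tau * s), deg, domain=[0, lmax]).coef
    Lt = (2.0 / lmax) * L - np.eye(L.shape[0])  # spectrum mapped to [-1, 1]
    T_prev, T_cur = X, Lt @ X                   # T_0(Lt) X and T_1(Lt) X
    out = c[0] * T_prev + c[1] * T_cur
    for k in range(2, deg + 1):                 # three-term Chebyshev recurrence
        T_prev, T_cur = T_cur, 2.0 * (Lt @ T_cur) - T_prev
        out = out + c[k] * T_cur
    return out
```

For sparse Laplacians, each iteration costs one sparse matrix-vector product, avoiding the dense matrix exponential entirely.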
Finally, to address the classical problem of tuning the diffusion time, the unique free parameter of the DW distance, we devised a triplet-loss-based method that finds the best diffusion time in the context of domain adaptation tasks 27.
8.2 Sparse deep neural networks: theory and algorithms
8.2.1 Mathematics of deep learning: approximation theory, scale-invariance, and regularization
Participants: Rémi Gribonval, Pierre Stock, Antoine Gonon, Elisa Riccietti, Vincent Schellekens.
Collaborations with Facebook AI Research, Paris, with Nicolas Brisebarre (ARIC team, ENS de Lyon), and with Yann Traonmilin (IMB, Bordeaux) and Samuel Vaiter (JAD, Dijon)
Our paper studying the expressivity of sparse deep neural networks from an approximation theoretic perspective and highlighting the role of depth to enable efficient approximation of functions with very limited smoothness was published this year 8. Motivated by the importance of quantizing networks besides pruning them to achieve sparsity, we started to investigate the approximation theoretic properties of quantized deep networks, with the objective of defining and comparing the corresponding approximation classes with the unquantized ones.
Neural networks with the ReLU activation function are described by weight and bias parameters, and realize a piecewise linear continuous function. Natural scaling and permutation operations on the parameters leave the realization unchanged, leading to equivalence classes of parameters that yield the same realization. These considerations in turn lead to the notion of identifiability: the ability to recover (the equivalence class of) parameters from the sole knowledge of the realization of the corresponding network. We studied this problem in depth through the lens of a new embedding of ReLU neural network parameters of any depth. The proposed embedding is invariant to scalings and provides a locally linear parameterization of the realization of the network. Leveraging these two key properties, we derived conditions under which a deep ReLU network is indeed locally identifiable from the knowledge of the realization on a finite set of samples. We studied the shallow case in more depth, establishing necessary and sufficient conditions for the network to be identifiable from a bounded subset 22.
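The rescaling invariance at the root of these equivalence classes is easy to verify numerically: multiplying the incoming weights and bias of a hidden ReLU neuron by some positive scale, while dividing its outgoing weights by the same scale, leaves the realization unchanged. A minimal one-hidden-layer sketch (illustrative, not code from 22):

```python
import numpy as np

def relu_net(x, W1, b1, W2, b2):
    """Realization of a one-hidden-layer ReLU network."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

lam = rng.uniform(0.1, 10.0, size=4)    # one positive scale per hidden neuron
W1r, b1r = lam[:, None] * W1, lam * b1  # rescale incoming weights and biases ...
W2r = W2 / lam[None, :]                 # ... and compensate on outgoing weights

x = rng.normal(size=3)
# relu_net(x, W1, b1, W2, b2) equals relu_net(x, W1r, b1r, W2r, b2):
# ReLU is positively homogeneous, so the scales cancel out
```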
An important challenge in deep learning is to promote sparsity during the learning phase using a regularizer. In the classical setting of linear inverse problems, it is well known that the ${\ell}^{1}$ norm is a convex regularizer lending itself to efficient optimization and endowed with stable recovery guarantees.
A particular challenge is to understand to what extent using an ${\ell}^{1}$ penalty in this context is also well-founded theoretically, and to design alternate regularizers if needed. On the one hand, we started investigating the properties of minimizers of the ${\ell}^{1}$ norm in deep learning problems. On the other hand, we considered the abstract problem of recovering elements of a low-dimensional model set from underdetermined linear measurements. Considering the minimization of a convex regularizer subject to a data-fit constraint, we explored the notion of a "best" convex regularizer given a model set, formalized as a regularizer that maximizes a compliance measure with respect to the model. Several notions of compliance were studied, and analytical expressions were obtained for compliance measures based on the best-known recovery guarantees under the restricted isometry property. This led to a formal proof of the optimality, for these compliance measures, of the ${\ell}^{1}$ norm for sparse recovery and of the nuclear norm for low-rank matrix recovery. We also investigated the construction of an optimal convex regularizer using the example of sparsity in levels 46.
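For context, the classical ${\ell}^{1}$ recovery setting referred to above can be reproduced in a few lines with proximal gradient descent (ISTA). This is generic textbook material, not code from the cited works; the measurement sizes and regularization strength below are arbitrary:

```python
import numpy as np

def ista(A, y, lam, n_iter=5000):
    """Proximal gradient (ISTA) for min_x 0.5 * ||A x - y||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))     # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-thresholding
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100)) / np.sqrt(40)   # underdetermined random measurements
x_true = np.zeros(100)
x_true[[5, 37, 72]] = [2.0, -1.5, 3.0]         # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y, lam=0.01)                   # recovers the support of x_true
```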
8.2.2 Algorithms for quantized networks
Participants: Rémi Gribonval, Pierre Stock, Elisa Riccietti.
Collaboration with Facebook AI Research, Paris
From a more computational perspective, within the framework of the Ph.D. of Pierre Stock 40, we proposed last year a technique to drastically compress neural networks using product quantization 72, and this year an approach to learn networks that can be more efficiently quantized 33. We also started to study efficient optimization algorithms to train quantized networks that leverage multiple quantization levels.
8.2.3 Deep sparse factorizations: hardness, algorithms and identifiability
Participants: Rémi Gribonval, Elisa Riccietti, Marion Foare, Léon Zheng, Quoc-Tung Le.
Collaboration with Valeo AI, Paris
Matrix factorization with sparsity constraints plays an important role in many machine learning and signal processing problems, such as dictionary learning, data visualization, and dimension reduction.
Last year, from an algorithmic perspective, we analyzed and fixed a weakness of proximal algorithms in sparse matrix factorization. We also described a new tractable proximal operator called the Generalized Hungarian Method, associated to so-called $k$-regular matrices, which are useful for the factorization of a class of matrices associated to fast linear transforms. We further illustrated the effectiveness of our proposals by numerical experiments on the Hadamard Transform and magnetoencephalography matrix factorization. This work was published this year in a conference 29, and the new proximal operator was implemented in the FA$\mu $ST software library (see Section 7).
From a theoretical perspective, we considered the hardness and uniqueness properties of sparse matrix factorization. First, even with only two factors and a fixed, known support, we showed that optimizing the coefficients of the sparse factors can be an NP-hard problem. Besides, we studied the landscape of the corresponding optimization problem and exhibited "easy" instances where the problem can be solved to global optimality with an algorithm demonstrated to be orders of magnitude faster than classical gradient-based methods 43. In complement, we investigated the essential uniqueness of sparse matrix factorizations, both with two factors 50 and in a multilayer setting 49. We combined these results with a focus on so-called butterfly supports to achieve a multilayer sparse factorization algorithm able to learn fast transforms essentially at the cost of a single matrix-vector multiplication, with exact recovery guarantees 30. A first version of the corresponding algorithm was incorporated in the FA$\mu $ST software library (see Section 7) and is undergoing software optimizations to further speed it up.
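The Hadamard transform gives a concrete picture of such factorizations: the $2^n \times 2^n$ Hadamard matrix factorizes exactly into $n$ butterfly factors with two nonzeros per row and per column. A NumPy check of this classical identity (illustrative only; the FA$\mu $ST library implements far more general factorizations):

```python
import numpy as np

n = 3                                # transform size N = 2**n = 8
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])

# n butterfly factors, each with exactly 2 nonzeros per row and per column
factors = [np.kron(np.kron(np.eye(2 ** i), H2), np.eye(2 ** (n - 1 - i)))
           for i in range(n)]

# their product is the dense 8x8 Hadamard matrix, yet applying the sparse
# factors one by one costs O(N log N) operations instead of O(N^2)
P = np.linalg.multi_dot(factors)
```

Learning such sparse factors from the dense matrix alone is the hard inverse problem studied above.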
8.3 Statistical learning, dimension reduction, and privacy preservation
8.3.1 Theoretical and algorithmic foundations of compressive learning: sketches, kernels, and optimal transport
Participants: Rémi Gribonval, Titouan Vayer, Ayoub Belhadji, Vincent Schellekens, Luc Giffon, Léon Zheng.
Collaborations with Gilles Blanchard (Univ. Paris-Saclay), Yann Traonmilin (IMB, Bordeaux), Laurent Jacques and Vincent Schellekens (U. Louvain, Belgium), Nicolas Keriven (GIPSA-lab, Grenoble), Phil Schniter (Ohio State Univ.), and with Valeo AI
The compressive learning framework proposes to deal with the large scale of datasets by compressing them into a single vector of generalized random moments, called a sketch, from which the learning task is then performed. Our papers establishing statistical guarantees on the generalization error of this procedure, first in a general abstract setting illustrated on PCA 6, then for the specific case of compressive $k$-means and compressive Gaussian mixture modeling 16, were published this year. A tutorial paper on the principle and the main guarantees of compressive learning was also finalized and published this year 7.
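Concretely, a sketch is an empirical average of random features, for instance random Fourier features, so an arbitrarily large dataset collapses into a single m-dimensional vector. A schematic NumPy illustration with arbitrary parameter choices (not code from the cited works):

```python
import numpy as np

def sketch(X, Omega):
    """Empirical sketch: dataset-averaged random Fourier features, a single
    m-dimensional vector whatever the number of samples."""
    return np.exp(1j * X @ Omega).mean(axis=0)

rng = np.random.default_rng(0)
d, m = 2, 50
Omega = rng.normal(size=(d, m))      # random frequencies (arbitrary scale here)

X1 = rng.normal(size=(20000, d))     # two datasets drawn from the same law ...
X2 = rng.normal(size=(30000, d))
s1, s2 = sketch(X1, Omega), sketch(X2, Omega)
# ... yield nearby sketches, from which learning can proceed without the data
```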
Theoretical guarantees in compressive learning fundamentally rely on comparing certain metrics between probability distributions. This year we established conditions under which the Wasserstein distance can be controlled by Maximum Mean Discrepancy (MMD) norms, which are defined using reproducing kernel Hilbert spaces. Based on the relations between the MMD and the Wasserstein distance, we provided new guarantees for compressive statistical learning by introducing and studying the concept of Wasserstein learnability of the learning task 47.
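For intuition, the MMD between two samples admits a simple plug-in estimate from kernel evaluations. A generic NumPy sketch with an arbitrary Gaussian-kernel bandwidth (not code from 47):

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Plug-in (V-statistic) estimate of the squared MMD with a Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
Y_same = rng.normal(size=(500, 2))       # same law as X
Y_far = rng.normal(size=(500, 2)) + 2.0  # shifted law
# mmd2(X, Y_same) is near 0, while mmd2(X, Y_far) is clearly positive
```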
Dimension reduction in compressive learning exploits the ability to approximate certain kernels by finite-dimensional quadratures. We studied a quadrature, proposed by Ermakov and Zolotukhin in the sixties, through the lens of kernel methods. The nodes of this quadrature rule follow the distribution of a determinantal point process, while the weights are defined through a linear system, similarly to the optimal kernel quadrature. We showed how these two classes of quadrature are related, and we derived a tractable formula for the expected value of the squared worst-case integration error, on the unit ball of an RKHS, of the former quadrature. In particular, this formula involves the eigenvalues of the corresponding kernel and leads to an improvement on the existing theoretical guarantees of the optimal kernel quadrature with determinantal point processes 28.
From a more empirical perspective, we pursued our efforts to make sketching for compressive learning more versatile and efficient. This notably involved exploring how to adapt the sketching pipeline to exploit optical processing units (OPUs) for energy-efficient fast random projections, and investigating the ability to exploit sketching in large-scale deep self-supervised learning scenarios.
Finally, making the connection between graph learning and sketching methods, we have recently started to study the practical possibility and theoretical limitations of using a sketching technique to estimate the precision matrix involved in the Graphical Lasso algorithm.
8.3.2 Privacy preservation
Participants: Rémi Gribonval, Clément Lalanne.
Collaborations with Aurélien Garivier (UMPA, ENS de Lyon) and SARUS, Paris; and with Laurent Jacques and Vincent Schellekens (U. Louvain, Belgium), Florimond Houssiau and Yves-Alexandre de Montjoye (Imperial College, London)
In the context of the Ph.D. thesis of Antoine Chatalic (in the PANAMA team in Rennes, defended last year), we showed 13 that a simple perturbation of the sketching mechanism with additive noise is sufficient to satisfy differential privacy, a well-established formalism for defining and quantifying the privacy of a random mechanism. We combined this with a feature subsampling mechanism, which reduces the computational cost without damaging privacy. The framework was applied to the tasks of Gaussian modeling, $k$-means clustering and principal component analysis (PCA), for which sharp privacy bounds were derived. Empirically, the quality (for subsequent learning) of the compressed representation produced by this mechanism is strongly related to the induced noise level, for which we gave analytical expressions.
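The principle of the mechanism can be conveyed in a few lines: release the averaged random features with additive Laplace noise calibrated to their sensitivity. The per-coordinate calibration below is deliberately naive and for illustration only; the privacy accounting in 13 is finer:

```python
import numpy as np

def private_sketch(X, Omega, epsilon, rng):
    """Release a sketch with additive Laplace noise. The calibration below is
    naive and purely illustrative (the cited work uses a sharper analysis)."""
    n, m = X.shape[0], Omega.shape[1]
    Z = np.exp(1j * X @ Omega)          # random Fourier features, |Z_ij| = 1
    s = Z.mean(axis=0)
    sensitivity = 2.0 / n               # changing one record moves each coordinate by at most 2/n
    scale = sensitivity * m / epsilon   # naive composition across the m coordinates
    noise = rng.laplace(scale=scale, size=m) + 1j * rng.laplace(scale=scale, size=m)
    return s + noise

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 2))
Omega = rng.normal(size=(2, 20))
s_priv = private_sketch(X, Omega, epsilon=1.0, rng=rng)
```

Since the noise level only depends on n, m and epsilon, the distortion vanishes as the dataset grows, which is the qualitative behavior described above.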
This year we also addressed the problem of differentially private estimation of multiple quantiles (MQ) of a dataset, a key building block in modern data analysis. We showed how to implement the non-smoothed Inverse Sensitivity (IS) mechanism for this specific problem and established that the resulting method is closely related to the recent JointExp algorithm, sharing in particular the same computational complexity and a similar efficiency. We also identified pitfalls of the two approaches on certain peaked distributions and proposed a fix. Numerical experiments showed that the empirical efficiency of the resulting algorithms is similar to that of the non-smoothed methods on non-degenerate datasets, but orders of magnitude better on real datasets with repeated values.
8.4 Large-scale convex and nonconvex optimization
Participants: Elisa Riccietti, Paulo Gonçalves, Federico Grillini, Giovanni Seraghiti, Guillaume Lauga.
Collaboration with Nelly Pustelnik (CNRS, ENS de Lyon)
In the context of the Ph.D. work of Guillaume Lauga and the previous internships of Federico Grillini and Giovanni Seraghiti, this year we started to study the combination of proximal methods and multiresolution analysis for large-scale image denoising problems. The use of multiresolution schemes, such as wavelet transforms, is not new in imaging and is widely used to define regularization strategies. We studied the use of such techniques to a wider extent, as a way to accelerate the proximal algorithms usually employed for the solution of such problems, and to make them usable for problems of very large dimension. In the fashion of multilevel gradient methods 3, popular techniques in smooth optimization, we designed multilevel versions of proximal algorithms employing wavelet transforms as transfer operators.
In the context of the internship of Hugo Gouttenegre, we also pursued our investigations in 3D MRI super-resolution using nonconvex optimization models. We provided a 3D extension of the Discrete Mumford-Shah model, allowing us to jointly perform 3D super-resolution and a segmentation of the high-resolution volume. New phantom acquisitions were conducted, including a high-resolution ground-truth volume, to evaluate the quantitative performance of this approach. A numerical toolbox is under construction.
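The elementary proximal building block used in this line of work, soft-thresholding in an orthonormal wavelet basis, can be sketched in a few lines (a one-level Haar transform for illustration; the actual work relies on richer multilevel wavelet transforms):

```python
import numpy as np

def haar(x):
    """One-level orthonormal Haar analysis (even-length signals)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def ihaar(a, d):
    """Inverse of `haar` (perfect reconstruction)."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def wavelet_prox(y, lam):
    """Prox of lam * ||W y||_1 for the orthonormal Haar W: analyze,
    soft-threshold the details, synthesize."""
    a, d = haar(y)
    d = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)
    return ihaar(a, d)

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, -0.5, 2.0], 64)     # piecewise-constant signal
noisy = clean + 0.2 * rng.normal(size=clean.size)
denoised = wavelet_prox(noisy, lam=0.25)         # error drops below that of `noisy`
```

Multilevel schemes accelerate such iterations by performing most of the work on coarse versions of the problem obtained through the same transfer operators.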
9 Bilateral contracts and grants with industry
9.1 Bilateral grants with industry

CIFRE contract with Facebook Artificial Intelligence Research, Paris on Deep neural networks for large scale learning
Participants: Rémi Gribonval, Pierre Stock.
Duration: 3 years (2018-2021)
Partners: Facebook Artificial Intelligence Research, Paris; Inria Grenoble
Funding: Facebook Artificial Intelligence Research, Paris; ANRT
The overall objective of this thesis 40 was to design, analyze and test large scale machine learning algorithms with applications to computer vision and natural language processing. A major challenge was to design compression techniques able to replace complex and deep neural networks with much more compact ones while preserving the capacity of the initial network to achieve the targeted task.

CIFRE contract with Valeo AI, Paris on Frugal learning with applications to autonomous vehicles
Participants: Rémi Gribonval, Elisa Riccietti, Léon Zheng.
Duration: 3 years (2021-2024)
Partners: Valeo AI, Paris; ENS de Lyon
Funding: Valeo AI, Paris; ANRT
Context: Chaire IA AllegroAssai 10.1.1
The overall objective of this thesis is to develop machine learning methods exploiting lowdimensional sketches and sparsity to address perceptionbased learning tasks in the context of autonomous vehicles.

Funding from Facebook Artificial Intelligence Research, Paris
Participants: Rémi Gribonval.
Duration: 4 years (2021-2024)
Partners: Facebook Artificial Intelligence Research, Paris; ENS de Lyon
Funding: Facebook Artificial Intelligence Research, Paris
Context: Chaire IA AllegroAssai 10.1.1
This funding supports the research conducted in the framework of the Chaire IA AllegroAssai.
10 Partnerships and cooperations
10.1 National initiatives
10.1.1 ANR IA Chaire: AllegroAssai
Participants: Rémi Gribonval [correspondant], Paulo Gonçalves, Elisa Riccietti, Marion Foare, Mathurin Massias, Léon Zheng, Quoc-Tung Le, Antoine Gonon, Titouan Vayer, Ayoub Belhadji, Luc Giffon, Clément Lalanne.
Duration of the project: 2020 - 2024.
AllegroAssai focuses on the design of machine learning techniques endowed both with statistical guarantees (to ensure their performance, fairness, privacy, etc.) and provable resource-efficiency (e.g. in terms of bytes and flops, which impact energy consumption and hardware costs), robustness in adversarial conditions for secure performance, and the ability to leverage domain-specific models and expert knowledge. The vision of AllegroAssai is that the versatile notion of sparsity, together with sketching techniques using random features, is key in harnessing these fundamental tradeoffs. The first pillar of the project is to investigate sparsely connected deep networks, to understand the tradeoffs between the approximation capacity of a network architecture (ResNet, U-net, etc.) and its “trainability” with provably-good algorithms. A major endeavor is to design efficient regularizers promoting sparsely connected networks with provable robustness in adversarial settings. The second pillar revolves around the design and analysis of provably-good end-to-end sketching pipelines for versatile and resource-efficient large-scale learning, with controlled complexity driven by the structure of the data and that of the task rather than by the dataset size.
10.1.2 ANR DataRedux
Participants: Paulo Gonçalves [correspondant], Rémi Gribonval, Marion Foare, Israel Campero Jurado.
Duration of the project: February 2020 - January 2024.
DataRedux puts forward an innovative framework to reduce networked data complexity while preserving its richness, by working at intermediate scales (“mesoscales”). Our objective is to reach a fundamental breakthrough in the theoretical understanding and representation of rich and complex networked datasets for use in predictive data-driven models. Our main novelty is to define network reduction techniques in relation with the dynamical processes occurring on the networks. To this aim, we will develop methods to go from data to information and knowledge at different scales in a human-accessible way by extracting structures from high-resolution, diverse and heterogeneous data. Our methodology will involve the identification of the most relevant subparts of time-resolved datasets while remapping the remaining parts of the system, the simultaneous structural-temporal representation of time-varying networks, the development of parsimonious data representations extracting meaningful structures at mesoscales (“mesostructures”), and the building of models of interactions that include mesostructures of various types. Our aim is to identify data aggregation methods at intermediate scales and new types of data representations in relation with dynamical processes, that carry the richness of information of the original data, while keeping their most relevant patterns for their manageable integration in data-driven numerical models for decision making and actionable insights.
10.1.3 ANR Darling
Participants: Paulo Gonçalves [correspondant], Rémi Gribonval, Marion Foare.
Duration of the project: February 2020 - January 2024.
This project meets the compelling demand of developing a unified framework for distributed knowledge extraction and learning from streaming graph data using in-network adaptive processing, together with powerful recent mathematical tools to analyze and improve performance. The project draws on three major parallel directions of research: network diffusion, signal processing on graphs, and random matrix theory, which DARLING aims at unifying into a holistic dynamic network processing framework. Signal processing on graphs has recently provided a comprehensive set of basic instruments allowing for filtering or sampling signals on graphs, but it is limited to static signal models. Network diffusion, on the contrary, inherently assumes models of time-varying graphs and signals, and has pursued the path of proposing and understanding the performance of distributed dynamic inference on graphs. Both areas are however limited by their assumption of either deterministic graph or signal models, thereby entailing often inflexible and difficult-to-grasp theoretical results. Random matrix theory for random graph inference has taken a parallel road, explicitly studying the performance of graph-based algorithms (e.g., spectral clustering methods), thereby identifying limitations and providing directions of improvement. The ambition of DARLING lies in the development of network diffusion-type algorithms anchored in the graph signal processing lore, rather than heuristics, which shall systematically be analyzed and improved through random matrix analysis on elementary graph models. We believe that this original communion of as yet remote areas has the potential to pave the way to the emergence of the critically needed future field of dynamical network signal processing.
10.1.4 GDR ISIS project MOMIGS
Participants: Elisa Riccietti [correspondant], Marion Foare, Trieu Vy Le Hoang, Paulo Gonçalves.
Duration of the project: September 2021 - September 2023.
This project focuses on large-scale optimization problems in signal processing and imaging. A natural way to tackle them is to exploit their underlying structure and to represent them at different resolution levels. The use of multiresolution schemes, such as wavelet transforms, is not new in imaging and is widely used to define regularization strategies. However, such techniques could be used to a wider extent, in order to accelerate the optimization algorithms used for their solution and to tackle large datasets. Techniques based on such ideas are usually called multilevel optimization methods and are well known and widely used in the field of smooth optimization, especially for the solution of partial differential equations. Optimization problems arising in image reconstruction are however usually nonsmooth and thus solved by proximal methods. Such approaches are efficient for small-scale problems but remain computationally demanding for problems with very high-dimensional data. The ambition of this project is thus to combine proximal methods and multiresolution analysis, not only as a regularization, but as a way to accelerate proximal algorithms.
10.2 Regional initiatives
10.2.1 Labex CominLabs LeanAI
Participants: Elisa Riccietti [correspondant], Rémi Gribonval.
Duration of the project: October 2021 - December 2024.
Collaboration with Silviu-Ioan Filip and Olivier Sentieys (IRISA, Rennes), Anastasia Volkova (LS2N, Nantes)
The LeanAI project aims at developing a comprehensive and flexible framework for mixed-precision optimization. The project is motivated by the increasing demand for intelligent edge devices capable of on-site learning, driven by the recent developments in deep learning. The realization of such systems is a massive challenge due to the limited resources available in an embedded context and the massive training costs of state-of-the-art deep neural networks. In this project we attack these problems at the arithmetic and algorithmic levels by exploring the design of new mixed numerical precision algorithms, energy-efficient and capable of offering increased performance in a resource-restricted environment. The ambition of the project is to develop more flexible and faster techniques than existing reduced-precision gradient algorithms, by determining the best numeric formats to be used in combination with this kind of method, rules to dynamically adjust the precision, and extensions of such techniques to second-order and multilevel strategies.
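The core arithmetic issue can be illustrated in a couple of lines: accumulating many small increments in half precision stalls once the accumulator's spacing (ulp) exceeds the increment, while a wider accumulator stays accurate. A toy NumPy illustration, not tied to any LeanAI code:

```python
import numpy as np

vals = np.full(10000, 1e-3, dtype=np.float16)   # 10^4 increments of about 10^-3

s16 = np.float16(0.0)
for v in vals:
    s16 = s16 + v    # half-precision accumulator: once the running sum reaches
                     # about 4.0, its spacing (~0.004) exceeds the increment and
                     # every further addition rounds to zero

s64 = vals.astype(np.float64).sum()             # wide accumulator: close to 10.0
```

Mixed-precision algorithms keep the cheap low-precision operations where they are harmless and promote only the accuracy-critical accumulations.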
10.2.2 Labex Emerging Topics
Participants: Marion Foare [correspondant].
Duration of the project: April 2019 - December 2022.
Collaboration with Eric Van Reeth (Creatis, Lyon)
Magnetic Resonance Imaging (MRI) is an extremely important anatomical and functional imaging technique, widely used by physicians to establish medical diagnoses. Acquiring high-resolution volumes is desirable in many clinical and preclinical applications, to accurately adapt the treatment to the measurements or simply to obtain highly resolved images of small anatomical structures. However, directly acquiring high-resolution volumes implies: i) long scanning times, which are often not tolerated by patients and children, and ii) images with a low signal-to-noise ratio. Therefore, it is of particular interest to quickly acquire low-resolution volumes, and enhance their resolution as a post-processing step.
This project aims at developing new techniques to build super-resolution images for 3D MRI that can take into account more physical constraints, such as prior medical knowledge, and at deriving efficient machine learning algorithms suited for large-scale data, with theoretical guarantees. In particular, we explore specialized piecewise-smooth variational reconstruction methods, like the Mumford-Shah (MS) and Total Variation (TV) variants, and adapt their fitting terms as well as their optimization algorithms. The main originality of this project is to combine resolution enhancement and segmentation in MRI (usually performed as two distinct post-processing steps), starting from the MS model, a seminal tool originally designed for image denoising and segmentation tasks. This approach will improve the quality of the reconstruction both in terms of sharpness and smoothness, and help doctors reach a diagnosis.
11 Dissemination
Participants: Rémi Gribonval, Paulo Gonçalves, Marion Foare, Elisa Riccietti.
11.1 Promoting scientific activities
11.1.1 Scientific events: organisation
Member of the organizing committees
 Rémi Gribonval, Journées de Statistiques 2022, Lyon
11.1.2 Scientific events: selection
Member of the conference program committees
 Rémi Gribonval, GRETSI 2022.
 Rémi Gribonval, 10th SMAI-SIGMA conference on Curves and Surfaces
 Rémi Gribonval, 2022 Spring School on Machine Learning (EPIT22), CIRM, Spring 2022
 Rémi Gribonval, MiLYON Spring School on Machine Learning, Saint-Étienne, Spring 2021 (postponed to 2022, then cancelled due to Covid-19)
 Rémi Gribonval, Conference on Mathematics for Audio and Music Signal Processing, CIRM 2021 (cancelled due to Covid-19)
11.1.3 Journal
Member of the editorial boards
 Rémi Gribonval: Associate Editor for Constructive Approximation (Springer), Senior Area Editor for the IEEE Transactions on Signal Processing
11.1.4 Invited talks
 R. Gribonval was a keynote speaker at the international conference EUSIPCO 2021 and an invited speaker at the national conference CAp21.
 E. Riccietti was an invited speaker at 13th JLESC Workshop.
11.1.5 Leadership within the scientific community
 Rémi Gribonval is a member of the Scientific Committee of RT MIA (formerly GDR MIA)
 Rémi Gribonval is a member of the Comité de Liaison SIGMA-SMAI
 Rémi Gribonval is a member of the Cellule ERC of INS2I, mentoring for ERC candidates in the STIC domain
11.1.6 Scientific expertise
 Rémi Gribonval is a member of the Scientific Advisory Board (vicepresident) of the Acoustics Research Institute of the Austrian Academy of Sciences, and a member of the Commission Prospective of Institut de Mathématiques de Marseille
 Rémi Gribonval, member of the EURASIP Special Area Team (SAT) on Signal and Data Analytics for Machine Learning (SiGDML) since 2015.
11.1.7 Research administration
 Paulo Gonçalves is Deputy Scientific Director of the new research center of Inria in Lyon.
11.2 Teaching  Supervision  Juries
11.2.1 Teaching
 Master:
 Rémi Gribonval: Inverse problems and high dimension; Mathematical foundations of deep neural networks; Concentration of measure in probability and highdimensional statistical learning; M2, ENS Lyon
 Engineer cycle (Bac+3 to Bac+5):
 Paulo Gonçalves: Traitement du Signal (déterministe, aléatoire, numérique), Estimation statistique. 80 heures Eq. TD. CPE Lyon, France
 Marion Foare: Traitement du Signal (déterministe, numérique, aléatoire), Traitement et analyse d'images, Optimisation, Compression, Projets. 280 heures Eq. TD. CPE Lyon, France
 Elisa Riccietti: M1 course Optimization and Approximation (28h) and 19h of tutor responsibility at ENS Lyon
11.2.2 Supervision
All PhD students of the team are cosupervised by at least one team member. In addition, some team members are involved in the cosupervision of students hosted in other labs.
 Marion Foare is involved in the cosupervision of the Ph.D. of Hoang Trieu Vy Le since 2021 (Laboratoire de Physique, Lyon).
 Elisa Riccietti is involved in the cosupervision of the Ph.D. of Valentin Mercier since 2021 (IRIT, Toulouse).
The following PhDs were defended in DANTE in 2021:
 Pierre Stock, Université de Lyon 40 (funded by ANRT and Facebook Artificial Intelligence Research; cosupervisors Rémi Gribonval and Hervé Jégou), Efficiency and Redundancy in Deep Learning Models: Theoretical Considerations and Practical Applications, April 2021
 Amélie Barbe, Université de Lyon (funded by ACADEMICS project, IdexLyon; cosupervisors Paulo Gonçalves, Pierre Borgnat and Marc Sebban), DiffusionWasserstein distances for attributed graphs, December 2021
11.2.3 Juries
Members of the DANTE team participated in the following juries
 PhD juries: Alexandre Araujo (Université Paris IX Dauphine, member); Marina Kremé (AixMarseille Université, chair); Pierre Humbert (University ParisSaclay, chair); Raphaël Truffet (Université de Rennes I, chair); Vincent Schellekens (Université Catholique de Louvain, reviewer), PhD defence session at University of Florence (member)
12 Scientific production
12.1 Major publications

1
article
${L}^{}$ PageRank for SemiSupervised Learning.Applied Network Science4572019, 120  2 miscImplicit differentiation for fast hyperparameter selection in nonsmooth convex learning.May 2021
 3 articleOn a multilevel Levenberg–Marquardt method for the training of artificial neural networks and its application to the solution of partial differential equations.Optimization Methods and Software2020, 126
 4 articleSemiLinearized Proximal Alternating Minimization for a Discrete MumfordShah Model.IEEE Transactions on Image ProcessingOctober 2019, 113
 5 articleTranslation on Graphs: An Isometric Shift Operator.IEEE Signal Processing Letters2212December 2015, 24162420
 6 articleCompressive Statistical Learning with Random Feature Moments.Mathematical Statistics and LearningMain novelties between version 1 and version 2: improved concentration bounds, improved sketch sizes for compressive kmeans and compressive GMM that now scale linearly with the ambient dimensionMain novelties of version 3: all content on compressive clustering and compressive GMM is now developed in the companion paper hal02536818; improved statistical guarantees in a generic framework with illustration of the improvements on compressive PCA2021
 7 articleSketching Data Sets for Large-Scale Learning: Keeping only what you need.IEEE Signal Processing Magazine 38(5), September 2021, 12-36
 8 articleApproximation spaces of deep neural networks.Constructive Approximation2020
 9 articleDual Extrapolation for Sparse Generalized Linear Models.Journal of Machine Learning Research 21(234), October 2020, 1-33
 10 articleFourier could be a Data Scientist: from Graph Fourier Transform to Signal Processing on Graphs.Comptes Rendus. Physique, September 2019, 474-488
12.2 Publications of the year
International journals
 11 articleSubspace Detours Meet Gromov-Wasserstein.Algorithms 14, December 2021, 1-29
 12 articleAssigning Channels in WLANs with Channel Bonding: A Fair and Robust Strategy.Computer Networks, June 2021, 1-17
 13 articleCompressive Learning with Privacy Guarantees.Information and Inference2021
 14 articleSparsity-based audio declipping methods: selected overview, new algorithms, and large-scale evaluation.IEEE/ACM Transactions on Audio, Speech and Language Processing 29, 2021, 1174-1187
 15 articleCompressive Statistical Learning with Random Feature Moments.Mathematical Statistics and Learning 3(2), August 2021, 113-164
 16 articleStatistical Learning Guarantees for Compressive Clustering and Compressive Mixture Modeling.Mathematical Statistics and Learning 3(2), August 2021, 165-257
 17 articleSketching Data Sets for Large-Scale Learning: Keeping only what you need.IEEE Signal Processing Magazine 38(5), September 2021, 12-36
 18 articleApproximation spaces of deep neural networks.Constructive Approximation2021
 19 articleA distributed antenna orientation solution for optimizing communications in a fleet of UAVs.Computer Communications 181, January 2022, 102-115
 20 articleCovert Cycle Stealing in a Single FIFO Server.ACM Transactions on Modeling and Performance Evaluation of Computing Systems, 2021, 1-35
 21 articleOne-dimensional Service Networks and Batch Service Queues.Queueing Systems, 2021
 22 articleAn Embedding of ReLU Networks and an Analysis of their Identifiability.Constructive Approximation2022
 23 articleA Markov Model for Performance Evaluation of Channel Bonding in IEEE 802.11.Ad Hoc Networks2021
 24 articleDynamics of cascades on burstiness-controlled temporal networks.Nature Communications 12(1), December 2021, 1-9
 25 articleOn the Stochastic Analysis of a Quantum Entanglement Distribution Switch.IEEE Transactions on Quantum EngineeringFebruary 2021
International peerreviewed conferences
 26 inproceedingsUse of a Weighted Conflict Graph in the Channel Selection Operation for WiFi Networks.WONS 2021 - 16th Wireless On-demand Network Systems and Services Conference, Virtual Conference, France, March 2021, 1-4
 27 inproceedingsOptimization of the Diffusion Time in Graph Diffused-Wasserstein Distances: Application to Domain Adaptation.ICTAI 2021 - 33rd IEEE International Conference on Tools with Artificial Intelligence, Virtual Conference, France, IEEE, November 2021, 1-8
 28 inproceedingsAn analysis of Ermakov-Zolotukhin quadrature using kernels.NeurIPS 2021 - 35th Conference on Neural Information Processing Systems, Virtual-only Conference, Australia, December 2021, 1-17
 29 inproceedingsStructured Support Exploration For Multilayer Sparse Matrix Factorization.ICASSP 2021 - IEEE International Conference on Acoustics, Speech and Signal Processing, Toronto, Ontario, Canada, IEEE, June 2021, 1-5
 30 inproceedingsFast learning of fast transforms, with guarantees.ICASSP 2022 - IEEE International Conference on Acoustics, Speech and Signal Processing, Singapore, Singapore, May 2022
 31 inproceedingsFast Multiscale Diffusion on Graphs.ICASSP 2022 - IEEE International Conference on Acoustics, Speech and Signal Processing, Singapore, Singapore, May 2022
 32 inproceedingsAutomated and Reproducible Application Traces Generation for IoT Applications.Q2SWinet 2021 - 17th ACM Symposium on QoS and Security for Wireless and Mobile Networks, Alicante, Spain, ACM, November 2021, 1-8
 33 inproceedingsTraining with Quantization Noise for Extreme Model Compression.International Conference on Learning Representations 2021, Vienna, Austria, May 2021
 34 inproceedingsOnline Graph Dictionary Learning.ICML 2021 - 38th International Conference on Machine Learning, Virtual Conference, United States, 2021
Conferences without proceedings
 35 inproceedingsImproving the Spatial Reuse in IEEE 802.11ax WLANs: A Multi-Armed Bandit Approach.MSWiM'21 - 24th ACM Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems, Alicante, Spain, ACM, November 2021
 36 inproceedingsExtension des Modèles de Flocking aux Environnements avec Obstacles et Communications Dégradées.JFSMA, Bordeaux, France, June 2021
 37 inproceedingsExtension of Flocking Models to Environments with Obstacles and Degraded Communications.IROS 2021 - IEEE/RSJ International Conference on Intelligent Robots and Systems, Prague / Virtual, Czech Republic, IEEE, July 2021, 1-7
 38 inproceedingsTowards a Throughput and Energy Efficient Association Strategy for WiFi/LiFi Heterogeneous Networks.PEWASUN 2021 - 18th ACM International Symposium on Performance Evaluation of Wireless Ad Hoc, Sensor, and Ubiquitous Networks, Alicante, Spain, ACM, November 2021
Doctoral dissertations and habilitation theses
 39 thesisFrom WiFi Performance Evaluation to Controlled Mobility in Drone Networks.Université Claude Bernard Lyon 1January 2021
 40 thesisEfficiency and Redundancy in Deep Learning Models: Theoretical Considerations and Practical Applications.Université de Lyon, April 2021
Reports & preprints
 41 miscSketching Datasets for Large-Scale Learning (long version).January 2021
 42 miscCovert Cycle Stealing in a Single FIFO Server (extended version).May 2021
 43 miscSpurious Valleys, Spurious Minima and NP-hardness of Sparse Matrix Factorization With Fixed Support.May 2021
 44 miscAnalysis of a Tripartite Entanglement Distribution Switch.January 2022
 45 miscNonsmooth convex optimization to estimate the Covid-19 reproduction number space-time evolution with robustness against low quality data.September 2021
 46 miscA theory of optimal convex regularization for low-dimensional recovery.December 2021
 47 miscControlling Wasserstein distances by Kernel norms with application to Compressive Statistical Learning.December 2021
 48 miscSemi-relaxed Gromov-Wasserstein divergence with applications on graphs.October 2021
 49 miscEfficient Identification of Butterfly Sparse Matrix Factorizations.February 2022
 50 miscIdentifiability in Two-Layer Sparse Matrix Factorization.November 2021
12.3 Other
Softwares
 51 softwareCode for the paper "Structured Support Exploration For Multilayer Sparse Matrix Factorization".February 2022, BSD 3-Clause License
 52 softwareCode for reproducible research - Fast Multiscale Diffusion on Graphs.February 2022, BSD 3-Clause License
 53 softwareCode for reproducible research - Fast learning of fast transforms, with guarantees.February 2022, BSD 3-Clause License
12.4 Cited publications
 54 articleSpatial and temporal regularization to estimate COVID-19 reproduction number R(t): Promoting piecewise smoothness via convex optimization.PLoS ONE 15(8), August 2020, e0237901
 55 bookConvex analysis and monotone operator theory in Hilbert spaces.408Springer2011

 56 bookHolger Boche, Robert Calderbank, Gitta Kutyniok, Jan Vybiral, Compressed Sensing and its Applications.Series: Applied and Numerical Harmonic Analysis, MATHEON Workshop 2013, ISSN: 2296-5009, Cham, Birkhäuser, 2015, URL: http://books.google.cz/books?id=6KoYCgAAQBAJ&pg=PA340&dq=intitle:Compressed+Sensing+and+its+Applications&hl=&cd=1&source=gbs_api
 57 articleExact Reconstruction using Beurling Minimal Extrapolation.arXiv.org, arXiv: 1103.4951v2, March 2011, URL: http://arxiv.org/abs/1103.4951v2
 58 articleCompressive Learning with Privacy Guarantees.Information and Inference2021
 59 incollectionProximal splitting methods in signal processing.Fixed-point algorithms for inverse problems in science and engineering, Springer, 2011, 185-212
 60 articleDistributed Adaptive Learning of Graph Signals.IEEE Transactions on Signal Processing 65(16), 2017
 61 bookCooperative and Graph Signal Processing: Principle and Applications.Academic Press2018
 62 bookSparse and Redundant Representations.From Theory to Applications in Signal and Image ProcessingSpringer2010, URL: http://books.google.fr/books?id=d5b6lJI9BvAC&printsec=frontcover&dq=sparse+and+redundant+representations&hl=&cd=1&source=gbs_api
 63 bookA Mathematical Introduction to Compressive Sensing.New York, NY, Springer, 2013, URL: http://link.springer.com/10.1007/978-0-8176-4948-7
 64 articleSparse inverse covariance estimation with the graphical lasso.Biostatistics 9(3), 2008, 432-441
 65 articleCompressive Statistical Learning with Random Feature Moments.Mathematical Statistics and Learning2021, URL: https://hal.inria.fr/hal01544609
 66 articleStructured Variable Selection with Sparsity-Inducing Norms.Journal of Machine Learning Research 12, Publisher: Massachusetts Institute of Technology Press, 2011, 2777-2824, URL: http://hal.inria.fr/inria-00377732
 67 articleA unified Framework for Structured Graph Learning via Spectral Constraints.Journal of Machine Learning Research 21, 2020, 1-60

 68 inproceedingsRandom features for large-scale kernel machines.NIPS, 2007. Replace the implicit mapping of the kernel trick by an explicit nonlinear mapping from R^d to R^D using a *randomized* feature map approximating the kernel inner product with a finite-dimensional inner product. Specialized to shift-invariant kernels, with D = O(d eps^-2 log 1/eps^2) for precision eps. First randomized map: random sinusoids with frequency distribution = Fourier transform of the kernel; second map = random binning (not smooth). Claim 1 = uniform convergence of Fourier features in terms of the kernel inner product (not kernel distance?), on a compact subset M
 69 articleSubsampled Newton methods.Math. Program. 174, 2019, 293-326
 70 articleThe Emerging Field of Signal Processing on Graphs.IEEE Signal Processing Magazine, May 2013, 83-98
 71 articleHilbert Space Embeddings and Metrics on Probability Measures.JMLR 11, 2010, 1517-1561. Theorem 21 relates the Wasserstein metric to the Kernel metric. URL: http://dblp.org/rec/journals/jmlr/SriperumbudurGFSL10
 72 inproceedingsAnd the Bit Goes Down: Revisiting the Quantization of Neural Networks.ICLR 2020 - Eighth International Conference on Learning Representations, Addis Ababa, Ethiopia, April 2020, 1-11
 73 articleDictionary Learning.IEEE Signal Processing Magazine 28(2), 27-38, URL: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5714407
 74 inproceedingsE2train: Training state-of-the-art CNNs with over 80% energy savings.Advances in Neural Information Processing Systems, 2019, 5138-5150
 75 inproceedingsSWALP: Stochastic weight averaging in low precision training.International Conference on Machine Learning, 2019, 7015-7024
 76 articleADAHESSIAN: An adaptive second order optimizer for machine learning.arXiv preprint arXiv:2006.007192020
 77 inproceedingsNonconvex Sparse Graph Learning under Laplacian Constrained Graphical Model.34th Conference on Neural Information Processing Systems2020