Keywords
Computer Science and Digital Science
 A3.3.3. Big data analysis
 A3.4. Machine learning and statistics
 A3.5.2. Recommendation systems
 A6.2. Scientific computing, Numerical Analysis & Optimization
 A8.2. Optimization
 A8.6. Information theory
 A8.12. Optimal transport
 A9.2. Machine learning
 A9.3. Signal analysis
Other Research Topics and Application Domains
 B1.1.4. Genetics and genomics
 B4. Energy
 B7.2.1. Smart vehicles
 B9.1.2. Serious games
 B9.5.3. Physics
 B9.5.5. Mechanics
 B9.5.6. Data science
 B9.6.10. Digital humanities
1 Team members, visitors, external collaborators
Research Scientists
 Marc Schoenauer [Team leader, Inria, Senior Researcher, HDR]
 Michele Alessandro Bucci [Inria, Starting Research Position]
 Guillaume Charpiat [Inria, Researcher]
 Alessandro Ferreira Leite [Inria, Advanced Research Position, from Feb 2021]
 Cyril Furtlehner [Inria, Researcher, HDR]
 Cécile Germain [Univ. Paris-Saclay, Emeritus, HDR]
 Flora Jay [CNRS, Researcher]
 Michèle Sebag [CNRS, Senior Researcher, HDR]
 Paola Tubaro [CNRS, Senior Researcher, HDR]
Faculty Members
 Philippe Caillou [Univ. Paris-Saclay, Associate Professor]
 Aurélien Decelle [Univ. Paris-Saclay, Associate Professor, on leave at Univ. Madrid]
 Isabelle Guyon [Univ. Paris-Saclay, Professor, Chaire Inria]
 François Landes [Univ. Paris-Saclay, Associate Professor]
Post-Doctoral Fellows
 Olivier Bui [Inria]
 Shuyu Dong [Inria, from Oct 2021]
 Saumya Jetley [Inria, until Sep 2021]
 Tamon Nakano [Inria]
 Shiyang Yan [Inria, from Nov 2021]
PhD Students
 Eleonore Bartenlian [MESR, until Sep 2021]
 Victor Berger [Inria, until Oct 2021]
 Guillaume Bied [Univ. Paris-Saclay]
 Leonard Blier [Facebook, CIFRE]
 Tony Bonnaire [Univ. Paris-Saclay, until Oct 2021]
 Balthazar Donon [RTE, CIFRE]
 Victor Estrade [Inria, until May 2021]
 Loris Felardos Saint Jean [Inria]
 Giancarlo Fissore [Univ. Paris-Saclay]
 Julien Girard [CEA, until Oct 2021]
 Armand Lacombe [Inria]
 Wenzhuo Liu [IRT System X]
 Zhengying Liu [Inria, until Nov 2021]
 Nizam Makdoud [Thalès, CIFRE, until July 2021]
 Emmanuel Menier [IRT System X]
 Matthieu Nastorg [Inria]
 Herilalaina Rakotoarison [Inria]
 Theophile Sanchez [Univ. Paris-Saclay]
 Vincenzo Schimmenti [CNRS]
 Nilo Schwencke [Univ. Paris-Saclay]
 Haozhe Sun [Univ. Paris-Saclay, from Feb 2021]
 Marion Ullmo [Univ. Paris-Saclay, until Dec 2021]
 Manon Verbockhaven [Univ. Paris-Saclay, from Dec 2021]
 Elinor Wahal [ENS Paris-Saclay, until Nov 2021]
 Assia Wirth [Univ. Paris-Saclay, from April 2021]
Technical Staff
 Victor Alfonso Naya [Univ. Paris-Saclay, Engineer]
 Adrien Pavao [Univ. Paris-Saclay, Engineer]
 Sebastien Treguer [Inria, Engineer]
Interns and Apprentices
 Ghania Benalioua [Inria, from Apr 2021 until Oct 2021]
 Remy Hosseinkhan [Inria, from May 2021 until Sep 2021]
 Jiangnan Huang [Inria, from Apr 2021 until Aug 2021]
 Pierre Jobic [ENS Paris-Saclay, until Apr 2021]
 Mandie Joulin [Inria, until Feb 2021]
 Alice Lacan [Inria, from Apr 2021 until Aug 2021]
 Wenhao Li [Inria, from May 2021 until Aug 2021]
 Daniel Montoya Vasquez [Inria, from Apr 2021 until Jul 2021]
 Rafael Munoz Gomez [Inria, from Apr 2021 until Aug 2021]
 Manh Nguyen [Inria, from Apr 2021 until Aug 2021]
 Francesco Pezzicoli [Inria, from Feb 2021 until Jul 2021]
 Maria Romero Goldar [Inria, from May 2021 until Oct 2021]
 Hugo Sonnery [Inria, from Apr 2021 until Aug 2021]
 Michael Vaccaro [Inria, until Feb 2021]
 Manon Verbockhaven [Inria, from May 2021 until Nov 2021]
 Mathurin Videau [Inria, from Apr 2021 until Sep 2021]
 Alex Westbrook [Inria, from Apr 2021 until Jul 2021]
Administrative Assistant
 Maeva Jeannot [Inria]
External Collaborators
 Jean Cury [Univ. Paris-Saclay]
 Jeremy Guez [Univ. de Provence]
 Mykola Liashuha [Univ. Paris-Saclay, until Apr 2021]
 Thibault Monsel [CentraleSupélec, from Jun 2021]
 Yann Ollivier [Facebook, HDR]
 Olivier Teytaud [Facebook, HDR]
 Burak Yelmen [Univ. Paris-Saclay, from Sep 2021]
2 Overall objectives
2.1 Presentation
Since its creation in 2003, TAO's activities had evolved constantly but slowly, as old problems were solved and new applications arose, bringing new fundamental issues to tackle. Recent abrupt progress in Machine Learning (and in particular in Deep Learning) has greatly accelerated these changes within the team. This change of pace coincided with more practical changes in the TAO ecosystem: following Inria's 12-year rule, the team formally ended in December 2016. The new team TAU (for TAckling the Underspecified) was proposed, and formally created in July 2019. At the same time, important staff changes took place, which also justified even sharper changes in the team's focus. During 2018, the second year of this new era for the (remaining) members of the team, our research topics stabilized around the final version of the TAU project.
Following the dramatic changes in TAU staff during the years 2016-2017 (see the team's 2017 activity report for details), research on continuous optimization has definitely faded out in TAU (while the research axis on hyper-parameter tuning now focuses on Machine Learning algorithms); the Energy application domain has slightly changed direction under Isabelle Guyon's supervision (Section 4.2), after the completion of the work started by Olivier Teytaud; and a few new directions have emerged, around the robustness of ML systems (Section 3.1.2). The other research topics have been continued, as described below.
3 Research program
3.1 Toward Good AI
As discussed in 155, and in the recent collaborative survey 131, the topic of ethical AI was non-existent until 2010, was laughed at in 2016, and became a hot topic in 2017, as the disruptive impact of AI on the fabric of life (travel, education, entertainment, social networks, politics, to name a few) became unavoidable 148, together with its expected impact on the nature and number of jobs. As of now, it seems that the risk of a new AI Winter might arise from legal and societal issues. While privacy is now recognized as a civil right in Europe, it is feared that the GAFAM, BATX and others can already capture a sufficient fraction of human preferences and their dynamics to achieve their commercial and other goals, and build a Brave New Big Brother (BNBB): a system that is openly beneficial to many, covertly nudging, and possibly dictatorial.
The ambition of Tau is to mitigate the BNBB risk along several intertwined dimensions, and to build: i) causal and explainable models; ii) fair data and models; iii) provably robust models.
3.1.1 Causal modeling and biases
Participants: Isabelle Guyon, Michèle Sebag, Philippe Caillou, Paola Tubaro
The extraction of causal models, a long-standing goal of AI 152, 126, 153, became a strategic issue as the usage of learned models gradually shifted from prediction to prescription in recent years. This evolution, following Auguste Comte's vision of science (Savoir pour prévoir, afin de pouvoir: know in order to foresee, in order to act), indeed reflects the exuberant optimism about AI: Knowledge enables Prediction; Prediction enables Control. However, while predictive models can be based on correlations, prescriptions can only be based on causal models.
Among the research applications concerned with causal modeling, predictive modeling or collaborative filtering at Tau are the projects described in Section 4.1 (see also Section 3.4), studying the relationships between: i) the educational background of persons and job openings (FUI project JobAgile and DataIA project Vadore); ii) the quality of life at work and the economic performance indicators of enterprises (ISN Lidex project Amiqap) 127; iii) the nutritional items bought by households (at the granularity of the barcode) and their health status, as approximated from their body-mass index (IRS UPSaclay Nutriperso); iv) the actual offer of restaurants and their scores on online rating systems. In these projects, a wealth of data is available (though hardly sufficient for applications ii, iii and iv), and there is little doubt that these data reflect the imbalances and biases of the world as it is, ranging from gender to racial to economic prejudices. Preventing the learned models from perpetuating such biases is essential to deliver an AI endowed with common decency.
In some cases, the bias is known; for instance, the cohorts in the Nutriperso study are more well-off than the average French population, and the Kantar database includes explicit weights to address this bias through importance sampling. In other cases, the bias can only be guessed; for instance, the companies for which Secafi data are available hardly correspond to a uniform sample, as these data have been gathered upon the request of the company trade unions.
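The importance-sampling correction mentioned above can be sketched in a few lines. All numbers below (group shares, purchase amounts) are made up for illustration; the mechanism, reweighting each sampled unit by the ratio of its population share to its sample share, is the standard one.

```python
# Toy illustration (hypothetical numbers) of correcting sampling bias with
# importance weights, as done via the explicit weights of the Kantar database.
# Population: 30% low-income, 70% high-income households; the sample
# over-represents high-income households (20% vs 80%).

population_share = {"low": 0.3, "high": 0.7}

# Biased sample: (group, quantity of interest, e.g. weekly sugar purchases)
sample = [("low", 9.0)] * 10 + [("high", 4.0)] * 40

n = len(sample)
sample_share = {g: sum(1 for s, _ in sample if s == g) / n for g in population_share}

# Importance weight of each group: population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in population_share}

naive = sum(y for _, y in sample) / n
weighted = (sum(weights[g] * y for g, y in sample)
            / sum(weights[g] for g, _ in sample))

# True population mean here is 0.3*9 + 0.7*4 = 5.5; the naive estimate is 5.0.
print(naive, weighted)
```

The weighted estimate recovers the population mean exactly in this noiseless toy setting; with real panel data it only removes the bias due to the known over- or under-representation of groups.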
3.1.2 Robustness of Learned Models
Participants: Guillaume Charpiat, Marc Schoenauer, Michèle Sebag
Due to their outstanding performance, deep neural networks, and more generally machine learning-based decision-making systems, referred to as MLs in the following, have raised hopes in recent years of achieving breakthroughs in critical systems, ranging from autonomous vehicles to defense. The main pitfall for such applications lies in the lack of guarantees for the robustness of MLs.
Specifically, MLs are used when the mainstream software design process does not apply, that is, when no formal specification of the target software behavior is available and/or when the system is embedded in an open, unpredictable world. The extensive body of knowledge developed to deliver guarantees about mainstream software (ranging from formal verification, model checking and abstract interpretation to testing, simulation and monitoring) thus does not directly apply either. Another weakness of MLs regards their dependency on the amount and quality of the training data, as their performance is sensitive to slight perturbations of the data distribution. Such perturbations can occur naturally due to domain or concept drift (e.g. due to a change in light intensity or a scratch on a camera lens); they can also result from intentional malicious attacks, a.k.a. adversarial examples 171.
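The adversarial-example phenomenon is easiest to see on a linear model, where the well-known fast gradient sign attack has a closed form. The weights and input below are hypothetical toy values, not taken from any model in the report.

```python
# Minimal illustration of an adversarial perturbation on a hypothetical
# linear classifier sign(w.x + b): the gradient of the score w.r.t. the input
# is w itself, so the most damaging l-infinity perturbation of size eps is
# x' = x - y * eps * sign(w)   (the fast gradient sign method, linear case).

w = [0.5, -1.0, 2.0]   # weights of a (made-up) trained linear classifier
b = 0.1
x = [1.0, -0.5, 0.2]   # a correctly classified input, with label y = +1
y = 1

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v)) + b

def sgn(t):
    return 1 if t > 0 else -1

eps = 0.5              # small per-coordinate budget, enough to flip the sign
x_adv = [xi - y * eps * sgn(wi) for xi, wi in zip(x, w)]

print(score(x))        # positive: correctly classified
print(score(x_adv))    # negative: the decision has flipped
```

The score drops by exactly eps times the l1 norm of w, which is why high-dimensional models can be fooled by perturbations that are individually tiny in each coordinate.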
These downsides, which currently prevent the dissemination of MLs in safety-critical systems (SCS), call for a considerable amount of research to understand when, and to what extent, an ML system can be certified to provide the desired level of guarantees.
Julien Girard's PhD (CEA scholarship), defended in Dec. 2020 53 and co-supervised by Guillaume Charpiat and Zakaria Chihani (CEA), is devoted to the extension of abstract interpretation to deep neural nets, and to the formal characterization of the transition kernel from input to output space achieved by a DNN (robustness by design, coupled with formally assessing the coverage of the training set). This approach is tightly related to the inspection and opening of black-box models, aimed at characterizing the patterns in the input instances responsible for a decision, another step toward explainability.
3.2 Hybridizing numerical modeling and learning systems
Participants: Michele Alessandro Bucci, Guillaume Charpiat, Cécile Germain, Isabelle Guyon, Marc Schoenauer, Michèle Sebag
In sciences and engineering, human knowledge is commonly expressed in closed form, through equations or mechanistic models characterizing how a natural or social phenomenon, or a physical device, will behave/evolve depending on its environment and external stimuli, under some assumptions and up to some approximations. The field of numerical engineering, and the simulators based on such mechanistic models, are at the core of most approaches to understand and analyze the world, from solid mechanics to computational fluid dynamics, from chemistry to molecular biology, from astronomy to population dynamics, from epidemiology and information propagation in social networks to economy and finance.
Most generally, numerical engineering supports the simulation, and when appropriate the optimization and control, of the phenomena under study, although several sources of discrepancy might adversely affect the results, ranging from the underlying assumptions and simplifying hypotheses in the models, to systematic experiment errors, to statistical measurement errors (not to mention numerical issues). This knowledge and know-how are materialized in millions of lines of code, capitalizing the expertise of academic and industrial labs. These software packages have been steadily extended over decades, modeling new and more fine-grained effects through layered extensions, making them increasingly harder to maintain, extend and master. Another difficulty is that complex systems most often call for hybrid (pluridisciplinary) models, as they involve many components interacting along several time and space scales, hampering their numerical simulation.
At the other extreme, machine learning offers the opportunity to model phenomena from scratch, using any available data gathered through experiments or simulations. Recent successes of machine learning in computer vision, natural language processing and games, to name a few, have demonstrated the power of such agnostic approaches and their efficiency in terms of prediction 130, inverse problem solving 150, and sequential decision making 181, 182, despite their lack of any "semantic" understanding of the universe. Even before these successes, Anderson claimed that the data deluge [might make] the scientific method obsolete 74, as if a reasonable option might be to throw away the existing equational or software bodies of knowledge and let Machine Learning rediscover all models from scratch. Such a claim is hampered, among other things, by the fact that not all domains offer a wealth of data, as any academic involved in an industrial collaboration around data has discovered.
Another approach is considered in Tau, investigating how existing mechanistic models and related simulators can be partnered with ML algorithms: i) to achieve the same goals with the same methods with a gain of accuracy or time; ii) to achieve new goals; iii) to achieve the same goals with new methods.
Toward more robust numerical engineering: In domains where satisfactory mechanistic models and simulators are available, ML can contribute to improving their accuracy or usability. A first direction is to refine or extend the models and simulators to better fit the empirical evidence. The goal is to finely account for the different biases and uncertainties attached to the available knowledge and data, distinguishing the different types of known unknowns. Such known unknowns include the model hyper-parameters (coefficients), the systematic errors due to e.g. experimental imperfections, and the statistical errors due to e.g. measurement errors. A second approach is based on learning a surrogate model of the phenomenon under study that incorporates domain knowledge from the mechanistic model (or its simulation). See Section 8.5 for case studies.
A related direction, typically when considering black-box simulators, aims to learn a model of the error, or equivalently a post-processor of the software. The discrepancy between simulated and empirical results, referred to as the reality gap 137, can be tackled in terms of domain adaptation 80, 110. Specifically, the source domain here corresponds to the simulated phenomenon, offering a wealth of inexpensive data, and the target domain corresponds to the actual phenomenon, with rare and expensive data; the goal is to devise accurate target models using the source data and models.
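The "learn a model of the error" idea can be sketched on a one-dimensional toy problem. The simulator, the real measurements and the linear residual model below are all made up for illustration; real post-processors would use richer regression models on rare target data.

```python
# Sketch of learning a post-processor for a biased simulator (all data
# hypothetical): a cheap simulator predicts y_sim(x); a handful of "real"
# measurements are used to fit the residual y_real - y_sim with ordinary
# least squares, here a simple 1-D line r(x) = a*x + c.

def simulator(x):            # abundant but biased source model
    return 2.0 * x

# Rare target data; the underlying "reality" here is y = 2.25*x + 0.30.
real_xy = [(0.0, 0.30), (1.0, 2.55), (2.0, 4.80), (3.0, 7.05)]

xs = [x for x, _ in real_xy]
rs = [y - simulator(x) for x, y in real_xy]   # observed residuals
n = len(xs)
mx, mr = sum(xs) / n, sum(rs) / n
a = (sum((x - mx) * (r - mr) for x, r in zip(xs, rs))
     / sum((x - mx) ** 2 for x in xs))
c = mr - a * mx

def corrected(x):            # simulator + learned error model
    return simulator(x) + a * x + c

print(corrected(4.0))        # close to the "real" 2.25*4 + 0.30 = 9.30
```

Only the (low-dimensional) residual is learned from the few target points, while the simulator carries the bulk of the predictive structure, which is precisely the appeal of the post-processor formulation.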
Extending numerical engineering: ML, using both experimental and numerical data, can also be used to tackle new goals, beyond the current state of the art of standard approaches. Inverse problems are such goals: identifying the parameters or the initial conditions of phenomena for which the model is neither differentiable nor amenable to the adjoint state method.
A slightly different kind of inverse problem is that of recovering the ground truth when only noisy data is available. This problem can be formulated as a search for the simplest model explaining the data. The question then becomes to formulate and efficiently exploit such a simplicity criterion.
Another goal can be to model the distribution of given quantiles for some system: The challenge is to exploit available data to train a generative model, aimed at sampling the target quantiles.
Examples tackled in TAU are detailed in Section 8.5. Note that the "Cracking the Glass Problem", described in Section 8.2.3, is yet another instance of a similar problem.
Datadriven numerical engineering: Finally, ML can also be used to sidestep numerical engineering limitations in terms of scalability, or to build a simulator emulating the resolution of the (unknown) mechanistic model from data, or to revisit the formal background.
When the mechanistic model is known and sufficiently accurate, it can be used to train a deep network on an arbitrary set of (space, time) samples, resulting in a meshless numerical approximation of the model 166, supporting differentiable programming 134 by construction.
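The meshless idea can be sketched as follows. A cubic-to-quartic polynomial stands in for the deep network so that the least-squares problem is solvable in closed form; the approaches cited above (e.g. 166) instead train a neural net by stochastic gradient descent on the same residual loss, but the structure (random collocation points, no mesh, residual of the equation as the loss) is the same.

```python
# Meshless, collocation-based resolution of an ODE, sketching the idea behind
# physics-informed approximators (a degree-4 polynomial stands in for the deep
# network so the problem stays linear; real methods train a net by SGD).
# Problem: u'(x) = -u(x) on [0,1], u(0) = 1  (exact solution: exp(-x)).
# Ansatz enforcing the boundary condition: u(x) = 1 + sum_k a_k x^k.

import random

random.seed(0)
pts = [random.random() for _ in range(50)]     # random collocation points

K = 4
def g(x):   # residual u' + u = 1 + sum_k a_k g_k(x), with g_k = k x^(k-1) + x^k
    return [k * x ** (k - 1) + x ** k for k in range(1, K + 1)]

# Normal equations for least squares on sum over pts of residual^2.
M = [[sum(g(x)[i] * g(x)[j] for x in pts) for j in range(K)] for i in range(K)]
v = [-sum(g(x)[i] for x in pts) for i in range(K)]

# Tiny Gaussian elimination (the Gram matrix is well-behaved here).
for i in range(K):
    for j in range(i + 1, K):
        f = M[j][i] / M[i][i]
        M[j] = [mj - f * mi for mj, mi in zip(M[j], M[i])]
        v[j] -= f * v[i]
a = [0.0] * K
for i in reversed(range(K)):
    a[i] = (v[i] - sum(M[i][j] * a[j] for j in range(i + 1, K))) / M[i][i]

def u(x):
    return 1 + sum(a[k] * x ** (k + 1) for k in range(K))

print(u(1.0))   # should be close to exp(-1) ~ 0.3679
```

No mesh or time-stepping scheme appears anywhere: the equation is enforced only at randomly drawn points, which is what makes the approach dimension-agnostic when the polynomial is replaced by a network.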
When no mechanistic model is sufficiently efficient, the model must be identified from the data alone. Genetic programming has been used to identify systems of ODEs 163, through the identification of invariant quantities from data, as well as for the direct identification of control commands of nonlinear complex systems, including some chaotic systems 99. Another recent approach uses two deep neural networks, one for the state of the system, the other for the equation itself 156. The critical issues for both approaches include scalability and the explainability of the resulting models. This line of research will benefit from TAU's unique mixed expertise in Genetic Programming and Deep Learning.
Finally, in the realm of signal processing (SP), the question is whether and how deep networks can be used to revisit mainstream feature extraction based on Fourier decomposition, wavelet and scattering transforms 87. E. Bartenlian's PhD (started Oct. 2018), co-supervised by M. Sebag and F. Pascal (CentraleSupélec), focusing on musical audio-to-score translation 165, inspects the effects of supervised training, taking advantage of the fact that convolution masks can be initialized and analyzed in terms of frequency.
3.3 Learning to learn
According to Ali Rahimi's test-of-time award speech at NIPS 17, current ML algorithms have become a form of alchemy. Competitive testing and empirical breakthroughs are gradually becoming mandatory for a contribution to be acknowledged; an increasing part of the community adopts trial and error as its main scientific methodology, and theory is lagging behind practice. For some, this style of progress is typical of technological and engineering revolutions; others call for consolidated and well-understood theoretical advances, saving the time wasted in trying to build upon hardly reproducible results.
Basically, while practical achievements have often surpassed expectations, there are caveats along three dimensions. Firstly, excellent performance does not imply that the model has captured what was to be learned, as shown by the phenomenon of adversarial examples. Following Ian Goodfellow, some well-performing models might be compared to Clever Hans, the horse that was able to solve mathematical exercises using non-verbal cues from its teacher 123; it is the purpose of Pillar I to alleviate the Clever Hans trap (Section 3.1).
Secondly, some major advances, e.g. related to the celebrated adversarial learning 116, 110, establish proofs of concept more than a sound methodology, with limited reproducibility due to: i) the computational power required for training (often beyond the reach of academic labs); ii) numerical instabilities (witnessed by the random seeds found hard-coded in the released code); iii) insufficiently documented experimental settings. What works, why and when is still a matter of speculation, although a better understanding of the limitations of the current state of the art is acknowledged to be a priority. Again following Ali Rahimi, simple experiments, simple theorems are the building blocks that help us understand more complicated systems. Along this line, 146 proposes toy examples to demonstrate and understand the failures of convergence of gradient-descent adversarial learning.
Thirdly, and most importantly, the reported achievements rely on carefully tuned learning architectures and hyper-parameters. The sensitivity of the results to the selection and calibration of algorithms has been identified since the late 80s as a key ML bottleneck, and the field of automatic algorithm selection and calibration, referred to as AutoML or Auto* in the following, is at the forefront of ML.
Tau aims to contribute to the evolution of ML toward a more mature stage along three dimensions. In the short term, the research done in Auto* will be pursued (Section 3.3.1). In the medium term, an information-theoretic perspective will be adopted to capture the data structure and to calibrate the learning algorithm depending on the nature and amount of the available data (Section 3.3.2). In the longer term, our goal is to leverage the methodologies forged in statistical physics to understand and control the trajectories of complex learning systems (Section 3.3.3).
3.3.1 Auto*
Participants: Isabelle Guyon, Marc Schoenauer, Michèle Sebag
The so-called Auto* task, concerned with selecting a (quasi) optimal algorithm and its hyper-parameters for the problem instance at hand, has remained a key issue in ML for the last three decades 82, as well as in optimization at large 122, including combinatorial optimization and constraint satisfaction 129, 115 and continuous optimization 78. This issue, tackled by several European projects over the decades, governs knowledge transfer to industry, given the shortage of data scientists. It becomes even more crucial as models grow more complex and their training requires more computational resources. This has motivated several international challenges devoted to AutoML 121 (see also Section 3.4), including the AutoDL challenge series 138 launched in 2019 (see also Section 8.6).
Several approaches have been used to tackle Auto* in the literature, and TAU has been particularly active in several of them. Meta-learning builds a surrogate performance model, estimating the performance of an algorithm configuration on any problem instance characterized by its meta-feature values 160, 115, 77, 78, 114. Collaborative filtering, considering that a problem instance "likes better" an algorithm configuration yielding a better performance, learns to recommend good algorithms to problem instances 168, 143. Bayesian optimization proceeds by alternately building a surrogate model of algorithm performance on the problem instance at hand, and optimizing it to select the next configuration to evaluate 107. This last approach is currently the prominent one; as shown in 143, the meta-features developed for AutoML are hardly relevant, hampering both meta-learning and collaborative filtering. The design of better features is another long-term research direction, in which TAU has recently been, and still is, very active 98. A more recent approach used in TAU 157 extends the Bayesian Optimization approach with a Multi-Armed Bandit algorithm to generate the full Machine Learning pipeline, competing with the famed Auto-SKLearn 107 (see Section 8.2.1).
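The surrogate-based loop at the heart of Bayesian optimization can be sketched on a toy problem. Here a quadratic interpolant stands in for the Gaussian-process or random-forest surrogates of real systems, and the "validation error" is a made-up deterministic function with its optimum at learning rate 1e-2; the point is the fit-surrogate / optimize-surrogate / evaluate loop itself.

```python
# Toy sketch of surrogate-based (Bayesian-optimization-style) hyper-parameter
# search. The quadratic surrogate and the made-up "validation error" below
# are illustrative stand-ins, not the models used by real AutoML systems.

def val_error(log_lr):                  # hypothetical validation error
    return (log_lr + 2.0) ** 2 + 0.1    # optimum at learning rate 1e-2

candidates = [-5.0 + 0.1 * i for i in range(51)]    # log10 learning rates
observed = {x: val_error(x) for x in (-5.0, -2.5, 0.0)}  # initial design

for _ in range(3):   # SMBO loop: fit surrogate, suggest, evaluate
    # Fit an exact quadratic through the 3 best observed points (Lagrange).
    (x0, y0), (x1, y1), (x2, y2) = sorted(observed.items(),
                                          key=lambda p: p[1])[:3]
    def surrogate(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    nxt = min((c for c in candidates if c not in observed), key=surrogate)
    observed[nxt] = val_error(nxt)      # run the suggested configuration

best = min(observed, key=observed.get)
print(10 ** best)                       # best learning rate found, ~1e-2
```

Each iteration spends one (expensive) evaluation where the (cheap) surrogate predicts the best outcome; real systems add an exploration term to the acquisition function, omitted here for brevity.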
3.3.2 Information theory: adjusting model complexity and data fitting
Participants: Guillaume Charpiat, Marc Schoenauer, Michèle Sebag
In the 60s, Kolmogorov and Solomonoff provided a well-grounded theory for building (probabilistic) models that best explain the available data 161, 117, that is, the shortest programs able to generate these data. Such programs can then be used to generate further data or to answer specific questions (interpreted as missing values in the data). Deep learning, from this viewpoint, efficiently explores a space of computation graphs, described by its hyper-parameters (network structure) and parameters (weights). Network training amounts to optimizing these parameters, namely, navigating the space of computational graphs to find a network, as simple as possible, that explains the past observations well.
This vision is at the core of variational auto-encoders 128, which directly optimize a bound on the Kolmogorov complexity of the dataset. More generally, variational methods provide quantitative criteria to identify superfluous elements (edges, units) in a neural network, which can potentially be used for structural optimization of the network (Leonard Blier's PhD, started Oct. 2018).
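The "shortest program explaining the data" view can be made concrete with a two-part minimum-description-length comparison. The sequence and the encoding conventions below are simplifications chosen for illustration (in particular the classic half-log-n bits per real-valued parameter); the principle, model cost plus data cost under the model, is the standard one.

```python
# Toy two-part MDL comparison: encode a sequence either as raw uniform
# symbols, or as a model (a learned symbol distribution, whose parameter
# also costs bits) plus the data encoded under that model.

import math

data = "aaaaabaaaabbaaaaabaa"          # 20 symbols over alphabet {a, b}
n = len(data)
alphabet = sorted(set(data))

# Model 0: no structure; uniform code, log2(|alphabet|) bits per symbol.
bits_uniform = n * math.log2(len(alphabet))

# Model 1: Bernoulli model; cost = parameter cost + Shannon code length.
p = data.count("b") / n                # fitted probability of "b"
param_bits = 0.5 * math.log2(n)        # classic (1/2) log2 n per parameter
nll_bits = -sum(math.log2(p if s == "b" else 1 - p) for s in data)
bits_bernoulli = param_bits + nll_bits

print(bits_uniform, bits_bernoulli)    # the skewed model compresses better
```

The better model is the one minimizing the total code length; a model too rich for the data would lose on the parameter term, which is exactly the trade-off the variational criteria above quantify for neural networks.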
The same principles apply to unsupervised learning, aimed to find the maximum amount of structure hidden in the data, quantified using this informationtheoretic criterion.
Known invariances in the data can be exploited to guide the model design (e.g., translation invariance leads to convolutional structures, and LSTMs have been shown to enforce invariance to time affine transformations of the data sequence 172). Scattering transforms exploit similar principles 87. A general theory of how to detect unknown invariances in the data, however, is currently lacking.
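The link between translation invariance and convolutional structure rests on a simple equivariance fact, checkable in a few lines on a toy 1-D signal (circular boundary conditions, arbitrary made-up values):

```python
# Minimal check of the invariance argument: a (circular) convolution commutes
# with translation (equivariance), which is why translation-invariant data
# naturally leads to convolutional architectures.

def conv(signal, kernel):
    n, k = len(signal), len(kernel)
    return [sum(kernel[j] * signal[(i + j) % n] for j in range(k))
            for i in range(n)]

def shift(signal, t):                  # circular shift by t positions
    return signal[-t:] + signal[:-t]

x = [0.0, 1.0, 3.0, 2.0, 0.0, -1.0]    # arbitrary toy signal
w = [1.0, -2.0, 1.0]                   # arbitrary toy kernel

lhs = conv(shift(x, 2), w)             # translate, then convolve
rhs = shift(conv(x, w), 2)             # convolve, then translate

print(lhs == rhs)                      # True: convolution is equivariant
```

Detecting an unknown invariance would amount to discovering, from data alone, which operator plays the role of `shift` here; no general theory for this exists yet, as noted above.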
The information-theoretic view of Kolmogorov complexity suggests that key program operations (composition, recursion, use of predefined routines) should intervene when searching for a good computation graph. One possible framework for exploring the space of computation graphs with such operations is Genetic Programming; it is interesting to see that evolutionary computation has appeared in the last few years among the best candidates for exploring the space of deep learning structures 159, 135. Other approaches might proceed by combining simple models into more powerful ones, e.g. using Context Tree Weighting 177 or switch distributions 101. Another option is to formulate neural architecture design as a reinforcement learning problem 79; the value of the building blocks (predefined routines) might be estimated using e.g. Monte-Carlo Tree Search. A key difficulty is the computational cost of retraining neural nets from scratch upon modifying their architecture; an option might be to use neutral initializations to support warm-restart.
3.3.3 Analyzing and Learning Complex Systems
Participants: Cyril Furtlehner, Aurélien Decelle, François Landes, Michèle Sebag
Methods and criteria from statistical physics have been widely used in ML. In the early days, the capacity of Hopfield networks (associative memories defined by the attractors of an energy function) was investigated using the replica formalism 72. Restricted Boltzmann machines likewise define a generative model built upon an energy function trained from the data. Along the same lines, Variational Auto-Encoders can be interpreted as systems relating the free energy of the distribution, the information about the data, and the entropy (the degree of ignorance about the micro-states of the system) 176. A key promise of the statistical physics perspective and the Bayesian view of deep learning is to harness the tremendous growth of model size (billions of weights in recent machine translation networks), and to make such models sustainable through e.g. posterior drop-out 147, weight quantization and probabilistic binary networks 142. Such "informational cooling" of a trained deep network can reduce its size by several orders of magnitude while preserving its performance.
Statistical physics is among the key areas of expertise of Tau, originally represented only by Cyril Furtlehner, and later strengthened by Aurélien Decelle's and François Landes' arrivals in 2014 and 2018. Ongoing studies are conducted along several directions.
Generative models are most often expressed in terms of a Gibbs distribution $P[S] \propto \exp(-E[S])$, where the energy $E$ involves a sum of building blocks modelling the interactions among variables. This formalization makes it natural to use mean-field methods of statistical physics, and the associated inference algorithms, to both train and exploit such models. The difficulty is to find a good trade-off between the richness of the structure and the efficiency of mean-field approaches. One direction of research pursued in TAU 108, in the context of traffic forecasting, is to account for the presence of cycles in the interaction graph, adapting inference algorithms to graphs with cycles while constraining the graphs to remain compatible with mean-field inference.
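For a pairwise energy, the mean-field machinery can be sketched as follows (a textbook illustration, not a description of the specific models of 108):

```latex
% Naive mean-field sketch for a pairwise Gibbs model (textbook illustration).
% Energy with pairwise interactions, binary spins s_i = \pm 1:
\[
  E[S] \;=\; -\sum_{i<j} J_{ij}\, s_i s_j \;-\; \sum_i h_i\, s_i .
\]
% The factorized approximation Q(S) = \prod_i q_i(s_i) minimizing the
% variational free energy yields self-consistency (fixed-point) equations
% on the magnetizations m_i = \mathbb{E}_Q[s_i]:
\[
  m_i \;=\; \tanh\!\Big( \beta \big( h_i + \textstyle\sum_{j \neq i} J_{ij}\, m_j \big) \Big),
\]
% solved by (damped) fixed-point iteration. Message-passing refinements such
% as belief propagation are exact only on trees, hence the difficulty raised
% by cycles in the interaction graph.
```

The trade-off mentioned above is visible here: richer interaction structures (loops, higher-order couplings) break the assumptions under which such closed-form self-consistency updates are valid.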
Another direction, explored in TAO/TAU in recent years, is based on the definition and exploitation of self-consistency properties, enforcing principled divide-and-conquer resolutions. In the particular case of the message-passing Affinity Propagation algorithm for instance 180, self-consistency imposes the invariance of the solution when handled at different scales, making it possible to characterize the critical value of the penalty and other hyper-parameters, in closed form for simple data distributions, and empirically otherwise 109.
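For reference, the message-passing structure of Affinity Propagation is compact enough to sketch in full. The update rules below are the standard responsibility/availability equations from the literature; the two tiny 2-D clusters, the preference value and the damping factor are made up for illustration.

```python
# Minimal Affinity Propagation sketch: alternating "responsibility" (R) and
# "availability" (A) message updates, standard equations with damping.
# Data, preference and damping are toy choices for illustration only.

pts = [(0.0, 0.0), (0.0, 0.2), (0.2, 0.0), (5.0, 5.0), (5.0, 5.2), (5.2, 5.0)]
n = len(pts)

def sim(p, q):                          # similarity: negative squared distance
    return -((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

S = [[sim(pts[i], pts[k]) for k in range(n)] for i in range(n)]
for k in range(n):
    S[k][k] = -1.0                      # "preference" (penalty) hyper-parameter

R = [[0.0] * n for _ in range(n)]
A = [[0.0] * n for _ in range(n)]
damp = 0.5

for _ in range(100):
    for i in range(n):                  # responsibility updates
        for k in range(n):
            m = max(A[i][kk] + S[i][kk] for kk in range(n) if kk != k)
            R[i][k] = damp * R[i][k] + (1 - damp) * (S[i][k] - m)
    for k in range(n):                  # availability updates
        for i in range(n):
            pos = sum(max(0.0, R[ii][k]) for ii in range(n) if ii not in (i, k))
            new = pos if i == k else min(0.0, R[k][k] + pos)
            A[i][k] = damp * A[i][k] + (1 - damp) * new

labels = [max(range(n), key=lambda k: A[i][k] + R[i][k]) for i in range(n)]
print(labels)   # expected: one exemplar per cluster
```

The preference value (here -1.0) plays exactly the role of the penalty whose critical value is characterized in the work cited above: raising it creates more exemplars, lowering it merges clusters.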
A more recent research direction examines the quantity of information in a (deep) neural net along the random matrix theory framework 90. It is addressed in Giancarlo Fissore's PhD, and is detailed in Section 8.2.3.
Finally, we note the recent surge in using ML to address fundamental physics problems: from turbulence to high-energy physics and soft matter (with amorphous materials at its core) 132, as well as astrophysics and cosmology. TAU's dual expertise in Deep Networks and in statistical physics places it in an ideal position to contribute significantly to this domain and to shape the methods that will be used by the physics community in the future. In that direction, the PhD theses of Marion Ullmo and Tony Bonnaire, applying methods from deep learning and statistical physics to the task of inferring the structure of the cosmic web, have shown great success, with recent results discussed in Section 8.2.3. François Landes' recent arrival in the team makes TAU a unique place for such interdisciplinary research, thanks to his collaborators from the Simons Collaboration "Cracking the Glass Problem" (gathering 13 statistical physics teams at the international level). This project is detailed in Section 8.2.3.
Independently, François Landes is actively collaborating with statistical physicists (Alberto Rosso, LPTMS, Univ. Paris-Saclay) and with physicists at the frontier with geophysics (Eugenio Lippiello, Second Univ. of Naples) 136, 154. A CNRS grant (80Prime) funds a shared PhD student (Vincenzo Schimmenti), at the frontier between seismicity and ML (Alberto Rosso, Marc Schoenauer and François Landes).
3.4 Organisation of Challenges
Participants: Cécile Germain, Isabelle Guyon, Marc Schoenauer, Michèle Sebag
Challenges have been an important driver of Machine Learning research for many years, and TAO members have played important roles in the organization of many such challenges: Michèle Sebag was head of the challenge programme in the Pascal European Network of Excellence (2005-2013); Isabelle Guyon, as mentioned, was the PI of many challenges, ranging from causation challenges 119 to AutoML 120. The Higgs challenge 71, the most attended Kaggle challenge ever, was jointly organized by TAO (C. Germain), LAL-IN2P3 (D. Rousseau and B. Kégl) and I. Guyon (not yet at TAO), in collaboration with CERN and Imperial College.
TAU was also particularly involved in the ChaLearn Looking At People (LAP) challenge series in Computer Vision, in collaboration with the University of Barcelona 105, including the Job Candidate Screening Coopetition 102, the Real Versus Fake Expressed Emotion Challenge (ICCV 2017) 174, the Large-scale Continuous Gesture Recognition Challenge (ICCV 2017) 174, and the Large-scale Isolated Gesture Recognition Challenge (ICCV 2017) 174.
Other challenges have been organized in 2020, or are planned for the near future, detailed in Section 8.6. In particular, many of them now run on the Codalab platform, managed by Isabelle Guyon and maintained at LISN.
4 Application domains
4.1 Computational Social Sciences
Participants: Philippe Caillou, Isabelle Guyon, Michèle Sebag, Paola Tubaro
Collaboration: JeanPierre Nadal (EHESS); Marco Cuturi, Bruno Crépon (ENSAE); Thierry Weil (Mines); JeanLuc Bazet (RITM)
Computational Social Sciences (CSS) studies social and economic phenomena, ranging from technological innovation to politics, from media to social networks, from human resources to education, from inequalities to health. It combines perspectives from different scientific disciplines, building upon the tradition of computer simulation and modeling of complex social systems 112 on the one hand, and data science on the other hand, fueled by the capacity to collect and analyze massive amounts of digital data.
The emerging field of CSS raises formidable challenges along three dimensions. Firstly, the definition of the research questions, the formulation of hypotheses and the validation of the results require a tight pluridisciplinary interaction and dialogue between researchers from different backgrounds. Secondly, the development of CSS is a touchstone for ethical AI. On the one hand, CSS gains ground in major, data-rich private companies; on the other hand, public researchers around the world are engaging in an effort to use it for the benefit of society as a whole 133. The key technical difficulties related to data and model biases, and to self-fulfilling prophecies, have been discussed in Section 3.1. Thirdly, CSS is not only a matter for scientists: it is essential that civil society participate in the science of society 167.
Tao has been involved in CSS for the last five years, and its activities have been strengthened thanks to P. Tubaro's and I. Guyon's expertise, respectively in sociology and economics, and in causal modeling. Details are given in Section 8.3.
4.2 Energy Management
Participants: Isabelle Guyon, Marc Schoenauer, Michèle Sebag
Collaboration: Rémy Clément, Antoine Marot, Patrick Panciatici (RTE), Vincent Renault (Artelys)
Energy Management has been an application domain of choice for Tao since the late 2000s, with main partners SME Artelys (METIS Ilab INRIA; ADEME project POST; ongoing ADEME project NEXT), RTE (See.4C European challenge; two CIFRE PhDs), and, since Oct. 2019, IFPEN. The goals concern i) optimal planning over several spatio-temporal scales, from investments on the continental Europe/North Africa grid at the decade scale (POST) to daily planning of local or regional power networks (NEXT); ii) monitoring and control of the French grid, enforcing the prevention of power breaks (RTE); iii) improvement of home-made numerical methods using data-intensive learning in all aspects of IFPEN activities (as described in Section 3.2).
The daily maintenance of power grids requires building approximate predictive models on top of any given network topology. Deep Networks are natural candidates for such modelling, considering the size of the French grid ($\sim $ 10000 nodes), but the representation of the topology is a challenge when, e.g., the RTE goal is to quickly ensure the "n-1" security constraint (the network should remain safe even if any one of the 10000 nodes fails). Existing simulators are too slow to be used in real time, and the size of actual grids makes it intractable to train surrogate models for all possible (n-1) topologies (see Section 8.4 for more details).
Furthermore, predictive models of local grids are based on the estimated consumption of end-customers: Linky meters only provide coarse-grained information due to privacy issues, and very few samples of fine-grained consumption are available (from volunteer customers). A first task is to transfer knowledge from this small data to the whole domain of application. A second task is to directly predict consumption peaks based on the user cluster profiles and their representativeness (see Section 8.4.2).
4.3 Datadriven Numerical Modeling
Participants: Michele Alessandro Bucci, Guillaume Charpiat, Cécile Germain, Isabelle Guyon, Flora Jay, Marc Schoenauer, Michèle Sebag
As noted (Section 3.2), in domains where both first-principle-based models and equations, and empirical or simulated data are available, their combined usage can support more accurate modelling and prediction, and, when appropriate, optimization, control and design. This section describes such applications, with the goal of improving the time-to-design chain through fast interactions between the simulation, optimization, control and design stages. The expected advances regard: i) the quality of the models or simulators (through data assimilation, e.g. coupling first principles and data, or repairing/extending closed-form models); ii) the exploitation of data derived from different distributions and/or related phenomena; and, most interestingly, iii) the task of optimal design and the assessment of the resulting designs.
The proposed approaches are based on generative and adversarial modelling 128, 116, extending both the generator and the discriminator modules to take advantage of the domain knowledge.
A first challenge regards the design of the model space, and the architecture used to enforce the known domain properties (symmetries, invariance operators, temporal structures). When appropriate, data from different distributions (e.g. simulated vs real-world data) will be reconciled, for instance taking inspiration from real-valued non-volume-preserving transformations 95 in order to preserve the natural interpretation.
Another challenge regards the validation of the models and solutions of the optimal design problems. The more flexible the models, the more intensive the validation must be, as Léon Bottou reminds us. Along this line, generative models will be used to support the design of "what if" scenarios, and to enhance anomaly detection and monitoring via refined likelihood criteria.
In the application case of dynamical systems such as fluid mechanics, the goal of incorporating machine learning into classical simulators is to speed up the simulations. Many tracks are possible: for instance, one can seek to provide better initialization heuristics to the solvers that enforce physical constraints at each time step (and are responsible for most of the computational cost of simulations); one can also aim at directly predicting the state at, say, $t+100$, or at learning a representation space where the dynamics are linear (Koopman-von Neumann). The topic is very active in the deep learning community. To guarantee the quality of the predictions, concepts such as Lyapunov coefficients (which express the speed at which simulated trajectories diverge from the true ones) can provide a suitable theoretical framework.
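The Lyapunov framework mentioned above can be illustrated on a textbook chaotic system; a minimal sketch, where the logistic map stands in for an actual fluid simulator (an assumption made only for illustration):

```python
import math

def logistic(x, r=4.0):
    """One step of the chaotic logistic map x -> r x (1 - x)."""
    return r * x * (1.0 - x)

def largest_lyapunov(x0, n_steps=10000, r=4.0):
    """Estimate the largest Lyapunov exponent by averaging log|f'(x)|
    along a trajectory; f'(x) = r (1 - 2x) for the logistic map."""
    x, acc = x0, 0.0
    for _ in range(n_steps):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = logistic(x, r)
    return acc / n_steps

# For r = 4 the exact value is ln 2 ~ 0.693: nearby trajectories
# diverge roughly by a factor e^0.693 = 2 per time step.
print(largest_lyapunov(0.3))
```

A positive exponent quantifies how fast a surrogate model's small prediction errors get amplified, which is why such coefficients provide a natural quality criterion for learned simulators.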
5 Social and environmental responsibility
5.1 Footprint of research activities
Due to the pandemic, the carbon footprint of our activities has decreased a lot, from our daily commute, which almost completely disappeared as we all switched to teleworking, to the transformation of all conferences and workshops into virtual events. We all miss the informal discussions that took place during coffee breaks in the lab as well as during conferences. But when the pandemic ends, after the first moments of joy when actually meeting our colleagues again in person, we will have to think of a new model for the way we work: we were indeed discussing, before the pandemic, how to reduce the carbon footprint of conferences, but now we know that solutions exist, even though imperfect ones.
5.2 Impact of research results
All our work on Energy (see Sections 4.2) is ultimately targeted toward optimizing the distribution of electricity, be it in planning the investments in the power network by more accurate previsions of user consumption, or helping the operators of RTE to maintain the French Grid in optimal conditions.
At the outbreak of the Covid pandemic in Europe, François Landes got involved in the ICUBAM project, which aimed at easing practitioners' job by providing them with the real-time availability of intensive care unit (ICU) beds in nearby hospitals. The data was fed by the doctors themselves, who could in return easily picture the ongoing (un)availability of beds in participating hospitals, thus facilitating the task of patient transfer 124.
6 Highlights of the year
6.1 Prestigious publications

Spotlight paper at ICLR 2022 (top 5% of submissions) 38
Herilalaina Rakotoarison, Louisot Milijaona, Andry Rasoanaivo, Michèle Sebag, Marc Schoenauer
Learning Metafeatures for AutoML.
International Conference on Learning Representations, 2022 (already visible on OpenReview).
6.2 Selective Fundings
TAU secured the following funded research projects (see Section 10 for more details):
 Bilateral collaboration with Fujitsu, "Causal inference in high dimension", Marc Schoenauer and Michèle Sebag coordinators.
 ANR project RoDAPoG, "Robust Deep learning for Artificial genomics and Population Genetics", Flora Jay, coordinator.
 ANR project SPEED, "Simulating Physical PDEs Efficiently with Deep Learning", Lionel Mathelin (LIMSI) coordinator.
 Inria Challenge OceanIA, "AI, Data, Models for a Blue Economy", Nayat Sanchez Pi (Inria Chile) coordinator.
7 New software and platforms
7.1 New software
7.1.1 Codalab

Keywords:
Benchmarking, Competition

Functional Description:
Challenges in machine learning and data science are competitions running over several weeks or months to resolve problems using provided datasets or simulated environments. Challenges can be thought of as crowdsourcing, benchmarking, and communication tools. They have been used for decades to test and compare competing solutions in machine learning in a fair and controlled way, to eliminate "inventor-evaluator" bias, and to stimulate the scientific community while promoting reproducible science. See our news: https://codalab.lisn.upsaclay.fr/highlights.
The new Codalab infrastructure deployed in 2021 includes vast amounts of storage over a distributed Minio cluster (4 physical servers, each with 12 disks of 16 TB) spread over 2 buildings for robustness, and 20 GPU workers in the backend, thanks to the sponsorship of région Ile-de-France, ANR, Université Paris-Saclay, CNRS, INRIA, and ChaLearn. It supports 50,000 users organizing or participating in hundreds of competitions each year.
Some of the areas in which Codalab is used include computer vision and medical image analysis, natural language processing, time series prediction, causality, and automatic machine learning. Codalab has been selected by the Région Ile-de-France to organize industry-scale challenges.
TAU continues expanding Codalab to accommodate new needs, including teaching. Check recent student projects: https://saclay.chalearn.org/

News of the Year:
L2RPN The Learning to Run a Power Network competition track, in collaboration with RTE France, continues. The ICAPS 2021 competition allowed us to go one step further towards making grid control with reinforcement learning more realistic, by allowing adversarial attacks. A new open-source framework, Grid2Op, was released.
AutoDL The Automated Deep Learning (AutoDL) challenge series evolved in the direction of meta-learning (https://metalearning.chalearn.org/). We organized a competition for NeurIPS 2021, sponsored by Google and Microsoft. The results, which will appear in PMLR, indicate that few-shot learning (5 shots, 5 classes) is now within reach of the state of the art for small-image object recognition, but heavily relies on backbone networks pretrained on large image datasets.
Industry challenges The first Ile-de-France industry challenge was organized on Codalab, in collaboration with Dassault Aviation, and the results were presented at ICMLA 2021. The goal was to predict sensor data indicating constraints on the fuselage. Surprisingly, conventional methods based on ensembles of decision trees dominated this task and outperformed deep learning methods.
World use of the platform In 2021, on average, 50 competitions per month were organized on Codalab by researchers from all over the world. Codalab is also used in education to organize code submission homework.
Codabench December 2021: Codabench (beta) is announced at NeurIPS 2021, see https://www.codabench.org/.
 URL:

Contact:
Isabelle Guyon
7.1.2 Cartolabe

Name:
Cartolabe

Keyword:
Information visualization

Functional Description:
The goal of Cartolabe is to build a visual map representing the scientific activity of an institution/university/domain from published articles and reports. Using the HAL database, Cartolabe provides the user with a map of the thematics, authors and articles. ML techniques are used for dimensionality reduction and for cluster and topic identification; visualisation techniques are used for a scalable 2D representation of the results.
Cartolabe has in particular been applied to the Grand Débat dataset (3M individual propositions from French citizens, see https://cartolabe.fr/map/debat). The results were used to test both the scaling capabilities of Cartolabe and its flexibility to non-scientific and non-English corpora. We also added sub-map capabilities, to display the result of a year/lab/word filtering as an online-generated heatmap showing only the filtered points, so as to facilitate exploration. Cartolabe has also been applied in 2020 to the COVID-19 Kaggle publication dataset (Cartolabe-COVID project) to explore these publications.
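The kind of pipeline described above can be sketched as follows (a toy illustration with made-up documents, not the actual Cartolabe code; scikit-learn components stand in for the production ones):

```python
# Sketch: vectorize documents, project to 2D, and cluster them, as a
# miniature of the "map of thematics" pipeline described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

docs = [
    "deep learning for image recognition",
    "convolutional networks and vision",
    "population genetics and DNA sequences",
    "genomic inference with neural networks",
    "power grid control with reinforcement learning",
    "electricity network optimization",
]

tfidf = TfidfVectorizer().fit_transform(docs)   # documents -> sparse vectors
# Dimensionality reduction to 2D for the visual map.
emb2d = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
# Topic/cluster identification.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(tfidf)

for doc, (x, y), c in zip(docs, emb2d, labels):
    print(f"cluster {c} ({x:+.2f}, {y:+.2f})  {doc}")
```

In the real system, far more scalable projection and clustering methods are needed for millions of documents; this sketch only shows how the stages fit together.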
 URL:
 Publication:

Contact:
Philippe Caillou

Participants:
Philippe Caillou, Jean-Daniel Fekete, Michèle Sebag, Anne-Catherine Letournel

Partners:
LRI  Laboratoire de Recherche en Informatique, CNRS
7.2 New platforms
Participants: Guillaume Charpiat, Isabelle Guyon, Flora Jay, Anne-Catherine Letournel, Adrien Pavao, Théophile Sanchez, Tran Tuan
 CODALAB: In 2021, Codalab's growth to more than 50 competitions per month has required us to upgrade the software infrastructure and add new servers and storage space. The new Codalab infrastructure is now stable. We have migrated the storage over a distributed Minio cluster (4 physical servers, each with 12 disks of 16 TB) spread over 2 buildings for robustness, and added 10 more GPUs to the 10 existing ones in the backend: a lot of horsepower to support industry-strength challenges. This was made possible by the sponsorship of région Ile-de-France, ANR, Université Paris-Saclay, CNRS, INRIA, and ChaLearn.
 In 2021, we also rolled out a new version of Codalab in Python 3, with upgraded libraries and better admin features.
 DNADNA: Deep Neural Architectures for DNA. We are releasing an open-source software platform dedicated to deep learning for population genetics 66.
8 New results
8.1 Toward Good AI
8.1.1 Causal Modeling
Participants: Philippe Caillou, Isabelle Guyon, Michèle Sebag
PhDs: Armand Lacombe
Postdoc: Ksenia Gasnikova, Saumya Jetley
Collaboration: Olivier Allais (INRAE); JeanPierre Nadal & Annick Vignes (CAMS, EHESS); David LopezPaz (Facebook).
The causal modelling activity continued in 2020 along two directions. The first one concerns the impact of nutrition on health. This study started in the context of the Initiative de Recherche Stratégique Nutriperso (2016-2018), headed by Louis-Georges Soler, INRAE, based on the wealth of data provided by the Kantar panel (170,000 products bought by 10,000 households over the year 2014). The challenges are manifold. Firstly, the number of potential causes is in the thousands, larger by an order of magnitude than in most causal modelling studies. Secondly, a "same" product (e.g. "pizza") can have vastly different impacts on health, depending on its composition and (hyper)processing. Lastly, the data is riddled with hidden confounders (e.g. there is no information about smoking or sport habits).
On the one hand, the famed Deconfounder approach 175, 91, 149, 125 has been investigated and extended to account for the known presence of hidden confounders, as follows. A probabilistic model of the nutritional products based on Latent Dirichlet Allocation has been built, the factors of which are used as substitute confounders (SCs) to block the effects of the confounders. On the other hand, the innovative notion of "micro-interventions" has been defined, operating on the basket of products associated with a household to, e.g., replace the products with organic ones, or increase the amount of alcohol ingested. The average treatment effect of the micro-interventions has been assessed conditionally to each SC, after correction for the biases related to the socio-economic description of the households 33.
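The substitute-confounder logic can be illustrated on synthetic data (a hypothetical toy example: all variables are simulated, and PCA stands in for the Latent Dirichlet Allocation factor model of the actual study):

```python
# Toy sketch: a hidden confounder u drives both the "basket" of products,
# the treatment, and the outcome. A factor learned from the basket serves
# as a substitute confounder, recovering the true treatment effect.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
u = rng.normal(size=n)                            # hidden confounder (e.g. lifestyle)
baskets = u[:, None] + rng.normal(size=(n, 10))   # many products, all driven by u
treat = 0.8 * u + rng.normal(size=n)              # treatment also driven by u
health = 2.0 * treat + 3.0 * u + rng.normal(size=n)  # true effect of treat = 2.0

# Naive regression of outcome on treatment is biased by the confounder.
naive = np.polyfit(treat, health, 1)[0]

# Substitute confounder: first principal component of the basket matrix.
x = baskets - baskets.mean(0)
sc = np.linalg.svd(x, full_matrices=False)[0][:, 0]   # noisy proxy for u
design = np.column_stack([treat, sc, np.ones(n)])
adjusted = np.linalg.lstsq(design, health, rcond=None)[0][0]

print(f"naive slope {naive:.2f}, adjusted slope {adjusted:.2f} (true 2.0)")
```

The adjusted estimate is not exact (the learned factor is only a proxy for the confounder, and real hidden confounders may not be recoverable from the observed products at all), which is precisely why the extension described above is needed.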
Finally, causality is also at the core of TAU's participation in the INRIA Challenge OceanIA, which started in 2021 47 and will analyze the ocean data fetched by the Tara expedition. Other motivating applications for causal modeling are described in Section 4.1.
8.1.2 Explainability
Participants: Isabelle Guyon, François Landes, Alessandro Leite, Marc Schoenauer, Michèle Sebag
PhD: Roman Bresson
Collaboration: MyDataModels; Thalès
Causal modeling is one particular way to tackle explainability, and TAU has been involved in other initiatives toward explainable AI systems. Following the LAP (Looking At People) challenges, Isabelle Guyon and co-organizers have edited a book 158 that presents a snapshot of explainable and interpretable models in the context of computer vision and machine learning. Along the same line, they propose an introduction and a complete survey of the state of the art of explainability and interpretability mechanisms in the context of first impressions analysis 103. Other directions in this line of research include explaining missing data, with applications in computer vision 106.
Another direction is investigated in Roman Bresson's PhD, co-supervised with Johanne Cohen (LISN-GALAC), Christophe Labreuche (Thalès) and Eyke Hüllermeier (U. Paderborn). The transcription of hierarchical Choquet integral (HCI) models into a neural architecture enforcing by design the HCI constraints of monotonicity and additivity has been proposed, supporting the end-to-end learning of the HCI with a known hierarchy 86. A patent (Bresson-Labreuche-Sebag-Cohen) has been filed by Thalès. The approach has been extended to also achieve the automatic identification of the hierarchy; the uniqueness of the structure under canonical assumptions is being established 29.
The team is also involved in the proposal for the IPL HyAIAI (Hybrid Approaches for Interpretable AI), coordinated by the LACODAM team (Rennes), dedicated to the design of hybrid approaches that combine state-of-the-art numerical models (e.g., deep neural networks) with explainable symbolic models, in order to integrate high-level (domain) constraints in ML models, to give model designers information on ill-performing parts of the model, and to provide understandable explanations of its results. An ongoing collaboration with the Multispeech team in Nancy is concerned with the use of background knowledge to improve the performance of foundational models in NLP 40.
A completely original approach to DNN explainability might arise from the study of structural glasses (Section 8.2.3), with a parallel to Graph Neural Networks (GNNs), which could become an excellent non-trivial example for developing explainability protocols.
Genetic Programming 76 is an Evolutionary Computing technique that evolves models as analytical expressions (Boolean formulae, functions, LISP-like code), which are hopefully easier to understand than black-box NNs with hundreds of thousands of weights. This idea has been picked up by the European FET project TRUST-AI (Transparent, Reliable and Unbiased Smart Tool for AI) that started in October 2020. Alessandro Leite joined the project (and the TAU team) in February 2021 on an ARP position. He supervised Mathurin Videau's Master thesis dealing with explainable reinforcement learning using GP 70. In the meantime, Marc Schoenauer is working with the startup company MyDataModels, whose lighthouse product is based on an original variant of Genetic Programming 28. Both approaches are promising, recently started or ongoing works. Another marginal work on the Evolutionary Computation side is the revival of the Evolving Objects platform 31.
8.1.3 Robustness of AI Systems
Participants: Guillaume Charpiat, Marc Schoenauer, Michèle Sebag
PhDs: Julien Girard, Roman Bresson
Collaboration: Zakaria Chihani (CEA); Johanne Cohen (LISNGALAC) and Christophe Labreuche (Thalès); Eyke Hullermeier (U. Paderborn, Germany).
As noted (Section 3.1.2), Tau is considering two directions of research related to the certification of ML systems.
The first direction considers the formal validation of neural networks. The topic of provable deep neural network robustness has raised considerable interest in recent years. Most research in the literature has focused on adversarial robustness, which studies the robustness of perceptive models in the neighbourhood of particular samples. However, other works have proved global properties of smaller neural networks. Yet, formally verifying perception remains uncharted. This is due notably to the lack of relevant properties to verify, as the distribution of possible inputs cannot be formally specified. With Julien Girard-Satabin's PhD thesis, which was defended this year, we had proposed to take advantage of the simulators often used either to train machine learning models or to check them with statistical tests, a growing trend in industry. Our formulation 113 allowed us to formally express and verify safety properties on perception units, covering all cases that could ever be generated by the simulator, unlike statistical tests, which cover only seen examples.
To go further and alleviate the computational complexity of formally validating a neural network (naive complexity: exponential in the number of neurons), we explore different strategies to apply solvers to subproblems that are much simpler. We rely on the fact that ReLU networks (the most common type of modern networks) are actually piecewise-linear, yielding extremely simple problems on each piece 41, 65. All results obtained are presented in detail in Julien's PhD 53.
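As a toy illustration of why piecewise linearity helps (this is not the method of the thesis): on each ReLU activation pattern the network is affine, so a global property splits into one simple linear subproblem per pattern. The sketch below, with made-up weights, uses a deliberately conservative check, bounding every affine piece over the whole input box:

```python
# Verify output >= 0 on the box [0,1]^2 for a tiny 2-2-1 ReLU network by
# enumerating the 2^2 activation patterns. On each pattern the network is
# affine, A x + c, and an affine function on a box attains its extrema at
# the box corners (a sound over-approximation of the per-pattern regions).
import itertools
import numpy as np

W1 = np.array([[1.0, 0.5], [0.5, 1.0]]); b1 = np.array([0.1, 0.0])
W2 = np.array([[1.0, 1.0]]);             b2 = np.array([0.2])

corners = np.array(list(itertools.product([0.0, 1.0], repeat=2)))

def verified_nonnegative():
    for pattern in itertools.product([0, 1], repeat=2):  # on/off per hidden neuron
        D = np.diag(pattern)               # fixes each ReLU to a linear map
        A = W2 @ D @ W1                    # affine piece: A x + c
        c = W2 @ D @ b1 + b2
        vals = corners @ A.T + c           # extrema of the piece on the box
        if vals.min() < 0:
            return False                   # some piece could go negative
    return True

print(verified_nonnegative())
```

Real verification tools restrict each check to the polytope where the pattern is actually active (e.g. via linear programming) to avoid the conservatism of this sketch, and prune patterns rather than enumerating all of them; only the number of subproblems, not their individual difficulty, grows with network size.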
The second direction, already mentioned in the section devoted to explainability, concerns the identifiability of the neural net implementing a hierarchical Choquet integral, in the large-sample limit.
Another direction, more remotely related to the robustness of AI systems, is concerned with privacy. Our primary motivation was to contribute to the understanding of the pandemic, with no prior collaboration with hospitals, and therefore no access to real data. An approach was developed to achieve privacy-preserving learning through a differential-privacy-compliant access to only the marginals of the data 42.
8.2 Learning to Learn
8.2.1 Auto*
Participants: Guillaume Charpiat, Isabelle Guyon, Marc Schoenauer, Michèle Sebag
PhDs: Léonard Blier, Guillaume Doquet, Zhengying Liu, Adrien Pavao, Herilalaina Rakotoarison, Haozhe Sun, Manon Verbockhaven, Romain Egele
Collaborations: Vincent Renault (SME Artelys); Yann Ollivier (Facebook); Wei-Wei Tu (4Paradigm, China); André Elisseeff (Google Zurich); Prasanna Balaprakash (Argonne National Labs), among others (for a full list see https://autodl.chalearn.org/ and https://metalearning.chalearn.org/)
Auto* studies at Tau investigate several research directions.
After proposing MOSAIC 157, which extends and adapts Monte-Carlo Tree Search to explore the structured space of preprocessing + learning algorithm configurations, and performs on par with AutoSklearn, the winner of the international AutoML competitions of the last few years, Herilalaina Rakotoarison explored in his PhD an original approach in cooperation with Gwendoline de Bie and Gabriel Peyré (ENS). The neural learning from distributions proposed by Gwendoline 83 has been extended to achieve equivariant learning. Formally, the proposed DIDA architecture (Distribution-based Invariant Deep Architecture) learns from sets of samples, regardless of the order of the samples and of their descriptive features. Two original tasks have been proposed to train a DIDA 62: detecting whether two sets of samples (with different descriptive features) are extracted from the same overall dataset; and ranking two hyperparameter configurations of a given classification algorithm w.r.t. their predictive accuracy on the sample set. On both tasks, DIDA significantly outperforms the state of the art. Most interestingly, the main limitation incurred on the latter task (which constitutes a proto-task of AutoML) is the lack of sufficient data; some augmentation process based on OpenML 173 was required to solve it.
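The key invariance property used above can be illustrated with a minimal DeepSets-style sketch (this is not the DIDA architecture itself; weights and sizes are made up): the output depends on the set of samples, not on their order.

```python
# A permutation-invariant set network: encode each sample independently,
# pool with a symmetric operation (mean), then decode the pooled vector.
import numpy as np

rng = np.random.default_rng(0)
W_phi = rng.normal(size=(3, 5))   # per-sample encoder (hypothetical sizes)
W_rho = rng.normal(size=(5, 1))   # set-level decoder

def set_network(samples):
    """samples: (n_samples, 3) array treated as an unordered set."""
    h = np.maximum(samples @ W_phi, 0.0)   # encode each sample independently
    pooled = h.mean(axis=0)                # symmetric pooling -> order invariance
    return (pooled @ W_rho).item()

x = rng.normal(size=(8, 3))
shuffled = x[rng.permutation(8)]
print(set_network(x), set_network(shuffled))   # same value for both orderings
```

DIDA additionally handles invariance to permutations of the descriptive features, which requires a more elaborate architecture than this sample-level pooling.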
Follow-up work within Héri's PhD is concerned with learning meta-features for tabular data, to address the lack of expressiveness of the standard hand-crafted ones. The idea is to use Optimal Transport to align the distribution of the datasets from the training meta-data with that of their best hyperparameter settings in the space of hyperparameter configurations. The results will be presented as a spotlight (top 5% of submissions) at ICLR 2022 38.
Héri also contributed to a large benchmarking effort together with Olivier Teytaud, a former member of the team now with Facebook AI Research 20.
In a second direction, with the internship and starting PhD thesis of Manon Verbockhaven, we adopt a functional analysis viewpoint in order to adapt on the fly the architecture of neural networks being trained. This makes it possible to start training neural networks with very few neurons and layers, and to add them where they are needed, instead of training huge architectures and then pruning them, a common practice in deep learning, for optimization reasons. For this, we quantify the lack of expressivity of a neural network being trained, by analyzing the difference between how backpropagation would like the activations to change and what the tangent space of the parameters offers as possible activation variations. We can then localize the lacks of expressivity, and add neurons accordingly. It turns out that the optimal weights of the added neurons can be computed in closed form.
A last direction of investigation concerns the design of challenges, which contribute to the collective advance of research in the Auto* direction. The team has been very active in the AutoML 169, 104 and AutoDL 19 challenge series, which have been extended to meta-learning, with support from Microsoft, Google, 4Paradigm and ChaLearn. An account of the AutoDL challenge series was published following the NeurIPS 2020 competition track 141. Post-challenge analyses were conducted on the Jean-Zay supercomputer and the results have been published in a TPAMI paper 19. The results of the first edition of the few-shot learning competition, accepted in conjunction with a workshop on meta-learning at the AAAI 2021 conference (with the sponsorship of Microsoft and Google, who provided cloud credits), were published in PMLR 17. A scaled-up version was then accepted in the competition program of NeurIPS 2021, and its analysis is under way. The current main takeaway is the importance of learning good feature representations. Self-supervised learning seems to be an avenue with a great future, allowing representations to be trained without costly human labeling. A new challenge, accepted as part of the WCCI 2022 competition program, is currently running. Another challenge on Neural Architecture Search (NAS) has been run together with a workshop at the CVPR 2021 conference. Preliminary results on NAS have been produced by one of our interns (Romain Egele 32). Further developments have led to effective algorithms to conduct NAS and hyperparameter selection simultaneously 64. More details on challenges are found in Section 8.6.
8.2.2 Deep Learning: Practical and Theoretical Insights
Participants: Guillaume Charpiat, Isabelle Guyon, Marc Schoenauer, Michèle Sebag
PhDs: Léonard Blier, Zhengying Liu, Adrien Pavao, Haozhe Sun, Romain Egele
Collaboration: Yann Ollivier (Facebook AI Research, Paris)
Although a comprehensive mathematical theory of deep learning is yet to come, theoretical insights from information theory or from dynamical systems can deliver principled improvements to deep learning and/or explain the empirical successes of some architectures compared to others.
During his CIFRE PhD with Facebook AI Research Paris, co-supervised by Yann Ollivier (former TAU member), Léonard Blier has properly formalized the concepts of successor states and multi-goal functions 58, in particular in the case of continuous state spaces. This allowed him to define unbiased algorithms with finite variance to learn such objects, covering the continuous case thanks to function approximation. In the case of finite environments, new convergence bounds have been obtained for the learning of the value function. These new algorithms capable of learning successor states in turn lead to defining and learning new representations for the state space.
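In the tabular case (a toy sketch, not the continuous-space algorithms of the thesis), the successor-state matrix has a closed form, (I - γP)⁻¹, against which a simple TD estimate can be checked:

```python
# Successor states on a 3-state Markov chain: M[s, s'] is the expected
# discounted number of visits to s' starting from s, learned by a TD rule
# and compared to the closed form (I - gamma P)^{-1}.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.5, 0.5],
              [0.3, 0.0, 0.7]])        # ergodic transition matrix
gamma, alpha = 0.9, 0.01
cum = P.cumsum(axis=1)                 # cumulative rows, for fast sampling
I = np.eye(3)
M = np.zeros((3, 3))                   # successor-state matrix estimate
s = 0
for _ in range(300_000):
    s_next = int((rng.random() > cum[s]).sum())        # sample next state
    M[s] += alpha * (I[s] + gamma * M[s_next] - M[s])  # TD update on row s
    s = s_next

exact = np.linalg.inv(np.eye(3) - gamma * P)
print(np.abs(M - exact).max())         # discrepancy between TD and closed form
```

The point of the thesis is precisely that in continuous state spaces no such matrix exists, so unbiased, finite-variance estimators based on function approximation are needed instead.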
The AutoDL challenges, co-organized in TAU (in particular by Isabelle Guyon, and by Zhengying Liu within his PhD), also contribute to a better understanding of Deep Learning. It is interesting to note that no Neural Architecture Search algorithm was proposed to solve the different AutoDL challenges (corresponding to different data types). See Section 8.6 for more details.
The meta-learning setting devised for the Auto* challenges was analyzed theoretically by Zhengying Liu 139. Assuming perfect knowledge of the meta-distribution (i.e., in the limit of a very large number of training tasks), the paper investigates under which conditions algorithm recommendation can benefit from meta-learning, and thus, in some sense, "defeat" the No-Free-Lunch theorem. Four meta-prediction strategies are analyzed: Random, Mean, Greedy and Optimal. Conditions of optimality are investigated and experiments conducted on artificial and real data. All results are detailed in Zhengying's PhD 54. Some of the directions outlined in this thesis have been pursued by our intern Hung Manh Nguyen, in his work on applying Reinforcement Learning to meta-learning from learning curves 34. He demonstrated that methods such as DDQN can learn policies that choose the algorithms best suited to a given task during training, without having to wait for DL methods to converge, a big time-saving achievement. Such methods outperform all baselines, including Bayesian Optimization (currently the state of the art).
Our new PhD student Haozhe Sun has begun working on the problem of modularity in Deep Learning. The current trend in Artificial Intelligence (AI) is to heavily rely on systems capable of learning from examples, such as Deep Learning (DL) models, a modern embodiment of artificial neural networks. While numerous applications have made it to market in recent years (including self-driving cars, automated assistants, booking services and chatbots, improvements in search engines, recommendations and advertising, and healthcare applications, to name a few), DL models are still notoriously hard to deploy in new applications. In particular, they require massive numbers of training examples, hours of GPU training, and highly qualified engineers to hand-tune their architectures. This thesis will contribute to reducing the barrier to entry in using DL models for new applications, a step towards "democratizing AI".
The angle taken will be to develop new Transfer Learning (TL) approaches, based on modular DL architectures. Transfer learning encompasses all techniques to speed up learning by capitalizing on exposure to previous similar tasks. For instance, using pretrained networks is a key TL tactic used by winners of the recent AutoDL challenge. The doctoral candidate will push forward the notion of reusability of pretrained networks, in whole or in part (modularity). Thus far, the student has developed a benchmarking environment called OmniPrint to generate problems in TL 50, which lends itself to exploring combinatorial optimization problems.
Our new PhD student Romain Egele, in collaboration with Argonne National Laboratory (USA), has been actively working on Neural Architecture Search (NAS). He developed a package called DeepHyper, allowing users to conduct NAS with genetic algorithms using TensorFlow or PyTorch, the principal Deep Learning frameworks 32. His contributions include Recurrent Neural Network Architecture Search for Geophysical Emulation and Scalable Reinforcement-Learning-Based Neural Architecture Search for Cancer Deep Learning Research.
8.2.3 Analyzing and Learning Complex Systems
Participants: Cyril Furtlehner, Aurélien Decelle, François Landes
PhDs: Giancarlo Fissore, Tony Bonnaire, Marion Ullmo
Collaboration: Jacopo Rocchi (LPTMS Paris Sud); the Simons team: Rahul Chako (postdoc), Andrea Liu (UPenn), David Reichman (Columbia), Giulio Biroli (ENS), Olivier Dauchot (ESPCI); Clément Vignac (EPFL); Yufei Han (Symantec); Nabila Aghanim.
Generative models constitute an important piece of unsupervised ML techniques which is still under rapid development. In this context, insights from statistical physics are relevant, in particular for energy-based models like restricted Boltzmann machines. The information content of a trained restricted Boltzmann machine (RBM) and its learning dynamics can be analyzed precisely with the help of ensemble averaging techniques 93, 94. More insight can be obtained by looking at data of low intrinsic dimension, where exact solutions of the RBM can be obtained 16 thanks to a convex relaxation, along with a Coulomb interpretation of the model, allowing us to detect important shortcomings of standard training procedures and their possible resolution in view of concrete applications. In particular, we have found a first-order transition mechanism that may plague the later stages of learning. To overcome this problem we have identified two possible solutions. One is based on a theoretical observation relating the learning process to a regularized linear regression, after considering a convex relaxation of the model 16. The other 30 is to take advantage of out-of-equilibrium phenomena occurring when training the RBM with Monte Carlo chains that do not converge toward the equilibrium distribution. In this setting, it is possible to set up a precise dynamical process that is learned and does not require very long equilibration times. When the RBM is trained in this way, using the same dynamics for generating new data as during learning avoids the problem raised by the first-order transition. From a practical point of view, we have proposed a monitoring procedure involving a set of metrics 30 to ensure correct and efficient learning. While the training of RBMs is known to be difficult, our recent findings should help us perform this task correctly.
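The out-of-equilibrium training regime can be illustrated with a minimal binary RBM updated by contrastive divergence with deliberately short Gibbs chains (a generic CD-k sketch on synthetic data, not the exact procedure of 30):

```python
import numpy as np

rng = np.random.default_rng(0)
nv, nh, k, lr = 6, 4, 5, 0.05           # visible/hidden units, chain length, learning rate
W = 0.01 * rng.standard_normal((nv, nh))
data = rng.integers(0, 2, size=(32, nv)).astype(float)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h(v):
    p = sigmoid(v @ W)
    return p, (rng.random(p.shape) < p).astype(float)

def sample_v(h):
    p = sigmoid(h @ W.T)
    return p, (rng.random(p.shape) < p).astype(float)

# One CD-k update: the Monte Carlo chain is deliberately short, i.e. far from
# equilibrium; generation must then use the same short dynamics.
ph0, h = sample_h(data)
v = data
for _ in range(k):
    _, v = sample_v(h)
    ph, h = sample_h(v)
W += lr * (data.T @ ph0 - v.T @ ph) / len(data)

assert W.shape == (nv, nh) and np.isfinite(W).all()
```

Bias terms and monitoring metrics are omitted here; the point is only the short, non-equilibrated negative chain.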
Besides this, a long-term project on traffic prediction based on different mean-field methods, sparse inverse covariances and belief propagation has been wrapped up in 18, with extensive experiments on real data.
As mentioned earlier, the use of ML to address fundamental physics problems is growing quickly, and two different directions have been pursued. On the one hand, the PhD theses of M. Ullmo and T. Bonnaire focus on the characterization of the cosmic web (the baryonic structure forming at large scales in our universe) in order to track the so-called missing baryons of the standard theory. M. Ullmo demonstrated the feasibility of using Generative Adversarial Networks (GANs) on the distribution of dark matter at cosmological scales (up to hundreds of Mpc), using data from both 2D and 3D simulations 26. In that setting, she also developed a novel approach, building an encoder capable of inferring the latent representation of the GAN for a given image and showing that many details are recovered. T. Bonnaire, on his side, worked on designing a new method to classify the structure of the cosmic web into clusters and filaments, directly from the positions of the dark matter galaxies. To do so, he developed a method based on a Gaussian mixture model with a prior forcing the centers to "live" on a tree graph: two centers sharing an edge of this graph are subject to an attractive interaction, forcing the algorithm to adapt the centers' positions taking into account both the density distribution and the shape of the prior 84, 59, 14. This method has been further developed, in particular to handle possible outliers, and cast into a general formalism 15.
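The tree-prior idea can be sketched as a regularized EM-like loop: at each iteration a minimum spanning tree is built over the centers, and each center is attracted both by its assigned points and by its tree neighbours (a simplified hard-assignment version on synthetic 2D data; the actual method 84, 59, 14 uses full GMM responsibilities):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "filament": noisy points along a curve.
t = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[t, np.sin(t)] + 0.05 * rng.standard_normal((300, 2))

K, lam = 15, 0.5                       # number of centers, prior strength
centers = X[rng.choice(len(X), K, replace=False)].copy()

def mst_edges(C):
    """Prim's algorithm on pairwise distances; returns the tree edges (i, j)."""
    d = np.linalg.norm(C[:, None] - C[None], axis=-1)
    in_tree, edges = {0}, []
    while len(in_tree) < len(C):
        i, j, _ = min(((i, j, d[i, j]) for i in in_tree for j in range(len(C))
                       if j not in in_tree), key=lambda e: e[2])
        in_tree.add(j)
        edges.append((i, j))
    return edges

for _ in range(10):
    # E-step (simplified): hard assignment to the nearest center.
    assign = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=-1), axis=1)
    # M-step with tree prior: each center averages its points AND its tree neighbours.
    nbr_sum, nbr_cnt = np.zeros_like(centers), np.zeros(K)
    for i, j in mst_edges(centers):
        nbr_sum[i] += centers[j]; nbr_sum[j] += centers[i]
        nbr_cnt[i] += 1; nbr_cnt[j] += 1
    for k in range(K):
        pts = X[assign == k]
        if len(pts):
            centers[k] = (pts.sum(0) + lam * nbr_sum[k]) / (len(pts) + lam * nbr_cnt[k])
```

The attractive term pulls the centers onto a connected, filament-like skeleton rather than letting them scatter into the noise.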
On the other hand, this rapid growth leads to methodological mistakes by newcomers, which have been investigated by Rémi Perrier (2-month internship). One example is the domain of glasses (how the structure of glasses relates to their dynamics), which is one of the major problems in modern theoretical physics 75. The idea is to let ML models automatically find the hidden structures (features) that control the flowing or non-flowing state of matter, discriminating liquid from solid states. These models can then help identify "computational order parameters" that would advance the understanding of physical phenomena 132, 14, on the one hand, and support the development of more complex models, on the other hand. More generally, attacking the problem of amorphous condensed matter with novel Graph Neural Network (GNN) architectures is a very promising lead, regardless of the precise quantity one may want to predict. Currently, GNNs are engineered to deal with molecular systems and/or crystals, but not with amorphous matter. This second axis is currently being attacked in collaboration with Clément Vignac (PhD student at EPFL), using GNNs, and more recently through a promising M2 internship (Francesco Pezzicoli). Furthermore, this problem is new to the ML community, and it provides an original, non-trivial example for engineering, testing and benchmarking explainability protocols.
Another direction of research related to learning to learn in complex systems has been investigated in collaboration with Omar Shrit (LISN, ROCS), in order to learn decentralized controllers for a swarm of quadcopters 39, 48. The principle consists in alternately generating data using the Gazebo simulator and labelling these data to learn a better controller via supervised learning; the approach is then iterated. The originality lies in using the strength of the communication signal to infer the distances among the quadcopters. The exploitation of these data via ML proves an efficient and robust way to handle the noise in the communication signal.
8.3 Computational Social Sciences
Computational Social Sciences (CSS) is making significant progress in the study of social and economic phenomena thanks to the combination of social science theories and new insights from data science. While the simultaneous advent of massive data and unprecedented computational power has opened exciting new avenues, it has also raised new questions and challenges.
Several studies are being conducted in TAU: about labor (labor markets, the labor of human annotators for AI data, quality of life and economic performance); about nutrition (health, food, and sociodemographic issues); around Cartolabe, a platform for visual querying of scientific information systems; and around GAMA, a multi-agent-based simulation platform.
8.3.1 Labor Studies
Participants: Philippe Caillou, Isabelle Guyon, Michèle Sebag, Paola Tubaro
PhDs: Guillaume Bied, Armand Lacombe, Elinor Wahal, Assia Wirth
PostDocs: Saumya Jetley
Engineers: Raphael Jaiswal, Victor Alfonso Naya
Collaboration: JeanPierre Nadal (EHESS); Marco Cuturi, Bruno Crépon (ENSAE); Antonio Casilli, Ulrich Laitenberger (Telecom Paris); Odile Chagny (IRES); Francesca Musiani, Mélanie Dulong de Rosnay (CNRS); José Luis Molina (Universitat Autònoma de Barcelona); Antonio Ortega (Universitat de València); Julian Posada (University of Toronto)
A first area of activity of TAU in Computational Social Sciences is the study of labor, from the functioning of the job market, to the rise of new, atypical forms of work in the networked society of internet platforms, and the quality of life at work.
Job markets Two projects deal with the domain of job markets and machine learning. The DataIA project Vadore, in collaboration with ENSAE and Pôle Emploi, has two goals. First, to improve the recommendation of jobs to applicants (and of applicants to job offers). The main originalities of this project are: i) to use both machine learning and optimal transport to improve the recommendation, by learning a matching function from past hirings and then applying an optimal-transport-like bias to tackle market congestion (e.g., to avoid assigning many applicants to the same job offer); ii) to use randomized tests on micro-markets (A/B testing), in collaboration with Pôle Emploi, to assess the global impact of the algorithms. First results on past data have been published about congestion-avoidance algorithms 43 and about the economic analysis of the recommendation results 46.
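The congestion-avoidance idea can be illustrated with a small entropic optimal transport (Sinkhorn) sketch: instead of recommending the top-scoring offer to everyone, a transport plan is computed whose marginals cap how much each offer can absorb (all scores and capacities here are synthetic; this is not the production algorithm of 43):

```python
import numpy as np

rng = np.random.default_rng(0)
n_seekers, n_jobs = 8, 5
score = rng.random((n_seekers, n_jobs))   # learned matching scores (higher = better fit)

# Naive top-1 recommendation may send many seekers to the same offer (congestion).
naive = score.argmax(axis=1)

# Entropic optimal transport spreads recommendations under capacity constraints.
a = np.full(n_seekers, 1.0 / n_seekers)   # each seeker carries equal mass
b = np.full(n_jobs, 1.0 / n_jobs)         # each offer can absorb an equal share
K = np.exp(score / 0.2)                   # Gibbs kernel for cost = -score, eps = 0.2
u = np.ones(n_seekers)
for _ in range(1000):                     # Sinkhorn fixed-point iterations
    v = b / (K.T @ u)
    u = a / (K @ v)
plan = u[:, None] * K * v[None, :]
balanced = plan.argmax(axis=1)            # congestion-aware recommendation

# The plan matches both marginals, so no single offer can be saturated by everyone.
assert np.allclose(plan.sum(axis=1), a, atol=1e-6)
assert np.allclose(plan.sum(axis=0), b, atol=1e-6)
```

The entropic regularization (here 0.2) trades off fidelity to the scores against how evenly candidates are spread over offers.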
The JobAgile project, a BPI-PIA contract in collaboration with EHESS, Dataiku and Qapa, deals with low-salary interim job recommendations. A main difference with the Vadore project lies in the high reactivity of the Qapa and Dataiku startups: i) to actually implement A/B testing; ii) to explore related functionalities, typically the recommendation of training courses; iii) to propose a visual querying of the job market, using the Cartolabe framework (below).
The human labor behind AI
We look at data "microworkers" who perform essential, yet marginalized and poorly paid tasks such as labeling objects in a photograph, translating or transcribing short texts, or recording utterances. Microworkers are recruited through specialist intermediaries across supply chains that span the globe and reproduce inherited North-South outsourcing relationships 23. Further observed inequalities are gender-based 22. Despite the opportunity to telework, the COVID-19 pandemic has adversely affected these workers, widening the gap that separates them from the formally employed 21. Current work extends this research to look at the demand for these non-standard forms of labor that emanates from companies, notably in France and Germany 56.
The possibility to use microwork for research purposes (for example, in online surveys and experiments) raises specific ethical issues 51 that add to the rising number of challenges in today's science 24 and requires adapted responses at all stages of research, from data collection to analysis and even dissemination of results 25.
8.3.2 Health, food, and sociodemographic relationships
Participants: Philippe Caillou, Michèle Sebag, Paola Tubaro
PhD: Armand Lacombe
Postdoc: Ksenia Gasnikova, Saumya Jetley
Collaboration: LouisGeorges Soler, Olivier Allais (INRA); JeanPierre Nadal, Annick Vignes (CAMS, EHESS)
Another area of activity concerns the relationships between eating practices, sociodemographic features and health, and their links with causal learning (see also Section 8.1.1), continued in 2020.
The study of the impact of nutrition on health started in the context of the Initiative de Recherche Stratégique Nutriperso (2016-2018), headed by Louis-Georges Soler, INRAE, based on the wealth of data provided by the Kantar panel (170,000 products bought by 10,000 households over the year 2014). The challenges are manifold. Firstly, the number of potential causes is in the thousands, larger by an order of magnitude than in most causal modelling studies. Secondly, a "same" product (e.g., "pizza") has vastly different impacts on health depending on its composition and (hyper)processing. Lastly, the data is riddled with hidden confounders (e.g., there is no information about smoking or sport habits).
On the one hand, the famed Deconfounder approach 175, 91, 149, 125 has been investigated and extended to account for the known presence of hidden confounders, as follows. A probabilistic model of the nutritional products based on Latent Dirichlet Allocation has been built, the factors of which are used as substitute confounders (SC) to block the effects of the confounders. On the other hand, the innovative notion of "micro-interventions" has been defined, operating on the basket of products associated with a household, e.g., to replace the products with organic products, or to increase the amount of alcohol ingested. The average treatment effect of the micro-interventions has been assessed conditionally to each SC, after correction for the biases related to the socio-economic description of the households. A submission is in preparation.
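The logic of assessing a micro-intervention's effect conditionally on a substitute confounder can be sketched on synthetic data, where stratifying on the SC recovers a true effect that the naive contrast overestimates (illustrative only; the real pipeline uses LDA factors as SCs and richer corrections):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
sc = rng.integers(0, 3, n)                          # substitute confounder (3 strata)
treated = rng.random(n) < 0.2 + 0.1 * sc            # intervention more likely in high strata
outcome = 0.5 * treated + 0.3 * sc + rng.standard_normal(n)   # true effect = 0.5

# Naive contrast is biased upward: sc drives both treatment and outcome.
naive = outcome[treated].mean() - outcome[~treated].mean()

# Stratify on the substitute confounder, then average with the strata weights.
ate_s = [outcome[(sc == s) & treated].mean() - outcome[(sc == s) & ~treated].mean()
         for s in range(3)]
weights = [(sc == s).mean() for s in range(3)]
adjusted = float(np.dot(ate_s, weights))

assert abs(adjusted - 0.5) < 0.2    # close to the true effect up to sampling noise
```

The same stratified average, taken over LDA factors instead of a synthetic `sc`, is the backbone of the Deconfounder-style adjustment described above.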
8.3.3 Scientific Information System and Visual Querying
Participants: Philippe Caillou, Michèle Sebag
Engineers: AnneCatherine Letournel, Victor Alfonso Naya
Collaboration: JeanDaniel Fekete (AVIZ, Inria Saclay)
A third area of activity concerns the 2D visualisation and querying of a corpus of documents. Its initial motivation was related to scientific organizations, institutes or universities, using their scientific production (sets of articles, authors, titles, abstracts) as corpus. The Cartolabe project (see also Section 7) started as an Inria ADT (coll. Tao and AVIZ, 2015-2017). It received a grant from CNRS (coll. Tau, AVIZ and HCC-LRI, 2018-2019).
The originality of the approach is to rely on the content of the documents (as opposed to, e.g., the graph of co-authoring and citations). This specificity allowed us to extend Cartolabe to various corpora, such as Wikipedia, the Bibliothèque Nationale de France, or the Software Heritage. Cartolabe was also applied in 2019 to the Grand Débat dataset, to support the interactive exploration of the 3 million propositions and to check the consistency of the official results of the Grand Débat with the data. It was further applied in 2020 to the COVID-19 Kaggle publication dataset (Cartolabe-COVID project) to explore these publications.
Among its intended functionalities are: the visual assessment of a domain and its structure (who is expert in a scientific domain, how related are the domains); the coverage of an institute's expertise relative to the general expertise; the evolution of domains along time (identification of rising topics). A round of interviews with beta-user scientists was performed in 2019-2020. Cartolabe usage raises questions at the crossroads of human-centered computing, data visualization and machine learning: i) how to deal with stressed items (items whose 2D projection poorly reflects their similarities in the high-dimensional document space); ii) how to customize the similarity and exploit the users' feedback about relevant neighborhoods. A statement of the current state of the project was published in 2021 12.
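The notion of a "stressed item" can be made concrete with a small sketch: project high-dimensional vectors to 2D (here by PCA via SVD, a linear stand-in for Cartolabe's actual projection) and compute, per item, how badly its pairwise distances are distorted by the map (synthetic vectors standing in for document embeddings):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))       # stand-in for TF-IDF-like document vectors

# PCA to 2D via SVD.
Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Y = Xc @ Vt[:2].T                        # 2D map coordinates

def pdist(A):
    return np.linalg.norm(A[:, None] - A[None], axis=-1)

# Per-item "stress": normalized squared distortion of this item's distances.
D_hi, D_lo = pdist(Xc), pdist(Y)
stress = ((D_hi - D_lo) ** 2).sum(axis=1) / (D_hi ** 2).sum(axis=1)
worst = int(np.argmax(stress))           # candidate item to flag in the interface
```

Flagging high-`stress` items in the interface is one possible answer to question i): it tells the user which neighborhoods on the map should not be trusted.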
8.3.4 Multi-agent-based simulation framework for social science
Participants: Philippe Caillou
Collaboration: Patrick Taillandier (INRA), Alexis Drogoul and Nicolas Marilleau (IRD), Arnaud Grignard (MediaLab, MIT), Benoit Gaudou (Université Toulouse 1)
Since 2008, P. Caillou has contributed to the development of the GAMA platform, a multi-agent-based simulation framework. Its evolution is driven by the research projects using it, which makes it very well suited for social science studies and simulations.
The focus of the development team in 2020 was on the stability of the platform and on the documentation, in order to provide a stable and well-documented framework to users.
8.4 Energy Management
8.4.1 Power Grids Management
Participants: Isabelle Guyon, Marc Schoenauer
PhDs: Balthazar Donon, Wenzhuo Liu
Collaboration: Rémi Clément, Patrick Panciatici (RTE)
Our collaboration with RTE, during Benjamin Donnot's (2016-2019) 96 and Balthazar Donon's CIFRE PhDs (to be defended in March 2022), is centered on the maintenance of the French national power grid. In order to maintain the so-called "(n-1) safety" (see Section 4.2), fast simulations of the electrical flows on the grid are mandatory, which the home-brewed simulator HADES is too slow to provide. The main difficulty in using Deep Neural Network surrogate models is that the topology of the grid (a graph) must be taken into account, and because all topologies cannot be included in the training set, this requires out-of-sample generalization capabilities from the learned models.
Balthazar Donon developed an approach based on Graph Neural Networks (GNNs). From a power grid perspective, GNNs can be viewed as embedding the topology at the heart of the structure of the neural network, learning a generic transfer function among nodes that will perform well on any topology. His work 97 uses a loss that directly aims to minimize the violation of Kirchhoff's laws on all lines. Theoretical results, as well as a generalization of the approach to other optimization problems on graphs, are at the heart of his PhD.
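The flavour of such a physics-informed loss can be sketched with a toy DC power flow: the loss is the sum of squared nodal power mismatches, and minimizing it over voltage angles (here by plain gradient descent, standing in for the GNN's output) solves the flow equations without any labelled solutions (4-bus example, synthetic values; not the AC formulation of 97):

```python
import numpy as np

# Toy 4-bus DC power flow. The "Kirchhoff loss" is the sum of squared nodal
# power mismatches, so it can be minimized without labelled flow solutions.
lines = [(0, 1), (1, 2), (2, 3), (0, 3)]       # grid graph edges
b_susc = np.array([1.0, 1.0, 1.0, 1.0])        # line susceptances
p_inj = np.array([1.0, -0.5, -0.3, -0.2])      # nodal injections (sum to zero)

def kirchhoff_loss(theta):
    mismatch = p_inj.copy()
    for (i, j), bij in zip(lines, b_susc):
        flow = bij * (theta[i] - theta[j])     # DC flow on line (i, j)
        mismatch[i] -= flow
        mismatch[j] += flow
    return float((mismatch ** 2).sum())

# Gradient descent on voltage angles, standing in for the learned model.
theta = np.zeros(4)
eye = np.eye(4)
for _ in range(500):
    grad = np.array([(kirchhoff_loss(theta + 1e-5 * eye[k])
                      - kirchhoff_loss(theta - 1e-5 * eye[k])) / 2e-5
                     for k in range(4)])
    theta -= 0.05 * grad

assert kirchhoff_loss(theta) < 1e-4            # flow equations (nearly) satisfied
```

Because the loss only references the graph and the physical laws, a GNN trained with it can in principle generalize to topologies never seen during training.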
8.4.2 Optimization of Local Grids and the Modeling of Worst-case Scenarios
Participants: Isabelle Guyon, Marc Schoenauer, Michèle Sebag
PhDs: Victor Berger, Herilalaina Rakotoarison
Postdoc: Berna Batu
Collaboration: Vincent Renaut (Artelys), Gabriel Peyré and Gwendoline de Bie (ENS).
One of the goals of the ADEME Next project, in collaboration with the SME Artelys (see also Section 4.2), is the sizing and capacity design of regional power grids. Though smaller than the national grid, regional and urban grids nevertheless raise scaling issues, in particular because much more fine-grained information must be taken into account for their design and growth prediction.
Regarding the design of such grids, and provided accurate predictions of consumption are available (see below), off-the-shelf graph optimization algorithms can be used. Berna Batu is gathering different approaches. Herilalaina Rakotoarison's PhD tackles the automatic tuning of their parameters (see Section 8.2.1); while the Mosaic algorithm is validated on standard AutoML benchmarks 157, its application to Knitro, Artelys' in-house large-scale optimizer, is ongoing, and is compared to the state of the art in parameter tuning (confidential deliverable). More details will come in Heri's PhD, to be defended in May 2022.
In order to obtain accurate consumption predictions, V. Berger's PhD tackles the identification of the peak of energy consumption, defined as the level of consumption that is reached during at least a given duration with a given probability, depending on consumers (profiles and contracts) and weather conditions. The peak identification problem is currently tackled using Monte Carlo simulations based on consumer profiles and weather-dependent individual models, at a high computational cost. The challenge is to exploit individual models to train a generative model aimed at sampling the collective consumption distribution in the quantiles with the highest peak consumption. The concept of a Compositional Variational Auto-Encoder was proposed: it is amenable to multi-ensemblist operations (addition or subtraction of elements in the composition), enabled by the invariance and generality of the whole framework w.r.t., respectively, the order and number of the elements. It was first tested on synthetic problems 81. The approach was then extended to study the trade-off between the optimization of the reconstruction loss and the latent compression of VAEs, both theoretically and numerically, and to fine-tune generative models 57. All these results are detailed in Victor's PhD 52, defended in November 2021.
8.5 Datadriven Numerical Modelling
8.5.1 Space Weather Forecasting
Participants: Cyril Furtlehner, Michèle Sebag
Postdoc: Olivier Bui
Collaboration: Jannis Teunissen (CWI)
Space Weather is broadly defined as the study of the relationships between the variable conditions on the Sun and the space environment surrounding Earth. Aside from its scientific interest from the point of view of fundamental space physics phenomena, Space Weather plays an increasingly important role in our technology-dependent society. In particular, it focuses on events that can affect the performance and reliability of space-borne and ground-based technological systems, such as satellites and electric networks that can be damaged by an enhanced flux of energetic particles interacting with electronic circuits.6
Since 2016, in the context of the Inria-CWI partnership, a collaboration between Tau and the Multiscale Dynamics Group of CWI aims at long-term Space Weather forecasting. The goal is to take advantage of the data produced every day by satellites surveying the Sun and the magnetosphere, and more particularly to relate solar images to the quantities (e.g., electron flux, proton flux, solar wind speed) measured at the L1 libration point between the Earth and the Sun (about 1,500,000 km, and 1 hour time, forward of Earth). A challenge is to formulate such goals as a supervised learning problem, since the "labels" associated with solar images are recorded at L1 (thus with a varying and unknown time lag). In essence, while typical ML models aim to answer the question What, our goal here is to answer both What and When. This project has been articulated around Mandar Chandorkar's PhD thesis 89, defended this year in Eindhoven. The continuation of this collaboration is ensured by the hiring of Olivier Bui as a postdoc, whose work has consisted in extending preliminary results on solar wind forecasting, based on auto-encoded solar magnetograms, to a longer data period corresponding to two solar cycles. Negative results have prompted us to dig deeper into physical models of solar wind propagation and to try to combine them with ML models in a systematic way.
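The "When" part of the problem can be illustrated by the classical cross-correlation scan for an unknown lag between a driver signal and its delayed response (synthetic signals with a fixed lag; the actual work must handle a varying lag, which this sketch does not):

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_lag = 600, 37
driver = rng.standard_normal(n)                     # stand-in for a solar driver signal
response = np.empty(n)                              # stand-in for the L1 measurement
response[:true_lag] = rng.standard_normal(true_lag)
response[true_lag:] = driver[:-true_lag] + 0.3 * rng.standard_normal(n - true_lag)

# Scan candidate lags and keep the one maximizing the correlation:
# a crude answer to "When" before learning "What".
lags = list(range(1, 100))
corrs = [np.corrcoef(driver[:-L], response[L:])[0, 1] for L in lags]
est_lag = lags[int(np.argmax(corrs))]

assert est_lag == true_lag
```

When the lag itself depends on the (unobserved) propagation speed, this scan breaks down, which is exactly what makes the joint What-and-When formulation non-trivial.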
8.5.2 Genomic Data and Population Genetics
Participants: Guillaume Charpiat, Flora Jay, Aurélien Decelle, Cyril Furtlehner
PhD: Théophile Sanchez, Jérémy Guez
PostDoc: Jean Cury, Burak Yelmen
Collaboration: Bioinfo Team (LRI), Estonian Biocentre (Institute of Genomics, Tartu, Estonia), UNAM (Mexico), U Brown (USA), U Cornell (USA), TIMCIMAG (Grenoble), MNHN (Paris), Pasteur Institute (Paris)
Thanks to the constant improvement of DNA sequencing technology, large quantities of genetic data should greatly enhance our knowledge about evolution, and in particular the past history of populations. This history can be reconstructed over the past thousands of years by inference from present-day individuals: by comparing their DNA, identifying shared genetic mutations or motifs, their frequency, and their correlations at different genomic scales. Still, the best way to extract information from large genomic data remains an open problem; currently, it mostly relies on drastic dimensionality reduction, considering a few well-studied population genetics features.
For the past decades, simulation-based likelihood-free inference methods have enabled researchers to address numerous population genetics problems. As the richness and amount of simulated and real genetic data keep increasing, the field has a strong opportunity to tackle tasks that current methods hardly solve. However, high data dimensionality forces most methods to summarize large genomic datasets into a relatively small number of hand-crafted features (summary statistics). In 162, we propose an alternative to summary statistics, based on the automatic extraction of relevant information using deep learning techniques. Specifically, we design artificial neural networks (ANNs) that take as input single nucleotide polymorphic sites (SNPs) found in individuals sampled from a single population, and infer the past effective population size history. First, we provide guidelines to construct artificial neural networks that comply with the intrinsic properties of SNP data, such as invariance to permutation of haplotypes, long-range interactions between SNPs, and variable genomic length. Thanks to a Bayesian hyperparameter optimization procedure, we evaluate the performance of multiple networks and compare them to well-established methods like Approximate Bayesian Computation (ABC). Even without the expert knowledge of summary statistics, our approach compares fairly well to an ABC based on hand-crafted features. Furthermore, we show that combining deep learning and ABC can improve performance while taking advantage of both frameworks. Later, we experimented with other types of permutation invariance, based on similar architectures, and achieved a significant performance gain with respect to the state of the art, including w.r.t. ABC on summary statistics (20% gap), which means that we extract information from raw data that is not present in the summary statistics. The question is now how to express this information in a human-friendly way.
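The key invariance to haplotype permutation can be sketched with a DeepSets-style forward pass: per-haplotype features computed with shared weights, followed by a symmetric (mean) pooling, so that reordering haplotypes leaves the prediction unchanged (random weights; an illustrative architecture, not the networks of 162):

```python
import numpy as np

rng = np.random.default_rng(0)
n_hap, n_snp, d = 20, 50, 8                    # haplotypes x SNP sites, feature size
snp = rng.integers(0, 2, size=(n_hap, n_snp)).astype(float)
W1 = rng.standard_normal((n_snp, d))
W2 = rng.standard_normal((d, 1))

def predict(x):
    """Shared per-haplotype embedding, then symmetric pooling over haplotypes."""
    h = np.tanh(x @ W1)       # same weights applied to every haplotype
    pooled = h.mean(axis=0)   # mean pooling is invariant to haplotype order
    return float((pooled @ W2)[0])

perm = rng.permutation(n_hap)
assert np.isclose(predict(snp), predict(snp[perm]))   # order does not matter
```

Building the invariance into the architecture, rather than hoping the network learns it, is what lets these models ingest raw SNP matrices of variable size.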
In the short term, these architectures can be used for demographic or selection inference in bacterial populations (ongoing work with a postdoctoral researcher, J. Cury; collab. Pasteur Institute, and for ancient DNA: UNAM and U Brown); the longer-term goal is to integrate them into various systems handling genetic data or other biological sequence data. Regarding bacterial populations, we already implemented a flexible simulator that allows researchers to investigate complex evolutionary scenarios (e.g., the dynamics of antibiotic resistance in 2D space through time) with realistic biological processes (bacterial recombination), which was impossible before (collab. U Cornell, MNHN) 13.
In collaboration with the Institute of Genomics of Tartu, we leveraged two types of generative neural networks (Generative Adversarial Networks and Restricted Boltzmann Machines) to learn the high-dimensional distributions of real genomic datasets and create artificial genomes 27. These artificial genomes retain important characteristics of the real genomes (genetic allele frequencies and linkage, hidden population structure, ...) without copying them, and have the potential to be valuable assets in future genetic studies by providing anonymous substitutes for private databases (such as those held by companies or public institutes like the Institute of Genomics of Tartu). Ongoing work concerns scaling up to the full genome and developing new privacy scores.
We released dnadna, a flexible open-source Python-based software for deep learning inference in population genetics7. It is task-agnostic and aims at facilitating the development, reproducibility, dissemination, and reusability of neural networks designed for genetic polymorphism data. dnadna defines multiple user-friendly workflows 66.
8.5.3 Privacy and synthetic data generation
Participants: Isabelle Guyon
PhD: Adrien Pavao
Collaboration: Kristin Bennett and Joe Pedersen (RPI, NY, USA), WeiWei Tu (4Paradigm, Chine), Pablo.Piantanida (CentraleSupelec)
Collecting and distributing actual medical data is costly and greatly restrained by laws protecting patients' health and privacy. While beneficial, these laws severely limit access to medical data, thus stalling innovation and limiting research and educational opportunities. The process of obfuscating medical data is costly and time-consuming, with high penalties for accidental release. Thus, we have engaged in developing and using realistic simulated medical data in research and in teaching. In 179 we develop metrics for measuring the quality of synthetic health data for both education and research. We use novel and existing metrics to capture a synthetic dataset's resemblance, privacy, utility and footprint. Using these metrics, we develop an end-to-end workflow based on our generative adversarial network (GAN) method, HealthGAN, that creates privacy-preserving synthetic health data. Our workflow meets the privacy specifications of our data partner: (1) the HealthGAN is trained inside a secure environment; (2) the HealthGAN model is used outside of the secure environment by external users to generate synthetic data. In 178 we put the HealthGAN methodology developed in the previous paper to work in a practical setting. We reproduce the research outcomes obtained in two previously published studies, which used private health data, using synthetic data generated with our HealthGAN method. We demonstrate the value of our methodology for generating and evaluating the quality and privacy of synthetic health data. The datasets are from the OptumLabs® Data Warehouse (OLDW). The OLDW is accessed within a secure environment and does not allow exporting patient-level data of any type, real or synthetic; therefore the HealthGAN exports a privacy-preserving generator model instead.
The studies examine questions related to comorbidities of Autism Spectrum Disorder (ASD), using medical records of children with ASD and matched patients without ASD. HealthGAN generates high-quality synthetic data that produce similar results while preserving patient privacy. In 92, we extend existing time-series generative models to generate medical data, which is challenging due to the influence of patient covariates. We propose a workflow wherein we leverage existing generative models to generate such data. We demonstrate this approach by generating synthetic versions of several time-series datasets where static covariates influence the temporal values.
While theoretical criteria of privacy preservation, such as "differential privacy", are important to gain insight into how to protect privacy, they are often impractical, because they put forward pessimistic bounds and impose degrading the data and/or the model to a point that hampers utility. Additionally, for all practical purposes, data owners seek guarantees that no private information is leaked in the form of an empirical statistical test, rather than a more elusive theoretical guarantee. To that end, we have set out to evaluate the effectiveness of privacy protection against specific attacks, such as membership inference or attribute inference. We devised an evaluation apparatus called "LTU-attacker" 37, in collaboration with Kristin Bennett, Joe Pedersen, and WeiWei Tu, and with 2 interns (Rafel MonosGomez and Jiangna Huang) have obtained interesting preliminary results demonstrating the lack of privacy preservation of most scikit-learn algorithms under membership inference attacks. New directions currently explored in collaboration with Pablo Piantanida include defining a degree of "privacy exposure" of a particular individual, involving information-theoretic arguments.
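The spirit of such empirical privacy tests can be sketched in the simplest membership inference setting: the attacker thresholds a per-record score (here the distance to the training set of a 1-NN "memorizer" model) and reaches near-perfect attack accuracy, revealing a privacy leak (toy data; this is not the LTU-attacker protocol itself):

```python
import numpy as np

rng = np.random.default_rng(0)
members = rng.standard_normal((50, 5))       # records used to "train" the model
nonmembers = rng.standard_normal((50, 5))    # fresh records from the same distribution

# An overfitted "model": a 1-NN memorizer of its training set.
def min_dist(x, ref):
    return np.linalg.norm(ref - x, axis=1).min()

# Attacker's score: distance to the training set (exactly 0 for members).
s_in = np.array([min_dist(x, members) for x in members])
s_out = np.array([min_dist(x, members) for x in nonmembers])

# Attack accuracy at a tiny threshold; well above 50% means a privacy leak.
tau = 1e-9
acc = 0.5 * ((s_in <= tau).mean() + (s_out > tau).mean())
assert acc > 0.9
```

An empirical test of this kind gives data owners a concrete, falsifiable statement ("this attack succeeds with accuracy X") rather than an abstract worst-case bound.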
With Master's student Alice Lacan, we have been investigating the modeling of the COVID-19 epidemic propagation using compartmental models, following earlier work by former Master's student Martin Cepeda. A group of students including Alice entered the "Pandemic Response" XPrize and qualified for the final phase. This work was followed by a paper on estimating uncertainty in time series, applied to predicting the evolution of the number of COVID cases, presented at the BayLearn 2022 conference 45. Alice was invited to present this work at the WIDS 2023 conference.
8.5.4 Sampling molecular conformations
Participants: Guillaume Charpiat
PhD: Loris Felardos
Collaboration: Jérôme Hénin (IBPC), Bruno Raffin (InriAlpes)
Numerical simulations on massively parallel architectures, routinely used to study the dynamics of biomolecules at the atomic scale, produce large amounts of data representing the time trajectories of molecular configurations, with the goal of exploring and sampling all possible configuration basins of given molecules. The configuration space is highdimensional (10,000+), hindering the use of standard data analytics approaches. The use of advanced data analytics to identify intrinsic configuration patterns could be transformative for the field.
The high-dimensional data produced by molecular simulations live on low-dimensional manifolds; the extraction of these manifolds will enable driving detailed large-scale simulations further into the configuration space. We study how to bypass simulations by directly predicting, given a molecule formula, its possible configurations. This is done using Graph Neural Networks 100 in a generative way, producing 3D configurations. The goal is to sample all possible configurations, and with the right probability. This year we studied various normalizing flow architectures, as well as varied training criteria suitable for distributions (Kullback-Leibler divergence in latent or sample space, in one direction or the other, as it is not symmetric, but also pairwise distances, optimal transport, etc.). It turns out that mode collapse is frequently observed in most cases, even on simple tasks. Further analysis identified several causes for this, from which we built remedies.
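One classical reason for such mode collapse is the asymmetry of the Kullback-Leibler divergence: the reverse KL (minimized when training on model samples) can prefer a model collapsed onto a single mode, while the forward KL penalizes it heavily. A small numerical sketch on a bimodal 1D target (illustrative densities on a grid, not molecular data):

```python
import numpy as np

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

def gauss(mu, sig=1.0):
    p = np.exp(-0.5 * ((x - mu) / sig) ** 2)
    return p / (p.sum() * dx)    # normalize on the grid

# Bimodal target vs two candidate models: one collapsed onto a single mode,
# one broad unimodal model covering both modes.
target = 0.5 * gauss(-4.0) + 0.5 * gauss(4.0)
single_mode = gauss(4.0)         # "mode collapse" candidate
broad = gauss(0.0, sig=4.0)      # "mass covering" candidate

def kl(p, q):
    mask = p > 1e-12
    return float((p[mask] * np.log(p[mask] / np.maximum(q[mask], 1e-300))).sum() * dx)

# Reverse KL (model || target) rewards the collapsed solution ...
assert kl(single_mode, target) < kl(broad, target)
# ... while forward KL (target || model) strongly penalizes it.
assert kl(target, broad) < kl(target, single_mode)
```

This asymmetry is why the direction of the KL used in training (latent vs. sample space, model-to-target vs. target-to-model) matters so much in practice.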
8.5.5 Earthquake occurrence prediction
Participants: François Landes, Marc Schoenauer
PhD: Vincenzo Schimmenti
Collaboration: Alberto Rosso (LPTMS)
Earthquakes occur in brittle regions of the Crust, typically located at a depth of 5-15 km and characterized by solid friction, which is at the origin of the stick-slip behaviour. Their magnitude distribution displays the celebrated Gutenberg-Richter law, and a significant increase of the seismic rate is observed after large events (called main shocks). The occurrence of the subsequent earthquakes in the same region, the aftershocks, obeys well-established empirical laws that demand to be understood. A change in the seismic rate also happens before a main shock, with an excess of small events compared to the expected rate of aftershocks related to the previous main shock in that region. These additional events are defined as foreshocks of the coming main shock; however, they are scarce, so that defining them is a very difficult task. For this reason their statistical fingerprint, so important for human security, remains elusive. In this project we combine techniques from Statistical Physics and Machine Learning to determine the complex spatio-temporal patterns of the events produced by the dynamics of the fault. In particular, we plan to understand the structure of the short sequences of foreshocks, and their potential impact for human applications.
The treatment of rare events by Machine Learning is a challenging yet rapidly evolving domain. At TAU we have strong expertise in data modeling; in particular we are currently working on space weather forecasting, a supervised task where, as in seismicity, extreme and rare events are crucial. Bayesian models and Restricted Boltzmann Machines (RBMs) have been built to model these weather forecast data. These techniques, inspired from statistical physics, are both based on a probabilistic description of latent (i.e., unobserved) variables and have great expressiveness, allowing the modelling of a large span of data correlations. Such models can be extended to study spatially resolved earthquakes, the latent variable here being the local stress within the fault and in the ductile regions. Our goal is to characterize the statistical properties of a sequence of events (foreshocks, main shock and aftershocks) and predict its subsequent history. We will first study the sequences obtained from simulations of the physical model 154, addressing the following questions: given a short sequence of foreshocks, can we predict the future of the sequence? How big will the main shock be? When will it occur? In a second step we will also use data coming from real sequences, where events are unlabeled. These sequences are public and available (the most accurate catalog, with 1.81 million earthquakes, covers Southern California and is available at https://scedc.caltech.edu/researchtools/QTMcatalog.html). Concretely, the data consist of the earthquakes' magnitudes, occurrence times and hypocenter locations.
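As a minimal illustration of the kind of latent-variable model involved, the sketch below trains a small binary Restricted Boltzmann Machine with one step of contrastive divergence (CD-1) on synthetic binary patterns. All sizes, data, and hyperparameters are illustrative choices of ours, not those of the actual seismic or space-weather models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h = 16, 8                        # visible / hidden units (illustrative sizes)
W = 0.01 * rng.standard_normal((n_v, n_h))
a, b = np.zeros(n_v), np.zeros(n_h)     # visible / hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: noisy copies of two binary prototypes (5% bit flips).
protos = np.array([[1] * 8 + [0] * 8, [0] * 8 + [1] * 8], dtype=float)
data = protos[rng.integers(0, 2, 200)]
data = np.abs(data - (rng.random(data.shape) < 0.05))

def recon_error(V):
    pv = sigmoid(sigmoid(V @ W + b) @ W.T + a)  # mean-field reconstruction
    return np.mean(np.abs(V - pv))

err_before = recon_error(data)
lr = 0.05
for epoch in range(100):
    for v0 in data:
        ph0 = sigmoid(v0 @ W + b)
        h0 = (rng.random(n_h) < ph0).astype(float)   # sample hidden state
        pv1 = sigmoid(h0 @ W.T + a)
        v1 = (rng.random(n_v) < pv1).astype(float)   # one Gibbs step back
        ph1 = sigmoid(v1 @ W + b)
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))  # CD-1 gradient estimate
        a += lr * (v0 - v1)
        b += lr * (ph0 - ph1)
err_after = recon_error(data)
print(err_before, err_after)
```

After training, the hidden units encode the two prototypes and the reconstruction error drops sharply; in the seismic setting, the hidden layer would instead play the role of the unobserved local stress.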
Two parallel directions are being explored, with our PhD Student, Vincenzo Schimmenti:
 The available data can be used to tune the parameters of the new model to improve its accuracy and generalization properties. We will adjust the parameters of the elastic and friction coefficients in order to produce earthquakes with realistic magnitudes. This will allow us to have information about the physical condition in the fault and in the ductile regions.
 We will use our understanding of foreshock statistics to classify earthquakes with respect to their nature (foreshock, main shock or aftershock) and their alignment (assignment of the earthquake to a sequence). These labels are known in the synthetic data and unknown in the catalogs, so this would be an instance of semi-supervised learning. Our final goal is real-data completion: presented with an incomplete catalog, the machine is asked to complete it with the missing points.
8.5.6 Reduced order model correction
Participants: Michele Alessandro Bucci, Marc Schoenauer
PhD: Emmanuel Menier
Collaboration: Mouadh Yagoubi (IRTSystemX)
Numerical simulations of fluid dynamics in industrial applications require the spatial discretization of complex 3D geometries, making the integration of the PDEs computationally demanding. The computational cost is mitigated by the formulation of Reduced Order Models (ROMs), which aim at describing the flow dynamics in a low-dimensional feature space. The Galerkin projection of the governing equations onto a meaningful orthonormal basis speeds up the numerical simulations but introduces numerical errors linked to the under-representation of dissipative mechanisms.
Deep Neural Networks can be trained to compensate for the information missing from the projection basis. Exploiting the projection operation, the ROM correction takes the form of a forcing term in the reduced dynamical system which has to (i) recover the information living in the subspace orthogonal to the projection one, and (ii) ensure that its dynamics is dissipative. A constrained optimization is then employed to minimize the ROM errors while ensuring the reconstruction and the dissipative nature of the forcing. We tested this solution on benchmark cases where transient dynamics are well known to be poorly represented by ROMs. The results 69 show how the correction term improves the prediction while preserving the guarantees of the ROM.
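The projection step at the heart of such ROMs can be sketched on a toy linear system (a hypothetical 1D diffusion problem of our choosing, unrelated to the benchmark cases of the paper): a POD basis is extracted from full-order snapshots by SVD, and the dynamics are Galerkin-projected onto it.

```python
import numpy as np

n, dt, n_steps = 50, 0.1, 200
# Discrete 1D Laplacian (Dirichlet BC) as a stand-in for the full-order operator.
A = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

# Full-order simulation (explicit Euler) from a hat-shaped initial condition.
u = np.zeros(n)
u[n // 4: 3 * n // 4] = 1.0
X = np.empty((n, n_steps))
for k in range(n_steps):
    X[:, k] = u
    u = u + dt * (A @ u)

# POD basis: leading left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(X, full_matrices=False)
r = 8
Ur = U[:, :r]

# Galerkin projection: reduced operator, then reduced simulation.
Ar = Ur.T @ A @ Ur
z = Ur.T @ X[:, 0]
Z = np.empty((r, n_steps))
for k in range(n_steps):
    Z[:, k] = z
    z = z + dt * (Ar @ z)

rel_err = np.linalg.norm(X - Ur @ Z) / np.linalg.norm(X)
print(rel_err)
```

For this linear, purely dissipative system the truncation error is tiny; for nonlinear flows the truncated modes carry dissipative interactions, which is precisely what motivates adding a learned forcing term to the reduced system.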
8.5.7 Active Learning for chaotic systems
Participants: Michele Alessandro Bucci
Collaboration: Lionel Mathelin (LISN), Onofrio Semeraro (LISN), Sergio Chibbaro (UPMC), Alexander Allauzen (ESPCI)
Inferring a data-driven model that reproduces a chaotic system is challenging even for the best-performing Neural Network architectures. According to ergodic theory, the amount of data required for the invariant measure of a chaotic system to converge grows exponentially with its intrinsic dimension. It follows that, for learning the dynamics of a turbulent flow, all the computing resources in the world would not be enough to store the necessary data. To circumvent such limitations, one generally introduces constraints in the optimization stage in order to preserve physical invariants, when they are known.
In 88 we compared the quality of models trained on ergodic versus non-ergodic time series generated by the Lorenz system (i.e., the chaotic system related to the "butterfly effect"). The ergodic dataset is composed of one long trajectory (27000 time steps), whereas the non-ergodic one is composed of 9 short trajectories (9000 time steps each) randomly initialized on the chaotic attractor. Despite containing the same number of points, the non-ergodic dataset turned out to lead to biased models: short trajectories do not ensure statistical coverage of the phase space. Exploiting the structure of the phase space, 9 trajectories (9000 time steps each) emanating from the 3 fixed points of the Lorenz system were used to generate a new dataset. The fixed points and their unstable directions define the skeleton of the phase space, and the trajectories emanating from them reduce the entropy of the dataset without introducing bias in the learned models. A dataset incorporating the dynamics around the fixed points not only yields models that are more robust with respect to the initialization of the NN parameters, but also allows the size of the dataset to be reduced by 60% without affecting the quality of the models. Recent work 60 analyzes the amount of data that is sufficient to guarantee a priori a faithful model of the physical system.
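The fixed points used to seed such trajectories are known in closed form for the Lorenz system. The sketch below computes them and integrates a short trajectory emanating from a small perturbation of one of them (an illustration with the standard chaotic parameters, not the actual dataset of the paper).

```python
import numpy as np

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0  # classical chaotic parameters

def lorenz(s):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# The three fixed points: the origin and the two centers of the attractor wings.
c = np.sqrt(beta * (rho - 1.0))
fixed_points = [np.array([0.0, 0.0, 0.0]),
                np.array([c, c, rho - 1.0]),
                np.array([-c, -c, rho - 1.0])]
residuals = [np.linalg.norm(lorenz(fp)) for fp in fixed_points]  # all ~0

# Short trajectory emanating from a perturbed fixed point (RK4 integration).
def rk4_step(s, dt):
    k1 = lorenz(s)
    k2 = lorenz(s + 0.5 * dt * k1)
    k3 = lorenz(s + 0.5 * dt * k2)
    k4 = lorenz(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

s = fixed_points[1] + 1e-3  # tiny perturbation along all coordinates
traj = np.empty((2000, 3))
for i in range(2000):
    traj[i] = s
    s = rk4_step(s, 0.01)

print(residuals, np.abs(traj).max())
```

Since the two non-trivial fixed points are unstable spirals at these parameters, the perturbed trajectory escapes along their unstable directions and settles onto the attractor, which is how such trajectories probe the skeleton of the phase space.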
8.5.8 Control of fluid dynamics with reinforcement learning
Participants: Michele Alessandro Bucci
Collaboration: Lionel Mathelin (LISN), Onofrio Semeraro (LISN), Thibaut Guegan (PPrime), Laurent Cordier (PPrime)
The control of fluid dynamics is an active research area given the implications of aerodynamic forces in the transport and energy fields. Being able to delay the laminar-to-turbulent transition, stabilize unsteady mechanisms, or reduce the pressure forces on an object moving in a fluid would allow for more ecological vehicles or more efficient wind turbines. For quadratic objective functions, and under conditions where the linearized Navier-Stokes equation is a good approximation of the fluid dynamics around the target state, optimal control theory provides the necessary tools (e.g., the Riccati equation, direct-adjoint optimization) to recover a robust control policy. For non-linearizable systems, non-quadratic cost functions, or in the absence of a model, these tools are no longer valid. Reinforcement learning algorithms allow us to solve the optimal control problem even when no model is available. The control problem, with an infinite time horizon, can be decomposed into local optimal problems if the system is completely observed and its dynamics is Markovian. The solution of the Bellman equation ensures the optimality of the policy if the phase space of the system has been fully explored 164.
We applied an actor-critic algorithm (TD3) to control a benchmark flow configuration: the Pinball case 118, 44. In the Pinball case, the flow impacting three cylinders arranged at the vertices of an equilateral triangle generates an unstable wake that causes high aerodynamic forces. When the cylinders are allowed to rotate, the RL algorithm provides a control policy capable of reducing the drag by 60% compared to the uncontrolled case. We have also shown that partial observation of the flow velocity field through sensors is not a limiting factor if a temporal state embedding is considered: by reducing the number of sensors and augmenting the state with past observations, the efficiency of the policy is not degraded.
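The temporal state embedding mentioned above amounts to concatenating the last few sensor readings into the agent's state vector. A minimal, framework-agnostic sketch (class and method names are ours, not taken from the actual implementation):

```python
from collections import deque

class TemporalStateEmbedding:
    """Concatenate the last `window` sensor observations into one state vector,
    zero-padding until enough observations have been collected."""

    def __init__(self, n_sensors, window):
        self.n_sensors = n_sensors
        self.window = window
        # Pre-fill with zero observations so the state size is constant.
        self.buffer = deque([[0.0] * n_sensors for _ in range(window)],
                            maxlen=window)

    def push(self, observation):
        assert len(observation) == self.n_sensors
        self.buffer.append(list(observation))

    def state(self):
        # Oldest observation first; flat vector of size n_sensors * window.
        return [v for obs in self.buffer for v in obs]

# Two sensors, window of three time steps.
emb = TemporalStateEmbedding(n_sensors=2, window=3)
emb.push([1.0, 2.0])
emb.push([3.0, 4.0])
print(emb.state())  # zero-padded head, then the two observations
```

The RL policy then receives this augmented vector instead of the instantaneous (partial) observation, which restores an approximately Markovian state from few sensors.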
8.6 Challenges
Participants: Cécile Germain, Isabelle Guyon, Adrien Pavao, Anne-Catherine Letournel, Marc Schoenauer, Michèle Sebag
PhD: Zhengying Liu, Balthazar Donon, Adrien Pavao, Haozhe Sun, Romain Egele
Engineer: Sébastien Tréguer.
Collaborations: D. Rousseau (LAL), André Elisseeff (Google Zurich), Jean-Roch Vlimant (CERN), Antoine Marot and Benjamin Donnot (RTE), Kristin Bennett (RPI), Magali Richard (Université de Grenoble), Wei-Wei Tu (4Paradigm, China), Sergio Escalera (U. Barcelona, Spain).
The Tau group uses challenges (scientific competitions) as a means of stimulating research in machine learning and engaging a diverse community of engineers, researchers, and students in learning and contributing to advancing the state of the art. The Tau group is community lead of the open-source Codalab platform (see Section 7), hosted by Université Paris-Saclay. The project grew in 2019 and now includes an engineer dedicated full-time to administering the platform and developing challenges (Adrien Pavao), financed by a new project just starting with the Région Ile-de-France. This project will also receive the support of Isabelle Guyon's Chaire Nationale d'Intelligence Artificielle for the next four years.
Our doctoral student Adrien Pavao has set to work on the theoretical rationalization of judging competitions. A first published work establishes ties between this problem and the theory of social choice 36. This applies, in particular, to judging multi-task or multi-objective challenges: each task or objective can be thought of as a "judge" voting towards determining a winner. He devised novel empirical criteria to assess the quality of ranking functions, including generalization to new tasks and stability under judge or candidate perturbation, and conducted empirical comparisons on 5 competitions and benchmarks. While prior theoretical analyses indicate that no single ranking function satisfies all desired theoretical properties, our empirical study reveals that the classical "average rank" method (often used in practice to judge competitions) fares well; however, some pairwise-comparison methods obtain better empirical results.
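The "average rank" aggregation is simple to state: each task ranks the candidates, and a candidate's final score is its mean rank across tasks. A minimal sketch with made-up scores (ties are ignored for simplicity; this is our illustration, not the code used in the cited study):

```python
def average_ranks(scores_per_task):
    """scores_per_task: {task: {candidate: score}}, higher scores are better.
    Returns {candidate: mean rank across tasks} (rank 1 = best)."""
    totals = {}
    n_tasks = len(scores_per_task)
    for scores in scores_per_task.values():
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, cand in enumerate(ordered, start=1):
            totals[cand] = totals.get(cand, 0) + rank
    return {cand: total / n_tasks for cand, total in totals.items()}

# Each task acts as a "judge": y wins on average despite not being best on task A.
scores = {"taskA": {"x": 0.90, "y": 0.80, "z": 0.70},
          "taskB": {"x": 0.50, "y": 0.90, "z": 0.60}}
avg = average_ranks(scores)
winner = min(avg, key=avg.get)
print(avg, winner)
```

Working on ranks rather than raw scores makes the aggregation insensitive to each task's score scale, which is one reason this simple scheme fares well empirically.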
Following the highly successful ChaLearn AutoML challenges (NIPS 2015 – ICML 2016 120 – PKDD 2018 121), a series of challenges on the theme of AutoDL 138 was run in 2019 (see http://autodl.chalearn.org), addressing the problem of tuning the hyperparameters of Deep Neural Networks, including the topology of the network itself. Co-sponsored by Google Zurich, it required participants to upload their code to the Codalab platform. The series included two challenges in computer vision, AutoCV and AutoCV2, to promote automatic machine learning for image and video processing, in collaboration with the University of Barcelona 140. It also included challenges in speech processing (AutoSpeech), text processing (AutoNLP), weakly supervised learning (AutoWeakly) and time series (AutoSeries), co-organized with 4Paradigm. It culminated with the launch of the AutoDL challenge combining multiple modalities (presently ongoing). The winners of each challenge open-sourced their code, and GPU cloud resources were donated by Google. AutoDL was an official NeurIPS 2020 competition. The challenge series is continuing beyond AutoDL, with the AutoGraph challenge organized for KDD 2020 (https://www.automl.ai/competitions/3) and the newly started Meta-Learning challenge series (https://metalearning.chalearn.org/), whose first edition took place in conjunction with AAAI 2021. A new challenge on automated reinforcement learning (AutoRL) is currently under design.
A new challenge series in Reinforcement Learning was started with the company RTE France, on the theme "Learning to Run a Power Network" 145 (L2RPN, http://l2rpn.chalearn.org). The goal is to test the potential of Reinforcement Learning to solve a real-world problem of great practical importance: controlling electricity transportation in smart grids while keeping people and equipment safe. The first edition was run in Spring 2019 and was part of the official selection of the IJCNN 2019 conference. It ran on the Codalab platform, coupled with the open-source PyPower simulator of power grids interfaced with OpenAI's Gym RL framework. In this gamified environment, the participants had to create a proper controller for a small grid of 14 nodes. Not all of them used RL, but some combinations of RL and human expertise proved competitive. In 2020, we launched a new edition of the challenge with a more powerful simulator, rendering the grid more realistic and capable of simulating a 118-node grid within our computational constraints. This competition was accepted as part of the official program of NeurIPS 2020 144. While the first competitions aimed at demonstrating the feasibility of applying Reinforcement Learning to controlling electrical flows on a power grid, the NeurIPS competition introduced a realistically-sized grid environment along with two fundamental real-life properties of power grid systems to reconsider while shifting towards a sustainable world: robustness and adaptability. The analysis paper is under review. Last but not least, within the European project TAILOR, the TAU team is responsible for organizing challenges, and a further edition of L2RPN dealing with changing topology is being co-organized with RTE and the TAILOR challenge task force.
The COMETH project (EIT Health) aims to run a series of challenges to promote and encourage innovations in data analysis and personalized medicine. Université de Grenoble organized a challenge on the newly developed Codabench platform (https://www.codabench.org/). The challenge gathered transdisciplinary instructors (researchers and professors), students, and health professionals (clinicians). The COMETH project aims at creating benchmarks giving practitioners access to advanced algorithms provided by machine learning researchers. We developed a WebApp that interfaces Codabench with a simplified interface designed for physicians, and which makes robot submissions to Codabench on their behalf. As a synergistic activity, Tau is also engaged in a collaboration with the Rensselaer Polytechnic Institute (RPI, New York, USA) to use challenges in the classroom, as part of their health-informatics curriculum.
We have also shared our expertise (and made our challenge platform Codalab available) to support two other NeurIPS challenges: the Black-Box Optimization for Machine Learning challenge (https://bbochallenge.com/), which was then used as part of an optimization class of the M2 AI of Université Paris-Saclay, and the Predicting Generalization in Deep Learning challenge (https://sites.google.com/view/pgdl2020). The latter case is remarkable: Google Research selected our platform Codalab to run their challenge, despite the fact that they had bought a competing commercial platform (Kaggle). Codalab was also chosen for the second phase of the TrackML challenge, in collaboration with LHC experiments. The goal was to build an algorithm that quickly reconstructs particle tracks from 3D points left in the silicon detectors 73. Recent work 55 indicates that the specific issue of extremely poorly separated classes should be addressed through a combination of dataset-level inference and iterative refinement of the particle selection.
The Paris Ile-de-France project also took off this year: Codalab and the Tau team were selected to organize the industry machine learning challenge series of the Paris Region. Adrien Pavao, the project leader, organized with Dassault Aviation a "digital twin" ("jumeau numérique") project aiming at performing predictive maintenance on airplanes. The Paris Region offered 500K Euros to the winner, a startup, which would then collaborate with Dassault to productize the solution. The challenge took place from February 2021 to May 2021. The results indicate that, on such problems of time-series regression, ensembles of decision trees such as XGBoost dominate DL methods. This result, which came somewhat as a surprise, may stem from the massive amount of data that had to be processed: despite the significant compute power made available (10 GPUs for 2 days), the search for optimal architectures was difficult. The results of detailed analyses conducted by a consortium of organizers and participants have been published 35. This challenge demonstrated that Codalab is now "industry grade", and paved the way to organizing other AI-for-Industry challenges. A challenge targeting carbon neutrality by 2025 is currently in preparation, in collaboration with RTE France.
It is important to introduce challenges in ML teaching. This has been done (and is ongoing) in I. Guyon's Master courses 151: some assignments for Master students are to design small challenges, which are then given to other students in labs, and both types of students seem to love it. Codalab has also been used by Victor Berger and Heri Rakotoarison to implement reinforcement learning homework in the form of challenges for the class of Michèle Sebag. New directions being explored by students in 2021 include tackling fairness and bias in data.
In terms of dissemination, a collaborative book "AI competitions and benchmarks: The science behind the contests", written by expert challenge organizers, is under way and will appear in the Springer series on challenges in machine learning; see http://www.chalearn.org/books.html.
9 Bilateral contracts and grants with industry
9.1 Bilateral contracts with industry
Tau continues its technology-transfer policy: we accept any informal meeting following industrial requests for discussion (and are happy to be often solicited), and decide on the follow-up based on the originality, feasibility, and possible impact of the foreseen research directions, provided they fit our general canvas. This led to the following three ongoing CIFRE PhDs (with the corresponding side contracts with the industrial supervisor), one bilateral contract with IFPEN, and one recently started bilateral contract with Fujitsu (within the national "accord-cadre" Inria/Fujitsu), plus at least two new CIFRE PhDs starting in 2022: one with our long-standing partner RTE, and one with the Ekimetrics company, with whom we have never worked before.

CIFRE Thalès 2018-2021 (45 kEuros), with Thales Teresis, related to Nizam Makdoud's CIFRE PhD
Coordinator: Marc Schoenauer and Jérôme Kodjabatchian
Participants: Nizam Makdoud

CIFRE RTE 2018-2021 (72 kEuros), with Réseau Transport d'Electricité, related to Balthazar Donon's CIFRE PhD
Coordinator: Isabelle Guyon and Antoine Marot (RTE)
Participants: Balthazar Donon, Marc Schoenauer

CIFRE FAIR 2018-2021 (72 kEuros), with Facebook AI Research, related to Leonard Blier's CIFRE PhD
Coordinator: Marc Schoenauer and Yann Ollivier (Facebook)
Participants: Guillaume Charpiat, Michèle Sebag, Léonard Blier

IFPEN (Institut Français du Pétrole Energies Nouvelles) 2019-2023 (300 kEuros), to hire an Inria Starting Research Position (Alessandro Bucci) to work on all topics mentioned in Section 3.2 relevant to IFPEN activity.
Coordinator: Marc Schoenauer
Participants: Alessandro Bucci, Guillaume Charpiat

Fujitsu, 2021-2022 (200k€), Causal discovery in high dimensions
Coordinator: Marc Schoenauer
Participants: Shuyu Dong and Michèle Sebag
10 Partnerships and cooperations
10.1 European initiatives
10.1.1 FP7 and H2020 projects

H2020 RIA TRUST-AI 2020-2024 (475k€), dedicated to building trustworthy explainable AI using Human-centered Genetic Programming.
Coordinator: Gonçalo Figueira (INESC, Portugal)
Participants: Marc Schoenauer and Alessandro Leite.

H2020 ICT-48 European network of AI excellence centres TAILOR, 2020-2024 (400 k€).
Coordinator: Fredrik Heintz, Linköping U., Sweden.
Participants: Marc Schoenauer (WP2 leader), Isabelle Guyon, and Sébastien Treguer.
Other Inria teams: Lacodam, Multispeech and exOrpailleur.

H2020 ICT-48 CSA VISION,
Coordinator Holger Hoos (Leiden U. The Netherlands)
Participants: Marc Schoenauer (Inria PI: Joost Geurst, DPE).
10.2 National initiatives
10.2.1 ANR

Chaire IA HUMANIA 2020-2024 (600 kEuros), Democratizing Artificial Intelligence (Section 8.1).
Coordinator: Isabelle Guyon (TAU)
Participants: Marc Schoenauer, Michèle Sebag, AnneCatherine Letournel, François Landes.

HUSH 2020-2023 (348k euros), HUman Supply cHain behind smart technologies.
Coordinator : Antonio A. Casilli (Telecom Paris)
Participants: Paola Tubaro

SPEED 2021-2024 (49k€), Simulating Physical PDEs Efficiently with Deep Learning
Coordinator: Lionel Mathelin (LIMSI)
Participants: Michele Alessandro Bucci, Guillaume Charpiat, Marc Schoenauer.

RoDAPoG 2021-2025 (302k€), Robust Deep learning for Artificial genomics and Population Genetics
Coordinator: Flora Jay
Participants: Cyril Furtlehner, Guillaume Charpiat.
10.2.2 Others

ADEME NEXT 2017-2021 (675 kEuros). Simulation, calibration, and optimization of regional or urban power grids (Section 4.2).
ADEME (Agence de l'Environnement et de la Maîtrise de l'Energie)
Coordinator: SME ARTELYS
Participants: Isabelle Guyon, Marc Schoenauer, Michèle Sebag, Victor Berger (PhD), Herilalaina Rakotoarison (PhD), Berna Bakir Batu (Postdoc)

PIA JobAgile 2018-2021 (379 kEuros), Evidence-based Recommandation pour l'Emploi et la Formation (Section 8.3.1).
Coordinator: Michèle Sebag and Stéphanie Delestre (Qapa)
Participants: Philippe Caillou, Isabelle Guyon

BOBCAT The new BtOB work intermediaries: comparing business models in the "CollaborATive" digital economy, 2018-2021 (100k euros), funded by DARES (French Ministry of Labor).
Coordinator : Odile Chagny (IRES)
Participants: Paola Tubaro

IPL HPC-BigData 2018-2022 (100 kEuros), High Performance Computing and Big Data (Section 8.5.4)
Coordinator: Bruno Raffin (Inria Grenoble)
Participants: Guillaume Charpiat, Loris Felardos (PhD)

Inria Challenge (formerly IPL) HYAIAI, 2019-2023, HYbrid Approaches for Interpretable Artificial Intelligence
Coordinator: Elisa Fromont (Lacodam, Inria Rennes)
Participants: Marc Schoenauer and Michèle Sebag

TRIA Le TRavail de l'Intelligence Artificielle : éthique et gouvernance de l'automation, 2020-2021 (131k euros), funded by MITI-CNRS (CNRS mission for interdisciplinary and transverse initiatives).
Coordinator : Paola Tubaro
Participants: A.A. Casilli (Telecom Paris); I. Vasilescu, L. Lamel, Gilles Adda (CNRSLimsi); N. Seghouani (LRI); T. Allard, David GrossAmblard (Irisa); J.L. Molina (UAB Barcelona); J.A. Ortega (Univ. València); J. Posada (Univ. Toronto)

Les vraies voix de l'Intelligence Artificielle, 2021-2023 (29k euros), funded by Maison des Sciences de l'Homme Paris-Saclay.
Coordinator : Paola Tubaro
Participants: A.A. Casilli (Telecom Paris); I. Vasilescu, L. Lamel, Gilles Adda (CNRSLISN); J.L. Molina (UAB Barcelona); J.A. Ortega (Univ. València)

Inria Challenge OceanAI 2021-2025, AI, Data, Models for a Blue Economy
Coordinator: Nayat Sanchez Pi (Inria Chile)
Participants: Marc Schoenauer, Michèle Sebag and Shiyang Yan
11 Dissemination
11.1 Promoting scientific activities
11.1.1 Scientific events: organisation
Member of the organizing committees
 Marc Schoenauer  Steering Committee, Parallel Problem Solving from Nature (PPSN); Steering Committee, Learning and Intelligent OptimizatioN (LION).
 Cécile Germain  Steering committee of the Learning to Discover program of Institut Pascal (originally 2020, postponed to 2022)
 Flora Jay  Organizer of Thematic School “Graph as models in life sciences: Machine learning and integrative approaches” (supported by Digicosme)
11.1.2 Scientific events: selection
Chair of conference program committees
 Flora Jay, co-chair at ProbGen, conference on Probabilistic Modeling in Genomics, Apr 2021
 Marc Schoenauer, Area Chair, ECML/PKDD 2021
 Michele Sebag, Senior Area Chair IJCAI 2021, Area Chair NeurIPS 2021, Area Chair ICML 2021
Reviewer
All TAU members are reviewers of the main conferences in their respective fields of expertise.
11.1.3 Journal
Member of the editorial boards
 Isabelle Guyon  Action editor, Journal of Machine Learning Research (JMLR); series editor, Springer series Challenges in Machine Learning (CiML).
 Marc Schoenauer  Advisory Board, Evolutionary Computation Journal, MIT Press, and Genetic Programming and Evolvable Machines, Springer Verlag; Action editor, Journal of Machine Learning Research (JMLR); Editorial Board, ACM Transactions on Evolutionary Learning and Optimization (TELO).
 Michèle Sebag  Editorial Board, Machine Learning, Springer Verlag; ACM Transactions on Evolutionary Learning and Optimization.
 Paola Tubaro: Sociology, Revue française de sociologie, Journal of Economic Methodology, Lecturas de Economia.
Reviewer  reviewing activities
All members of the team reviewed numerous articles for the most prestigious journals in their respective fields of expertise.
11.1.4 Invited talks
 Guillaume Charpiat, Input similarity from the neural network perspective, IHES annual workshop "Journée statistique et informatique de ParisSaclay", 5 February 2021
 Guillaume Charpiat, Réseaux de neurones profonds pour la segmentation et le recalage d'images satellitaires, au séminaire "L'intelligence artificielle en cartographie", Maison des Sciences de l'Homme Val de Loire, projet Veccar, 8 April 2021
 Flora Jay and Aurélien Decelle, Creating artificial human genomes using generative neural networks, Synthetic Data for Health Symposium (CIFAR, Ivado, MILA), Canada/online, 25 Nov 2021
 Flora Jay, Reconstructing past demography and augmenting the diversity of publicly available genomes with exchangeable and generative neural networks, GDR BIM, Lyon, 24 Nov 2021
 Flora Jay, Factor analysis of ancient population genomic samples, Ancient DNA symposium Institut Pasteur, Paris, 4 Nov 2021
 Flora Jay, Generative and exchangeable neural networks for population genetics 14th NICEseq Seminar  AI & Genomics 17 Sept 2021
 Flora Jay, symposium Machine-learning applications in population genetics and phylogenomics, SMBE congress, 4-8 July 2021
 Flora Jay, minisymposium AI and data science for biology. iBio Initiative and SCAI Institute, Paris 23 June 2021
 Flora Jay, Neural networks for population genetics: demographic inference and data generation, ProbGen conference, 14-16 April 2021
 Flora Jay, Seminars at UHPalaeopopgen webinar series (27 Jan 21); Technical University Munich, Germany (4 Feb 21); Imperial College London (11 Feb 21)
 Marc Schoenauer, Communication about AI: Distinguish real dangers from Irrational fears, Science&You, Metz 16 Nov. 2021
 Marc Schoenauer, Explainable Reinforcement Learning with MultiObjective Genetic Programming in the TRUSTAI project, DATAIA wkp "Safety and AI", 13 Dec. 2021
 Michele Sebag, Analyser, comprendre le monde: Complémentarité entre apprentissage et visualisation, with JeanDaniel Fekete, AFIAIHM, 11 Mars 2021
 Michele Sebag, Towards causal modeling of nutritional outcomes, Univ. Ulster, June 28, 2021
 Michele Sebag, Causal Modeling & Some Applications, kickoff meeting of the Oceania Challenge, July 1st, 2021
 Michele Sebag, Synthèse et position, Colloque interdisciplinaire Qu’estce qui échappe à l’IA?, LINX, 21 septembre 2021
 Michele Sebag, Extremely private supervised learning, ERCIM-JST, 8 December 2021
 Paola Tubaro, Networks in the digital organization, keynote, European Social networks Conference (EUSN 2021), Naples, 9 September 2021
 Paola Tubaro, Ethical issues of AI, inaugural workshop of the SeCoIA Deal European project, 9 December 2021
 Paola Tubaro, Learners in the loop: The hidden human contribution to artificial intelligence, Resituating Learning Conference, University of Siegen, 29 October 2021
 Paola Tubaro, La visualisation du réseau personnel, Catholic University of Louvain, 25 June 2021
 Paola Tubaro, El trabajo de la inteligencia artificial, Universitat Autònoma de Barcelona, 30 April 2021
 Paola Tubaro, Algorithmes, inégalités, et les « humains dans la boucle », Académie des technologies, 10 March 2021
11.1.5 Leadership within the scientific community
 Guillaume Charpiat: creation and co-animation of 2 DigiCosme working groups on the Saclay plateau and beyond: vrAI (verification and robustness of AI) and SNAP ("simulations numériques et apprentissage", numerical simulations and learning)
 Isabelle Guyon: Member of the board, NeurIPS; Member of the Board, JEDI, Joint European Disruptive Initiative; President and co-founder, ChaLearn, a non-profit organization dedicated to the organization of challenges.
 Marc Schoenauer: Advisory Board, ACM SIGEVO, Special Interest Group on Evolutionary Computation; Founding President (since 2015), SPECIES, Society for the Promotion of Evolutionary Computation In Europe and Surroundings, which organizes the yearly series of EvoStar conferences.
 Michèle Sebag: Executive Committee, Institut de Convergence DataIA; Member of IRSN Scientific Council.
11.1.6 Scientific expertise
 Guillaume Charpiat: CRCN/IFSP hiring committee at INRIA Saclay
 Guillaume Charpiat: MdC hiring committee at LISN, ParisSaclay (MCF 1632)
 Guillaume Charpiat: member of the Commission Scientifique (CS) at INRIA Saclay (PhD/postdocs grant allocations)
 Guillaume Charpiat: Jean Zay (GENCI/IDRIS) committee member for resource allocation (GPU) demand expertise
 Flora Jay, CR hiring committee, INRAE Toulouse
 Flora Jay, MdC hiring committee, LIX
 Marc Schoenauer, Scientific Advisory Board, BCAM, Bilbao, Spain
 Marc Schoenauer, "Conseil Scientifique", IFPEN
 Marc Schoenauer, "Conseil Scientifique", Mines ParisTech
 Marc Schoenauer, "Commission Recherche", Université ParisDiderot
 Michele Sebag, UDOPIA jury (PhDs)
 Michele Sebag, FNRS (PhDs and Postdocs)
 Michele Sebag, professorship hiring committee, Grenoble Alpes
 Michele Sebag, HCERES evaluation, LS2N (Laboratoire des sciences du numérique à Nantes), May 2021
 Paola Tubaro, MdC hiring committee, University of Lille
 Paola Tubaro, professorship hiring committee, Sorbonne Université
 Paola Tubaro, associate professorship hiring committee, University of Greenwich (UK)
 Paola Tubaro, assistant professorship hiring committee, University of Insubria (IT)
11.1.7 Research administration
 Guillaume Charpiat: head of the Data Science department at LISN, Université Paris-Saclay
 Michele Sebag, elected member of the Laboratory Council, LISN, Université Paris-Saclay
 Paola Tubaro, member of the Local Committee of Institut Pascal, Université Paris-Saclay
11.2 Teaching - Supervision - Juries
11.2.1 Teaching
 Licence: Philippe Caillou, Computer Science for students in Accounting and Management, 192h, L1, IUT Sceaux, Univ. Paris-Sud.
 Licence: François Landes, Mathematics for Computer Scientists, 51h, L2, Univ. Paris-Sud.
 Licence: François Landes, Introduction to Statistical Learning, 88h, L2, Univ. Paris-Sud.
 Licence: Isabelle Guyon, Introduction to Data Science, L1, Univ. Paris-Sud.
 Licence: Isabelle Guyon, Project: Resolution of mini-challenges (created by M2 students), L2, Univ. Paris-Sud.
 Master: François Landes, Machine Learning, 34h, M1 Polytech, Univ. Paris-Sud.
 Master: François Landes, A first look inside the ML black box, 25h, M1 Recherche (AI track), Univ. Paris-Sud.
 Master: Machine Learning, 28h, M2, Univ. Paris-Sud, physics department.
 Master: Guillaume Charpiat, Deep Learning in Practice, 21h, M2 Recherche, CentraleSupélec + MVA.
 Master: Guillaume Charpiat, Graphical Models: Discrete Inference and Learning, 9h, M2 Recherche, CentraleSupélec + MVA.
 Master: Guillaume Charpiat, Information Theory, 14h, M1 IA, Paris-Sud.
 Diplôme universitaire: Guillaume Charpiat, Introduction to Deep Learning, 1h30, DU IA, CHU Lille.
 Master: Isabelle Guyon, Project: Creation of mini-challenges, M2, Univ. Paris-Sud.
 Master: Michèle Sebag, Deep Learning, 4h; Reinforcement Learning, 12h; M2 Recherche, Univ. Paris-Sud.
 Master: Paola Tubaro, Sociology of social networks, 24h, M2, EHESS/ENS.
 Master: Paola Tubaro, Social and economic network science, 24h, M2, ENSAE.
 Master: Paola Tubaro, Ethics of social and digital data, 12h, Université de Toulouse Jean Jaurès.
 Master: Flora Jay, Population genetics inference, 11h, M2, Univ. Paris-Saclay.
 Master: Flora Jay, Machine Learning in Genomics, 6h, M2, PSL.
 Master: Isabelle Guyon, coordination of the M1 and M2 [AI] programs, Univ. Paris-Saclay.
 Master: Isabelle Guyon, M1 [AI] project A class (challenge organization).
 Master: Isabelle Guyon, M2 [AI] Advanced Optimization and Automated Machine Learning.
 Inria-DFKI summer school on AI: Guillaume Charpiat, Formal verification of deep learning: theory and practice, July 23rd.
 Inria-DFKI summer school on AI: Michele Sebag, Causal Learning, July 2021 (3h).
 Fall school: Flora Jay, Inference using full genome data, 7h, TUM, Germany.
11.2.2 Supervision
 PhD - Victor BERGER, Variational Anytime Simulator, 13/10/2021, Michèle Sebag
 PhD - Tony BONNAIRE, Reconstruction de la toile cosmique, 16/10/2021, Nabila Aghanim (Institut d'Astrophysique Spatiale) and Aurélien Decelle (IAS thesis, see [85])
 PhD - Julien GIRARD, Vérification et validation des techniques d'apprentissage automatique, 9/11/2021, Zakaria Chihani (CEA) and Guillaume Charpiat
 PhD - Zhengying LIU, Automation du design des réseaux de neurones profonds, 9/11/2021, Isabelle Guyon and Michèle Sebag
 PhD in progress - Guillaume BIED, Valorisation des Données pour la Recherche d'Emploi, 1/10/2019, Bruno Crepon (CREST-ENSAE) and Philippe Caillou
 PhD in progress - Leonard BLIER, Vers une architecture stable pour les systèmes d'apprentissage par renforcement, 1/09/2018, Yann Ollivier (Facebook AI Research, Paris) and Marc Schoenauer
 PhD in progress - Balthazar DONON, Deep Statistical Solvers and Power Systems Applications, 1/10/2018, Isabelle Guyon, Marc Schoenauer, and Rémy Clément (RTE)
 PhD in progress - Loris FELARDOS, Neural networks for molecular dynamics simulations, 1/10/2018, Guillaume Charpiat, Jérôme Hénin (IBPC) and Bruno Raffin (Inria Rhône-Alpes)
 PhD in progress - Giancarlo FISSORE, Statistical physics analysis of generative models, 1/10/2017, Aurélien Decelle and Cyril Furtlehner
 PhD in progress - Jérémy GUEZ, Statistical inference of cultural transmission of reproductive success, 1/10/2019, Evelyne Heyer (MNHN) and Flora Jay
 PhD in progress - Armand LACOMBE, Recommandation de Formations : Application de l'apprentissage causal dans le domaine des ressources humaines, 1/10/2019, Michele Sebag and Philippe Caillou
 PhD in progress - Wenzhuo LIU, Machine Learning for Numerical Simulation of PDEs, from 1/11/2019, Mouadh Yagoubi (IRT SystemX) and Marc Schoenauer
 PhD in progress - Emmanuel MENIER, Complementary Deep Reduced Order Model, from 1/9/2020, Michele Alessandro Bucci and Marc Schoenauer
 PhD in progress - Mathieu NASTORG, Machine Learning enhanced resolution of Navier-Stokes equations on general unstructured grids, 4/1/2021, Guillaume Charpiat and Michele Alessandro Bucci
 PhD in progress - Adrien PAVAO, Theory and practice of challenge organization, from 1/03/2020, Isabelle Guyon
 PhD in progress - Herilalaina RAKOTOARISON, Automatic Algorithm Configuration for Power Grid Optimization, 1/10/2017, Marc Schoenauer and Michèle Sebag
 PhD in progress - Théophile SANCHEZ, Reconstructing the past: deep learning for population genetics, 1/10/2017, Guillaume Charpiat and Flora Jay
 PhD in progress - Vincenzo SCHIMMENTI, Earthquake Predictions: Machine-Learned Features using Expert Model Simulations, from 1/11/2020, François Landes and Alberto Rosso (LPTMS)
 PhD in progress - Marion ULLMO, Reconstruction de la toile cosmique, from 1/10/2018, Nabila Aghanim (Institut d'Astrophysique Spatiale) and Aurélien Decelle
 PhD - Elinor WAHAL, Micro-work for AI in health applications, from 1/1/2020 (discontinued 30/11/2021), Paola Tubaro
 PhD in progress - Assia WIRTH, Coloniality of the production of facial recognition technologies, started 01/04/2021, Paola Tubaro
11.2.3 Juries
 Flora Jay: PhD, E. Kerdoncuff (Sorbonne Université), Méthodes d'inférence démographique récente utilisant les polymorphismes et leur liaison génétique; PhD, K. Shimagaki (Sorbonne Université), Advanced statistical modeling and variable selection for protein sequences; PhD, R. Menegaux (PSL Université, Mines ParisTech), Continuous embeddings for large-scale machine learning with DNA sequences
 Marc Schoenauer: PhD, Cornero Maceda, LIMSI; PhD, Filipe Guerreiro Assunção, U. Coimbra, Portugal; PhD committee, Kaitlin Mailhe, Université Toulouse 1 Capitole
 Michele Sebag: HdR, Philippe Esling, IRCAM; PhD, Luciano di Palma, LIX; PhD, Jean-Baptiste Gouray, Univ. d'Artois
 Paola Tubaro: PhD, A. Bouadjo-Boulic (Université Toulouse I Capitole), Génération multi-agents de réseaux sociaux
 Paola Tubaro: PhD, N. Révai (Université de Strasbourg), The dynamics of teachers' professional knowledge in social networks
11.3 Popularization
11.3.1 Internal or external Inria responsibilities
 Marc Schoenauer, Deputy Research Director in charge of AI
 Marc Schoenauer, sherpa for Inria as pilot institution of the PEPR IA (together with CEA and CNRS)
11.3.2 Articles and contents
 Flora Jay, radio interview, Génomique et IA : les liaisons fructueuses, La Méthode scientifique, France Culture, 12 Jan 2021
 Flora Jay, interview, Sur la piste des génomes artificiels, by Sebastián Escalón, Journal du CNRS, 22/11/2021
 Press coverage: Une intelligence artificielle fabrique de l'ADN pour la première fois, by Sofia Gavilan, Science et Vie, 22/02/2021
 Press coverage: La première intelligence artificielle capable de créer des génomes humains, by Camille Gaubert, Sciences et Avenir, 12/02/2021
 Michèle Sebag, video for the exhibition on artificial intelligence (Institut Henri Poincaré, Maison des Mathématiques et de l'Informatique de Lyon), 17 June 2021
11.3.3 Interventions
 Flora Jay, talks at the Paris-Saclay "filles en science" summer school for middle- and high-school girls, 22 and 29 June 2021
12 Scientific production
12.1 Major publications
 1. Article: The Higgs Machine Learning Challenge. Journal of Physics: Conference Series 664(7), December 2015.
 2. In proceedings: Adaptive Operator Selection with Dynamic Multi-Armed Bandits. Proc. Genetic and Evolutionary Computation Conference (GECCO), ACM, 2008, 913-920. ACM-SIGEVO 10-years Impact Award.
 3. Article: Cycle-based Cluster Variational Method for Direct and Inverse Inference. Journal of Statistical Physics 164(3), August 2016, 531-574.
 4. Article: The Grand Challenge of Computer Go: Monte Carlo Tree Search and Extensions. Communications of the ACM 55(3), 2012, 106-113.
 5. Book chapter: Learning Functional Causal Models with Generative Neural Networks. In: Explainable and Interpretable Models in Computer Vision and Machine Learning, Springer Series on Challenges in Machine Learning, Springer International Publishing, 2018. https://arxiv.org/abs/1709.05321
 6. In proceedings: Mixed batches and symmetric discriminators for GAN training. ICML - 35th International Conference on Machine Learning, Stockholm, Sweden, July 2018.
 7. Article: Convolutional Neural Networks for Large-Scale Remote Sensing Image Classification. IEEE Transactions on Geoscience and Remote Sensing 55(2), 2017, 645-657.
 8. Article: Alors: An algorithm recommender system. Artificial Intelligence 244, 2017, 291-314 (published online Dec. 2016).
 9. Article: Information-Geometric Optimization Algorithms: A Unifying Picture via Invariance Principles. Journal of Machine Learning Research 18(18), 2017, 1-65.
 10. Article: Data Stream Clustering with Affinity Propagation. IEEE Transactions on Knowledge and Data Engineering 26(7), 2014.
12.2 Publications of the year
International journals
 11. Influence of free-stream turbulence on the flow over a wall roughness. Physical Review Fluids 6(6), 2021, 063903.
 12. Cartolabe: A Web-Based Scalable Visualization of Large Document Collections. IEEE Computer Graphics and Applications 41(2), April 2021, 76-88.
 13. Simulation of bacterial populations with SLiM. Peer Community Journal, January 2022.
 14. Cascade of phase transitions for multiscale clustering. Physical Review E 103(1), January 2021.
 15. Regularization of Mixture Models for Robust Principal Graph Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, November 2021, 1-1.
 16. Exact Training of Restricted Boltzmann Machines on Intrinsically Low Dimensional Data. Physical Review Letters, September 2021.
 17. Advances in Meta-DL: AAAI 2021 challenge and workshop. Proceedings of Machine Learning Research, 2021.
 18. Short-term Forecasting of Urban Traffic using Spatio-Temporal Markov Field. IEEE Transactions on Intelligent Transportation Systems, 2021, 1-10.
 19. Winning solutions and post-challenge analyses of the ChaLearn AutoDL challenge 2019. IEEE Transactions on Pattern Analysis and Machine Intelligence, April 2021.
 20. Black-Box Optimization Revisited: Improving Algorithm Selection Wizards through Massive Benchmarking. IEEE Transactions on Evolutionary Computation, 2021.
 21. Who bears the burden of a pandemic? COVID-19 and the transfer of risk to digital platform workers. American Behavioral Scientist, January 2022.
 22. Hidden inequalities: the gendered labour of women on micro-tasking platforms. Internet Policy Review 11(1), 2022.
 23. Disembedded or Deeply Embedded? A Multi-Level Network Analysis of Online Labour Platforms. Sociology 55(5), October 2021, 927-944.
 24. Social network analysis: New ethical approaches through collective reflexivity. Introduction to the special issue of Social Networks. Social Networks 67, August 2021, 1-8.
 25. Whose results are these anyway? Reciprocity and the ethics of "giving back" after social network research. Social Networks 67, August 2021, 65-73.
 26. Encoding large scale cosmological structure with Generative Adversarial Networks. Astronomy and Astrophysics - A&A, July 2021.
 27. Creating artificial human genomes using generative neural networks. PLoS Genetics, February 2021.
International peer-reviewed conferences
 28. Zoetrope Genetic Programming for Regression. GECCO 2021, Lille, France, ACM Press, July 2021, 776-784.
 29. On the Identifiability of Hierarchical Decision Models. 18th International Conference on Principles of Knowledge Representation and Reasoning (KR 2021), online, International Joint Conferences on Artificial Intelligence Organization, November 2021, 151-162.
 30. Equilibrium and non-Equilibrium regimes in the learning of Restricted Boltzmann Machines. Proceedings of NeurIPS 2021, Vancouver, United States, December 2021.
 31. Paradiseo: From a Modular Framework for Evolutionary Computation to the Automated Design of Metaheuristics: 22 Years of Paradiseo. GECCO 2021 - Genetic and Evolutionary Computation Conference Companion, Lille/Virtual, France, ACM, July 2021, 1522-1530.
 32. AgEBO-Tabular: Joint Neural Architecture and Hyperparameter Search with Autotuned Data-Parallel Training for Tabular Data. SC '21: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, St. Louis, Missouri, United States, November 2021.
 33. Towards causal modeling of nutritional outcomes. Causal Analysis Workshop Series (CAWS) 2021, online, United States, 2021, 5-19.
 34. MetaREVEAL: RL-based Meta-learning from Learning Curves. Workshop on Interactive Adaptive Learning, co-located with ECML PKDD 2021, Bilbao/Virtual, Spain, September 2021.
 35. Aircraft Numerical "Twin": A Time Series Regression Competition. ICMLA 2021 - 20th IEEE International Conference on Machine Learning and Applications, Pasadena/Virtual, United States, December 2021.
 36. Judging competitions and benchmarks: a candidate election approach. ESANN 2021 - 29th European Symposium on Artificial Neural Networks, Bruges/Virtual, Belgium, October 2021.
 37. LTU Attacker for Membership Inference. Third AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-22), virtual, 2022.
 38. Learning Meta-features for AutoML. ICLR 2022 - International Conference on Learning Representations, virtual, April 2022.
 39. Iterative Learning for Model Reactive Control: Application to Autonomous Multi-agent Control. 7th International Conference on Automation, Robotics and Applications (ICARA), Prague, Czech Republic, IEEE, February 2021, 140-146.
 40. On Refining BERT Contextualized Embeddings using Semantic Lexicons. Workshop on Machine Learning with Symbolic Methods and Knowledge Graphs, co-located with ECML PKDD 2021, online, November 2021. http://ceur-ws.org/Vol-2997/paper4.pdf
National peer-reviewed conferences
 41. Partitionnement en régions linéaires pour la vérification formelle de réseaux de neurones. Journées Francophones des Langages Applicatifs, Saint-Médard-d'Excideuil, France, April 2021.
 42. EXtremely PRIvate supervised Learning. Conférence d'Apprentissage (CAp), Saint-Étienne, France, 2021.
Conferences without proceedings
 43. Congestion-Avoiding Job Recommendation with Optimal Transport. FEAST workshop @ ECML-PKDD 2021, Bilbao, Spain, September 2021.
 44. Closed-loop control of complex systems using deep Reinforcement Learning. Euromech colloquium on Machine learning methods for turbulent separated flows, Paris, France, June 2021.
 45. MLCI: Machine Learning Confidence Intervals for Covid-19 forecasts. BayLearn - Machine Learning Symposium 2021, San Francisco, United States, October 2021.
 46. Designing labor market recommender systems: the importance of job seeker preferences and competition. 4th IDSC of IZA Workshop: Matching Workers and Jobs Online - New Developments and Opportunities for Social Science and Practice, online, October 2021.
Scientific books
 47. OcéanIA: AI, Data, and Models for Understanding the Ocean and Climate Change. July 2021, 164 p.
Scientific book chapters
 48. I2SL: Learn How to Swarm Autonomous Quadrotors Using Iterative Imitation Supervised Learning. In: Progress in Artificial Intelligence, Lecture Notes in Computer Science 12981, Springer International Publishing, September 2021, 418-432.
Edition (books, proceedings, special issue of a journal)
 49. François De Vieilleville, Stéphane May, Adrien Lagrange, A. Dupuis, Rosa Ruiloba, Fred Ngolè Mboula, Tristan Bitard-Feildel, Erwan Nogues, Corentin Larroche, Johan Mazel, Stephan Clémençon, Romain Burgot, Alric Gaurier, Louis Hulot, Léo Isaac-Dognin, Laetitia Leichtnam, Eric Totel, Nicolas Prigent, Ludovic Mé, Rémi Bernhard, Pierre-Alain Moëllic, Jean-Max Dutertre, Katarzyna Kapusta, Vincent Thouvenot, Olivier Bettan, Tristan Charrier, Luc Bonnafoux, Francisco-Pierre Puig, Quentin Lhoest, Thomas Renault, Adrien Benamira, Benoit Bonnet, Teddy Furon, Patrick Bas, Benjamin Farcy, Silvia Gil-Casals, Juliette Mattioli, Marc Fiammante, Marc Lambert, Roman Bresson, Johanne Cohen, Eyke Hüllermeier, Christophe Labreuche, Michele Sebag, Thomas Thebaud, Anthony Larcher, Gaël Le Lan, Nouredine Nour, Reda Belhaj-Soullami, Cédric L.R. Buron, Alain Peres, Frédéric Barbaresco, Antoine d'Acremont, Guillaume Quin, Alexandre Baussard, Ronan Fablet, Louis Morge-Rollet, Frederic Le Roy, Denis Le Jeune, Roland Gautier, Benjamin Camus, Eric Monteux, Mikaël Vermet, Alex Goupilleau, Tugdual Ceillier, Marie-Caroline Corbineau (Eds.). Proceedings of the Conference on Artificial Intelligence for Defence 2020. CAID 2020 - Second Conference on Artificial Intelligence for Defence, Rennes, France, April 2021.
 50. OmniPrint: A Configurable Printed Character Synthesizer. 2022.
 51. Recent ethical challenges in social network analysis. Special issue of Social Networks 67, Elsevier, August 2021, 176 p. Abstract: Research on social networks raises formidable ethical issues that often fall outside existing regulations and guidelines. Even standard informed consent and anonymization are difficult to implement with data about personal relationships. Today, state-of-the-art tools to collect, handle, and store personal data expose both researchers and participants to new risks. Political, military and corporate interests interfere with scientific priorities and practices, while legal and social ramifications of studies of personal ties and human networks come to the surface. The seven papers that form the special issue explore different aspects of ethical issues in contemporary social networks research. The special issue also includes a broad introduction by the guest editors and two invited comments.
Doctoral dissertations and habilitation theses
 52. PhD thesis: Deep Latent Variable Models: from properties to structures. Université Paris-Saclay, October 2021.
 53. PhD thesis: Verification and validation of Machine Learning techniques. Université Paris-Saclay, November 2021.
 54. PhD thesis: Automated Deep Learning: Principles and Practice. Université Paris-Saclay, November 2021.
Reports & preprints
 55. The Tracking Machine Learning challenge: Throughput phase. Preprint, May 2021.
 56. Crowdworking in France and Germany. Report, ZEW-Kurzexpertise Nr. 21-09, Leibniz-Zentrum für Europäische Wirtschaftsforschung (ZEW), October 2021.
 57. Boltzmann Tuning of Generative Models. Preprint, April 2021.
 58. Learning Successor States and Goal-Dependent Values: A Mathematical Viewpoint. Preprint, January 2021.
 59. Cosmology with cosmic web environments I. Real-space power spectra. Preprint, December 2021.
 60. Leveraging the structure of dynamical systems for data-driven modeling. Preprint, December 2021.
 61. Les Nouveaux Intermédiaires du Travail B2B : Comparer les modèles d'affaires dans l'économie numérique collaborative. Report no. 27, DARES - Direction de l'animation de la recherche, des études et des statistiques du Ministère du travail, de l'emploi et de l'insertion, March 2022.
 62. Distribution-Based Invariant Deep Networks for Learning Meta-Features. Preprint, February 2021.
 63. Questioning causality on sex, gender and COVID-19, and identifying bias in large-scale data-driven analyses: the Bias Priority Recommendations and Bias Catalog for Pandemics. Preprint, May 2021.
 64. AutoDEUQ: Automated Deep Ensemble with Uncertainty Quantification. Preprint, January 2022.
 65. DISCO Verification: Division of Input Space into COnvex polytopes for neural network verification. Preprint, May 2021.
 66. dnadna: Deep Neural Architectures for DNA - a deep learning framework for population genetic inference. Preprint, November 2021.
 67. Codabench: Flexible, Easy-to-Use and Reproducible Benchmarking Platform. Preprint, October 2021.
 68. Franco-German position paper on "Speeding up industrial AI and trustworthiness". Report, Secrétariat général pour l'investissement, May 2021.
Other scientific publications
 69. Complementary Deep - Reduced Order Model. Paris, France, June 2021.
 70. Thesis: Découverte de Politiques Interprétables pour l'Apprentissage par Renforcement via la Programmation Génétique. Université Paris Dauphine-PSL, September 2021.
12.3 Cited publications
 71. How Machine Learning won the Higgs Boson Challenge. Proc. European Symposium on ANN, CI and ML, 2016.
 72. Statistical Mechanics of Neural Networks Near Saturation. Annals of Physics 173, 1987, 30-67.
 73. The Tracking Machine Learning challenge: Throughput phase. Computing and Software for Big Science, 2021.
 74. The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. 2008. URL: https://www.wired.com/2008/06/pbtheory/
 75. Glasses and aging: A Statistical Mechanics Perspective. Unpublished, September 2020. 50 pages, 24 figures; an updated version of a chapter initially written in 2009 for the Encyclopedia of Complexity and Systems Science (Springer).
 76. Genetic Programming: An Introduction: On the Automatic Evolution of Computer Programs and Its Applications. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1998.
 77. Per instance algorithm configuration of CMA-ES with limited budget. Proc. ACM-GECCO, 2017, 681-688.
 78. PhD thesis: Per Instance Algorithm Configuration for Continuous Black Box Optimization. Université Paris-Saclay, November 2017.
 79. Neural Optimizer Search with Reinforcement Learning. 34th ICML, 2017, 459-468.
 80. A theory of learning from different domains. Machine Learning 79(1), 2010, 151-175.
 81. From abstract items to latent spaces to observed data and back: Compositional Variational Auto-Encoder. ECML PKDD 2019 - European Conference on Machine Learning and Knowledge Discovery in Databases, Würzburg, Germany, September 2019.
 82. Algorithms for Hyper-Parameter Optimization. NIPS 25, 2011, 2546-2554.
 83. Stochastic Deep Networks. Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Long Beach, California, USA, 9-15 June 2019; Proceedings of Machine Learning Research 97, PMLR, 2019, 1556-1565. URL: http://proceedings.mlr.press/v97/debie19a.html
 84. T-ReX: a graph-based filament detection method. Astronomy and Astrophysics - A&A 637, May 2020, A18.
 85. PhD thesis: Cosmic web environments: identification, characterisation, and quantification of cosmological information. Université Paris-Saclay, September 2021.
 86. Neural Representation and Learning of Hierarchical 2-additive Choquet Integrals. IJCAI-PRICAI 2020 - Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence, Yokohama, Japan, July 2020, 1984-1991.
 87. Invariant Scattering Convolution Networks. IEEE Trans. Pattern Anal. Mach. Intell. 35(8), 2013, 1872-1886.
 88. When deep learning meets ergodic theory. 73rd Annual APS/DFD Meeting, Chicago/Virtual, United States, November 2020.
 89. PhD thesis: Machine Learning in Space Weather. University of Eindhoven, November 2019.
 90. Random matrix methods for wireless communications. Cambridge University Press, 2011.
 91. On Multi-Cause Causal Inference with Unobserved Confounding: Counterexamples, Impossibility, and Alternatives. 2019.
 92. Medical Time-Series Data Generation using Generative Adversarial Networks. AIME 2020 - International Conference on Artificial Intelligence in Medicine, Minneapolis, United States, August 2020, 382-391.
 93. Spectral dynamics of learning in restricted Boltzmann machines. EPL (Europhysics Letters) 119(6), 2017, 60001.
 94. Thermodynamics of Restricted Boltzmann Machines and Related Learning Dynamics. J. Stat. Phys. 172, 2018, 1576-1608.
 95. Density estimation using Real NVP. Int. Conf. on Learning Representations (ICLR), 2017.
 96. PhD thesis: Deep learning methods for predicting flows in power grids: novel architectures and algorithms. Université Paris-Saclay (COmUE), February 2019.
 97. Deep Statistical Solvers. NeurIPS 2020 - 34th Conference on Neural Information Processing Systems, Vancouver/Virtual, Canada, December 2020.
 98. Agnostic feature selection. ECML PKDD 2019 - European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Würzburg, Germany, September 2019.
 99. Machine Learning Control - Taming Nonlinear Dynamics and Turbulence. Springer International Publishing, 2017.
 100. Convolutional Networks on Graphs for Learning Molecular Fingerprints. NIPS 2015, 2224-2232.
 101. Catching up faster by switching sooner: a predictive approach to adaptive estimation with an application to the AIC-BIC dilemma. J. Royal Statistical Society: B 74(3), 2012, 361-417.
 102. Design of an Explainable Machine Learning Challenge for Video Interviews. IJCNN 2017 - 30th International Joint Conference on Neural Networks, Anchorage, AK, United States, IEEE, 2017, 1-8. URL: https://hal.inria.fr/hal-01668386
 103. Explaining First Impressions: Modeling, Recognizing, and Explaining Apparent Personality from Videos. IEEE Transactions on Affective Computing, 2020 (accepted).
 104. AutoML @ NeurIPS 2018 challenge: Design and Results. In: The NeurIPS '18 Competition, Springer Series on Challenges in Machine Learning, Springer Verlag, March 2020, 209-229.
 105. ChaLearn looking at people: A review of events and resources. 2017 International Joint Conference on Neural Networks (IJCNN), 2017, 1594-1601.
 106. Guest Editorial: Image and Video Inpainting and Denoising. IEEE Transactions on Pattern Analysis and Machine Intelligence 42(5), May 2020, 1021-1024.
 107. Efficient and Robust Automated Machine Learning. NIPS 28, 2015, 2962-2970.
 108. Cycle-Based Cluster Variational Method for Direct and Inverse Inference. Journal of Statistical Physics 164(3), 2016, 531-574.
 109. Scaling analysis of affinity propagation. Physical Review E 81(6), 2010, 066102.
 110. Domain-Adversarial Training of Neural Networks. Journal of Machine Learning Research 17(59), 2016, 1-35.
 111. Political Discourse on Social Media: Echo Chambers, Gatekeepers, and the Price of Bipartisanship. WWW, ACM, 2018, 913-922.
 112. Computational social science: Making the links. Nature - News 488(7412), 2012, 448-450.
 113. CAMUS: A Framework to Build Formal Specifications for Deep Perception Systems Using Simulators. ECAI 2020 - 24th European Conference on Artificial Intelligence, Santiago de Compostela, Spain, June 2020.
 114. PhD thesis: Cold-start recommendation: from Algorithm Portfolios to Job Applicant Matching. Université Paris-Saclay, May 2018.
 115. ASAP.V2 and ASAP.V3: Sequential optimization of an Algorithm Selector and a Scheduler. Open Algorithm Selection Challenge 2017, Proceedings of Machine Learning Research 79, 2017, 8-11.
 116. Generative Adversarial Nets. NIPS 27, Curran Associates, Inc., 2014, 2672-2680.
 117. The Minimum Description Length Principle. MIT Press, 2007.
 118. Control by Deep Reinforcement Learning of a separated flow. 73rd Annual Meeting of the APS Division of Fluid Dynamics, Chicago, United States, November 2020.
 119. Design and Analysis of the Causation and Prediction Challenge. WCCI Causation and Prediction Challenge, JMLR W&CP, 2008, 1-33.
 120. Design of the 2015 ChaLearn AutoML challenge. Proc. IJCNN, IEEE, 2015, 1-8.
 121. Analysis of the AutoML Challenge series 2015-2018. In: AutoML: Methods, Systems, Challenges, The Springer Series on Challenges in Machine Learning, Springer Verlag, 2018.
 122. Programming by Optimization. Commun. ACM 55(2), 2012, 70-80.
 123. History of Psychology. McGraw-Hill, 2004.
 124. ICU Bed Availability Monitoring and analysis in the Grand Est region of France during the COVID-19 epidemic. May 2020.
 125. Discussion of "The Blessings of Multiple Causes" by Wang and Blei. 2019.
 126. Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction. Cambridge University Press, 2015.
 127. Conditions objectives de travail et ressenti des individus : le rôle du management. Synthèse n. 14 de "La Fabrique de l'Industrie", 2017, 12 p. URL: https://hal.inria.fr/hal-01742592
 128. Auto-encoding variational Bayes. Int. Conf. on Learning Representations (ICLR), 2014.
 129. Algorithm Selection for Combinatorial Search Problems: A Survey. In: Data Mining and Constraint Programming: Foundations of a Cross-Disciplinary Approach, Springer Verlag, Cham, 2016, 149-190.
 130. ImageNet Classification with Deep Convolutional Neural Networks. Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS'12), 2012, 1097-1105.
 131. Interdisciplinary Research in Artificial Intelligence: Challenges and Opportunities. Frontiers in Big Data 3, November 2020.
 132. Attractive versus truncated repulsive supercooled liquids: The dynamics is encoded in the pair correlation function. Physical Review E 101(1), January 2020.
 133. Life in the network: the coming age of computational social science. Science 323(5915), 2009, 721-723.
 134. Deep Learning est mort. Vive Differentiable Programming! 2018. URL: https://www.facebook.com/yann.lecun/posts/10155003011462143
 135. Evolutionary architecture search for deep multitask networks. Proc. ACM-GECCO, ACM, 2018, 466-473. URL: http://doi.acm.org/10.1145/3205455.3205489
 136 articleFault Heterogeneity and the Connection between Aftershocks and Afterslip.Bulletin of the Seismological Society of America1093April 2019, 11561163
 137 articleAutomatic design and manufacture of robotic lifeforms.Nature Letters4062000, 974978
 138 inproceedingsAutoDL Challenge Design and Beta TestsTowards automatic deep learning.CiML workshop @ NIPS2018Montreal, CanadaDecember 2018
 139 inproceedingsAsymptotic Analysis of Metalearning as a Recommendation Problem.AAAI Workshop on MetaLearning and MetaDLVirtual, CanadaFebruary 2021
 140 articleTowards Automated Computer Vision: Analysis of the AutoCV Challenges 2019.Pattern Recognition Letters135July 2020, 196203
 141 inproceedingsTowards Automated Deep Learning: Analysis of the AutoDL challenge series 2019.NeurIPS 2019  Thirtythird Conference on Neural Information Processing Systems, Competition and Demonstration Track123Vancouver / Virtuel, United StatesPMLRDecember 2020, 242252
 142 inproceedingsRelaxed Quantization for Discretized Neural Networks.ICLR2019
 143 articleAlors: An algorithm recommender system.Artificial Intelligence244Published online Dec. 20162017, 291314
 144 articleLearning to run a power network challenge for training topology controllers.Electric Power Systems Research189December 2020, 106635
 145 inproceedingsLearning To Run a Power Network Competition.CiML Workshop, NeurIPSMontréal, CanadaDecember 2018
 146 inproceedingsWhich Training Methods for GANs do actually Converge?35th ICMLInternational Conference on Machine Learning2018, 34813490
 147 articleVariational Dropout Sparsifies Deep Neural Networks.ArXiv eprintsJanuary 2017
 148 bookWeapons of Math Destruction. Crown Books2016
 149 miscComment on "Blessings of Multiple Causes".2019
 150 articleWaveNet: A Generative Model for Raw Audio.CoRRabs/1609.034992016, URL: http://arxiv.org/abs/1609.03499
 151 inproceedingsDesign and Analysis of Experiments: A Challenge Approach in Teaching.NeurIPS 2019  33th Annual Conference on Neural Information Processing SystemsVancouver, CanadaDecember 2019
 152 bookCausality: Models, Reasoning, and Inference (2nd edition).Cambridge University Press2009
 153 inproceedingsTheoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution.11th ACM WSDM2018, 3
 154 article. The influence of the brittle-ductile transition zone on aftershock and foreshock occurrence. Nature Communications 11(1), June 2020.
 155 inproceedings. On Fairness and Calibration. NIPS, 2017, 5684–5693.
 156 article. Deep Hidden Physics Models: Deep Learning of Nonlinear Partial Differential Equations. JMLR 19, 2018, 1–24.
 157 inproceedings. Automated Machine Learning with Monte-Carlo Tree Search. IJCAI-19 - 28th International Joint Conference on Artificial Intelligence, Macau, China, International Joint Conferences on Artificial Intelligence Organization, August 2019, 3296–3303.
 158 inproceedings. Overview of the Multimedia Information Processing for Personality and Social Networks Analysis Contest. International Conference on Pattern Recognition (ICPR), IEEE, 2018, 127–139.
 159 inproceedings. Large-Scale Evolution of Image Classifiers. 34th ICML, 2017.
 160 article. The Algorithm Selection Problem. Advances in Computers 15, 1976, 65–118. URL: http://www.sciencedirect.com/science/article/pii/S0065245808605203
 161 book. Information and Complexity in Statistical Modeling. Information Science and Statistics, Springer-Verlag, 2007.
 162 article. Deep learning for population size history inference: Design, comparison and combination with approximate Bayesian computation. Molecular Ecology Resources, 2020.
 163 article. Distilling free-form natural laws from experimental data. Science 324(5923), 2009, 81–85.
 164 inproceedings. Closed-loop optimal control for shear flows using reinforcement learning. 73rd Annual APS/DFD Meeting, Chicago, United States, November 2020.
 165 article. An End-to-End Neural Network for Polyphonic Piano Music Transcription. IEEE/ACM Trans. Audio, Speech & Language Processing 24(5), 2016, 927–939.
 166 article. DGM: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics 375, 2018, 1339–1364.
 167 book. Another Science Is Possible. Open Humanities Press, 2013.
 168 inproceedings. Matchbox: large scale online bayesian recommendations. WWW, Madrid, Spain, ACM Press, 2010, 111.
 169 inproceedings. Lessons learned from the AutoML challenge. Conférence sur l'Apprentissage Automatique 2018, Rouen, France, June 2018.
 170 book. Nudge: Improving Decisions about Health, Wealth, and Happiness. Yale University Press, 2008.
 171 article. Intriguing properties of neural networks. CoRR abs/1312.6199, 2013. URL: http://arxiv.org/abs/1312.6199
 172 inproceedings. Unbiased online recurrent optimization. International Conference on Learning Representations, Vancouver, Canada, April 2018.
 173 article. OpenML: networked science in machine learning. SIGKDD Explorations 15(2), 2013, 49–60. URL: https://arxiv.org/abs/1407.7722
 174 inproceedings. Results and Analysis of ChaLearn LAP Multimodal Isolated and Continuous Gesture Recognition, and Real versus Fake Expressed Emotions Challenges. International Conference on Computer Vision (ICCV 2017), 2017. URL: https://hal.inria.fr/hal01677974
 175 misc. The Blessings of Multiple Causes: A Reply to Ogburn et al. (2019). 2019.
 176 misc. Intelligence per Kilowatthour. 2018. URL: https://icml.cc/Conferences/2018/Schedule?showEvent=1866
 177 article. The context-tree weighting method: basic properties. IEEE Transactions on Information Theory 41(3), 1995, 653–664.
 178 inproceedings. Synthesizing Quality Open Data Assets from Private Health Research Studies. BIS 2020 - International Conference on Business Information Systems, Colorado Springs, United States, Springer Verlag, June 2020, 324–335.
 179 article. Generation and Evaluation of Privacy Preserving Synthetic Health Data. Neurocomputing 416, November 2020, 244–255.
 180 article. Data Stream Clustering with Affinity Propagation. IEEE Transactions on Knowledge and Data Engineering 26(7), 2014, 1.
 181 article. Human-level control through deep reinforcement learning. Nature 518(7540), 2015, 529–533. URL: https://doi.org/10.1038/nature14236
 182 article. Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. CoRR abs/1712.01815, 2017. URL: http://arxiv.org/abs/1712.01815