
2023 Activity Report
Project-Team PLATON

RNSR: 202023682J
  • Research center: Inria Saclay Centre at Institut Polytechnique de Paris
  • In partnership with: CNRS
  • Team name: Uncertainty Quantification in Scientific Computing and Engineering
  • In collaboration with: Centre de Mathématiques Appliquées (CMAP)
  • Domain: Applied Mathematics, Computation and Simulation
  • Theme: Numerical schemes and simulations

Keywords

Computer Science and Digital Science

  • A3.4.1. Supervised learning
  • A3.4.2. Unsupervised learning
  • A3.4.5. Bayesian methods
  • A3.4.7. Kernel methods
  • A6. Modeling, simulation and control
  • A6.1. Methods in mathematical modeling
  • A6.1.1. Continuous Modeling (PDE, ODE)
  • A6.1.2. Stochastic Modeling
  • A6.1.5. Multiphysics modeling
  • A6.2. Scientific computing, Numerical Analysis & Optimization
  • A6.2.1. Numerical analysis of PDE and ODE
  • A6.2.4. Statistical methods
  • A6.2.6. Optimization
  • A6.2.7. High performance computing
  • A6.3. Computation-data interaction
  • A6.3.1. Inverse problems
  • A6.3.2. Data assimilation
  • A6.3.3. Data processing
  • A6.3.4. Model reduction
  • A6.3.5. Uncertainty Quantification
  • A6.5.2. Fluid mechanics

Other Research Topics and Application Domains

  • B3. Environment and planet
  • B3.3. Geosciences
  • B4. Energy
  • B4.3. Renewable energy production
  • B5.2.1. Road vehicles
  • B5.2.3. Aviation
  • B5.2.4. Aerospace
  • B5.5. Materials

1 Team members, visitors, external collaborators

Research Scientists

  • Pietro Marco Congedo [Team leader, INRIA, Senior Researcher, HDR]
  • Olivier Le Maitre [CNRS, Senior Researcher, HDR]

PhD Students

  • Meryem Benmahdi [DASSAULT SYSTEMES, CIFRE]
  • Michele Capriati [INSTITUT VKI, until Nov 2023]
  • Hugo Dornier [ONERA, from Mar 2023]
  • Marius Duvillard [CEA]
  • Sanae Janati Idrissi [CEA]
  • Zachary Jones [INRIA]
  • Omar Kahol [ECOLE POLY PALAISEAU, from Sep 2023]
  • Hugo Masson [UNIV GUSTAVE EIFFEL, from Oct 2023]
  • Hugo Nicolas [INRIA, from Nov 2023]
  • Christos Papagiannis [CNRS]
  • Malo Pocheau [BañulsDesign, until Nov 2023]

Technical Staff

  • Hanane Khatouri [INRIA, Engineer, from Aug 2023]

Interns and Apprentices

  • Raphael Alves Hailer [INRIA, Intern, from May 2023 until Aug 2023]
  • Lin Xi Li [INRIA, Intern, from Sep 2023]

Administrative Assistant

  • Anna Dib [INRIA]

Visiting Scientist

  • Marylou Gabrie [ECOLE POLY PALAISEAU, from Mar 2023]

External Collaborators

  • Giulio Gori [ECOLE POLYT. MILAN, until Jun 2023]
  • Weiqiang Liu [UNIV MCGILL, from Oct 2023]

2 Overall objectives

Computational approaches in science and engineering rely on numerical tools to produce effective, robust, and high fidelity predictions through the simulation of complex physical systems. The design and development of simulation tools encompass numerous aspects, ranging from the initial mathematical formulation of the problem to its actual numerical resolution, including the design of numerical algorithms suited to computational architectures of modern supercomputers, in particular massively parallel machines.

To fully achieve the promises of numerical simulations in sciences and engineering, it is essential to continuously assess and improve their predictive capabilities. Obvious improvements concern the modeling aspects (higher fidelity) and numerical efficiency (to enable higher resolution). However, as computational capabilities progress, it is becoming more and more evident that accounting for the various uncertainties involved in the simulation process is critical. The reason is that the accurate simulation of a complex system has practical utility if, and only if, one can prescribe the investigated system with sufficient precision. In other words, obtaining high-fidelity predictions of a system different from the one targeted presents limited interest. The problem here is that, except for purely academic situations, specifying precisely all the properties and forcings applied to a complex system is impossible. Whether the precise definition of the system is impossible because of inherent variabilities, lack of knowledge, or imprecise calibration procedures (experimental setups and measurements are inherently inexact), totally eliminating uncertainty sources is not an option. As a result, the simulation should account for these uncertainties and quantify their impact on the predictions (similarly to the characterization of experimental errors) in order to assess the truthfulness of the simulation objectively and enable fully informed decision-making. As a matter of fact, reliable numerical predictions require both sophisticated physical models and the systematic and comprehensive treatment of inherent uncertainties, including calibration and validation procedures. Broadly, prediction errors result from physical simplifications in the mathematical model, numerical errors arising from the discretization and numerical methods (solvers), and uncertainties in the definition of the model to be solved (input uncertainties).

Uncertainty management procedures are often tailored to the particular problem and application considered. In our experience, it is hard to conceive a systematic a priori approach suitable for all problems. Most often, the UQ analysis consists in the gradual (re)definition and extension of its objectives, which can be somewhat vague initially. It is, therefore, crucial to have a large portfolio of diverse numerical methods to quickly propose and apply suitable treatments in response to the evolving understanding and needs as they emerge during the analysis.

The global objective of the research proposed within Platon is to develop advanced numerical methods and practices in simulations, integrating uncertainty management as much as possible. Here, uncertainty management encompasses multiple uncertainty tasks: a) uncertainty characterization (the construction and identification of uncertainty models), b) uncertainty propagation (computation of the model-based prediction uncertainty), c) uncertainty reduction (by inference, data assimilation, design of new experiments, either physical or numerical,...) and d) uncertainty treatment in decision-making processes (sensitivity analysis, risk management, robust optimization,...). Note that one should not perceive these different uncertainty tasks as reflecting an ordered sequence of analysis steps. On the contrary, our vision and experience value a strong interaction between all these tasks which, ideally, must be visited in an order commanded by the initial information, the progress of the analysis, and the resources available.
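
To make task (b) concrete, the sketch below illustrates the simplest non-intrusive propagation approach, plain Monte Carlo sampling of the uncertainty model; the one-line model and input distributions are hypothetical stand-ins for an expensive simulation. The methods developed in Platon can be viewed as ways of making such analyses affordable when each model evaluation is costly.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(k, f):
    """Hypothetical scalar model standing in for a full simulation: u = f / k."""
    return f / k

# (a) uncertainty model: lognormal stiffness and normal forcing (assumed here)
k = rng.lognormal(mean=0.0, sigma=0.1, size=100_000)
f = rng.normal(loc=1.0, scale=0.05, size=100_000)

# (b) propagation: sample the inputs, evaluate the model, estimate statistics
u = model(k, f)
print(f"mean = {u.mean():.4f}, std = {u.std():.4f}, "
      f"P95 = {np.quantile(u, 0.95):.4f}")
```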

Progressing on all these tasks constitutes a significant challenge as they involve a diversity of thematics and skills. This difficulty is prominent in the context of large scale simulations, where practitioners and researchers tend to be highly specialized in specific aspects (modeling, numerical schemes, parallel computing,...). Further, more massive simulations are often conflated with better predictions, overshadowing the importance of uncertainties. At the same time, high simulation costs usually prevent applying straightforward uncertainty analyses as, for a fixed budget, one often prefers a simulation at the highest affordable resolution rather than an uncertainty analysis involving possibly less resolved simulations. However, this preference is most often not based on an objective assessment of the situation. In contrast, we believe that using complex models and fairly exploiting the predictions of large scale simulations require suitable uncertainty management procedures. Further, we are convinced of the importance of a research effort encompassing as much as possible all uncertainty tasks, to ensure the coherence and mutual relevance of the methods developed. Such an effort focusing on uncertainty management, rather than on a particular application, will be critical to improving the predictive capabilities of simulation tools and addressing industrial and societal needs.

Therefore, the main objectives of the team will be:

  • Propose new methods and approaches for uncertainty management.
  • Develop these methods into numerical tools applicable to large scale simulations.
  • Apply and demonstrate the impact of uncertainty management in real applications with industrial and academic partners.

To achieve these objectives, we rely on the expertise and past research of the permanent members, which cover most of the uncertainty tasks (propagation, inference, reduction, optimization,...), although not in a comprehensive way so far. The development of new predictive simulation tools also relies on collaborations, mainly within the international academic network that we have established over the past 15 years and within the Centre de Mathématiques Appliquées de l'École Polytechnique. The development of useful uncertainty management frameworks applicable to large scale simulations demands constant interaction with end-users (engineers, practitioners, researchers); we rely on our current network of industrial partners and EPICs and will extend it progressively.

3 Research program

The Team's approach to research will be bottom-up: starting from new ideas and concepts to address both existing (known) and emerging (anticipated or not) problems. The latter point, concerning the emerging problems, is particularly important in a quickly evolving research area with constantly improving methodological and computational capabilities. The research thrust will be structured along two principal directions: a methodological axis and an applications axis.

3.1 UQ methodologies and tools

The Team will continuously work on developing original UQ representations and algorithms to deal with complex and large-scale models having high-dimensional input parameters with complex influences. We plan to organize our core research activities along different methodological UQ developments related to the challenges discussed above.

3.1.1 Surrogate modeling for UQ

Challenges. Surrogate models are crucial to enable the solution of both forward and backward UQ problems. Several alternative approaches, such as Polynomial Chaos, Gaussian Processes, and tensor format approximation, have been proposed and developed over the last decades. These approaches have been successfully applied to many different domains. Still, surrogate models for UQ management face many remaining limitations that require significant research work to handle large scale simulation-based studies and account for complex dependencies. These limitations concern multiple aspects, including the complexity related to the dimensionality of the input parameters, the definition of suitable basis representations, the complexity of the surrogate construction, and the control of the surrogate error.

Proposed actions. Platon will pursue long-term efforts in the continuity of previous developments, such as the improvement of advanced sparse grid methods, sparsity-promoting strategies, and low-rank methods. Besides these generic developments, a first research axis will focus on the construction of surrogates for multi-physics problems (fluids, structures, chemistry,...) simulated by a system of coupled solvers. Classical surrogate methods consider the system of solvers as a single entity, and their construction requires complete simulations, with a high cost as a result. In contrast, we are proposing a divide-to-simplify strategy, using a surrogate of each constitutive solver, which reduces the input dimensionality of the local models and enables parallel construction and more flexible control of the computational effort. We will have to derive suitable error estimates of the contributions of the individual solvers and procedures to select new computer experiments that optimally reduce the error. A second research axis on surrogate models will concern complexity reduction using transformation methods. Transformations can act on the input or output spaces of the model. In the first case, dimensionality reduction is achieved by finding low dimensional subspaces of the input space that convey most of the output variability. Platon will extend these methodologies to incorporate non-linear subspaces and alternative importance measures, in particular, to account for the surrogate's final usage (goal-oriented reduction). For the reduction of the output, we will consider generalizations of the preconditioning approach, which transforms the model output to a form admitting a much simpler surrogate and implicit enforcement of physical constraints. Here, the main challenges will be the automatic selection of the transformation among a dictionary and the design of computer experiments in this context (see below).
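
For reference, the sketch below shows one of the classical constructions these developments extend: a Polynomial Chaos surrogate fitted by least-squares regression for a single Gaussian input. The solver is a hypothetical placeholder for an expensive simulation.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander, hermeval

rng = np.random.default_rng(1)

def solver(xi):
    """Placeholder for an expensive simulation with one Gaussian input."""
    return np.exp(0.3 * xi) * np.sin(xi)

# design of computer experiments: sample the Gaussian germ, run the solver
xi_train = rng.standard_normal(200)
y_train = solver(xi_train)

# least-squares fit of PC coefficients on probabilists' Hermite polynomials
degree = 8
Psi = hermevander(xi_train, degree)       # Vandermonde matrix of He_k(xi)
coef, *_ = np.linalg.lstsq(Psi, y_train, rcond=None)

# the surrogate is cheap to evaluate and to sample
xi_test = rng.standard_normal(10_000)
y_surr = hermeval(xi_test, coef)
print("surrogate mean estimate:", y_surr.mean())
print("PC mean coefficient:    ", coef[0])   # E[He_k] = 0 for k >= 1
```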

3.1.2 Uncertainty model, information theory and inference

Challenges. Uncertainty management in simulation can be considered to be in its infancy, and the control of the whole process, from the definition of the uncertainty model to the design of new simulations or experiments for uncertainty reduction, is still facing multiple challenges. Most past works on UQ have focused on forward-propagation and inverse problems whereas, in contrast, input uncertainty models and uncertainty reduction strategies in general have received much less attention.

Proposed actions. The uncertainty model directly affects the conclusions of UQ analyses (e.g., sensitivity analyses, estimation of failure probabilities, rare events). Therefore, it is crucial to propose uncertainty models that consistently and objectively integrate all available information and expert knowledge. Platon will explore the application of the maximum entropy principle, likelihood maximization, and moment matching methods for the construction of uncertainty models in engineering problems.
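
A minimal sketch of the moment-matching route to a maximum-entropy model is given below, assuming the available information reduces to the first two moments of a variable supported on [0, 1]; the moment values are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import root

# assumed available information: first two moments of a variable on [0, 1]
m = np.array([0.4, 0.2])

# the max-entropy density with these constraints is p(x) ∝ exp(l1*x + l2*x^2)
def moments(lam):
    p = lambda x: np.exp(lam[0] * x + lam[1] * x**2)
    Z = quad(p, 0.0, 1.0)[0]
    return np.array([quad(lambda x: x**k * p(x), 0.0, 1.0)[0] / Z
                     for k in (1, 2)])

# solve for the Lagrange multipliers by matching the moments
sol = root(lambda lam: moments(lam) - m, x0=np.zeros(2))
print("converged:", sol.success, " multipliers:", sol.x)
print("matched moments:", moments(sol.x), " target:", m)
```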

For the inverse problem, the Team will continue its efforts in Bayesian inference toward better treatment of the model error in the calibration procedure.

Concerning uncertainty reduction, a central question is predicting the improvement toward the specific objective brought by a new simulation (computer experiment). Platon will investigate different design-of-experiment (DoE) strategies based on measures of this improvement, such as entropy reduction, besides the classical reduction of variance.

The DoE in inference consists in proposing new physical experiments to optimally reduce the posterior uncertainty. Optimizing the information gain leads to expensive numerical procedures, and suitable model error and noise models are critical to ensure the robustness of these optimal DoE procedures when applied to real-life data. Platon will work on approximation and reduction methods for optimal DoE to enable applications in large-scale engineering problems, and on extending the optimization to uncertainty reduction in general model predictions, not just in the model parameters.
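
The core quantity in such optimal DoE procedures is the expected information gain of a candidate experiment. A nested Monte Carlo estimator for a toy experiment is sketched below; the forward model, noise level, and sample sizes are hypothetical, and real applications require the approximation and reduction methods discussed above.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

rng = np.random.default_rng(2)

def forward(theta, d):
    """Toy experiment model: the design d controls the sensitivity to theta."""
    return np.sin(d) * theta

def expected_information_gain(d, n_outer=500, n_inner=500, sigma=0.1):
    """Nested MC estimator of the expected KL divergence prior -> posterior."""
    th_out = rng.standard_normal(n_outer)      # prior samples (outer loop)
    th_in = rng.standard_normal(n_inner)       # prior samples (inner loop)
    y = forward(th_out, d) + sigma * rng.standard_normal(n_outer)
    log_lik = norm.logpdf(y, loc=forward(th_out, d), scale=sigma)
    log_ev = np.array([logsumexp(norm.logpdf(yi, loc=forward(th_in, d),
                                             scale=sigma)) - np.log(n_inner)
                       for yi in y])            # log evidence p(y_i)
    return np.mean(log_lik - log_ev)

designs = np.linspace(0.1, 3.0, 10)
best = max(designs, key=expected_information_gain)
print("most informative design:", best)   # expected near pi/2, where |sin d| peaks
```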

3.1.3 Multi-fidelity, Multi-level and optimization under uncertainty

Challenges. Multi-fidelity and Multi-level (MF&L) methods have been proposed to reduce the cost of surrogate model construction or of statistics estimation, by relying on simulators of different complexity (in the modeled physics, the discretization, or both). Although these methods have proved effective, particularly in the context of expensive simulations, existing algorithms must be adapted to other tasks. MF&L strategies are also missing in Robust Optimization (RO) and Reliability-Based Optimization (RBO), where one has to evaluate the objective accurately, typically some statistics of the model output (moments, quantiles, ...).

Proposed actions. The Team Platon will explore MF&L approaches and the design of computer experiments to obtain the best estimation at the lowest cost (or for a prescribed computational budget) for nontrivial goals, specifically optimization and reliability problems where the accuracy needed is not uniform, possibly unknown a priori, and to be estimated as the construction proceeds.

In RO and RBO, our research will focus on the estimation of robustness and reliability measures with tunable fidelity, to adapt the convergence of the statistics to the advancement of the optimization procedure. Platon will include MF&L in the so-called bounding-box approach to track the level of error in the statistical estimates. Another research axis will focus on alternative estimation methods, e.g., Quantile Bayesian regression, to include MF&L features. A minimal sketch of the multilevel idea is given after this paragraph.
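
The sketch below illustrates the multilevel idea in its simplest form, a multilevel Monte Carlo estimator of a mean with a hypothetical solver hierarchy; the MF&L research above targets far less trivial goals (quantiles, optima), where such telescoping must be adapted.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(level, w):
    """Placeholder solver at discretization level l, converging to exp(w)."""
    return np.exp(w) * (1.0 + 2.0**-level * np.sin(w))

def mlmc_mean(n_per_level):
    """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    est = 0.0
    for level, n in enumerate(n_per_level):
        w = rng.standard_normal(n)       # same random input at both levels
        if level == 0:
            est += model(0, w).mean()
        else:
            est += (model(level, w) - model(level - 1, w)).mean()
    return est

# many cheap coarse samples, few expensive fine ones
print("MLMC estimate:", mlmc_mean([100_000, 10_000, 1_000]))
print("limit value E[exp(W)]:", np.exp(0.5))   # a bias from the finest level remains
```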

3.1.4 HPC and UQ problems

Challenges. Both intrusive and non-intrusive UQ methods are associated with large computational costs, ranging from several to millions of times the cost of a deterministic solution, depending on the problem and task considered. This situation is a significant obstacle to the deployment of UQ analyses in large scale simulations, and computational aspects have long been central. However, works concerning the exploitation of High-Performance Computing platforms with massive parallelism are still scarce, besides the trivial parallelism of some sampling methods (e.g., Monte Carlo). Further, past efforts have concerned the formulation of the stochastic problem and relied on existing advanced solution methods (e.g., Domain Decomposition, linear algebra libraries, parallelism). Few works have fully considered exploiting stochastic structures and HPC aspects to design novel computational strategies fully dedicated to UQ problems.

Proposed actions. Platon will continue to develop solvers for the resolution of the multiple large systems resulting from the discretization of sampled stochastic problems. In particular, we shall focus on linear and non-linear (Newton-like) solvers, exchanging information (Krylov spaces) between successive solves to improve the convergence rate of iterative methods. Besides the extension to non-linear problems, the work will focus on implementation aspects and consider communication strategies when several instances of the random system are solved in parallel.
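
A much simplified illustration of information reuse across successive solves is sketched below: each sampled linear system is warm-started with the previous sample's solution, which already reduces iteration counts when samples are close. The sampled reaction term is hypothetical, and the team's methods share richer Krylov information than a single vector.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

rng = np.random.default_rng(4)
n = 200

# 1D Laplacian plus a sampled reaction coefficient: (K + xi*I) x = b
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") * (n + 1)**2
b = np.ones(n)

x_prev = np.zeros(n)
for sample in range(5):
    xi = rng.lognormal(sigma=0.3)          # sampled (hypothetical) coefficient
    A = K + xi * sp.identity(n, format="csr")
    its = []
    # warm start: nearby samples have nearby solutions, so CG converges faster
    x_prev, info = cg(A, b, x0=x_prev, callback=lambda xk: its.append(1))
    print(f"sample {sample}: xi = {xi:.3f}, CG iterations = {len(its)}")
```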

Platon will continue to develop specific domain decomposition methods for stochastic problems, and to propose effective stochastic preconditioners exploiting the independence of the local (uncertain) sub-problems. An additional but critical point concerns the association of adaptive mesh refinement (AMR, in space) with multiple resolution analysis (MRA, in the parameters) methods. Few works have solved UQ problems with deterministic AMR, and combining the two adaptive approaches within a parallel framework remains challenging; progress in this direction would enable efficient intrusive solvers for conservation laws.

4 Application domains

In this section, we provide some examples of UQ problems with industrial interests. We believe they are illustrative of how we envision interactions and knowledge transfer with industrial partners. These examples involve industrial and academic partnerships, with active projects and contracts.

4.1 Simulation of space objects

Challenges. The French aerospace industry is facing enormous technological challenges in a highly competitive market. We focus on two relevant problems, i.e., the design of the booster of the Ariane 6 launch vehicle and the atmospheric reentry of space vehicles or satellites. The launch vehicle's structure sustains severe mechanical and thermal stresses during the ignition stage, which are challenging to model accurately. Therefore, the design still relies heavily on experimental measurements and safety margins, whereas a better account of model uncertainty would help improve the design procedures. Concerning the atmospheric reentry, recent regulations require that the reentry of human-made end-of-life space objects be accompanied by a rigorous assessment of the risk to human assets. The risk evaluation requires sequences of complex numerical simulations accounting for the multi-physics phenomena occurring during the reentry of a space object, e.g., fluid-structure interactions and heat transfer. Further, these simulations are inaccurate because they rely on overly simplified models (e.g., a reliable model of fragmentation is not available yet) and partial knowledge of the reentry conditions.

4.2 Predictive simulation of complex flows in nuclear reactors

Challenges. In the nuclear field, a systematic issue is that the calibration and validation of the mathematical model use experimental data measured on devices that are scaled versions of the actual design. One expects the scaled models to exhibit the same physics as the actual design, although the two operate in different conditions. Because of prohibitive computational costs, only parts of the reactor can be simulated with computational-fluid-dynamics (CFD) models. An open question is then how to accurately estimate the global prediction error associated with the resulting numerical model. The long-term objective in this field is to perform a so-called up-scaling approach, integrating simulations of different parts of the reactor with the experiments available on scaled and actual designs, to improve the global predictive capability of the simulation and support decisions regarding new experiments.

4.3 Robust design of ORC turbines for renewable energy sources

Challenges. Organic Rankine Cycles (ORCs) are of key importance in renewable energy systems. The thermodynamic properties of organic fluids present technological advantages for low-grade heat sources, e.g., geothermal, solar, or industrial waste heat. The use of these systems in different physical locations worldwide and with different heat source conditions implies large variability in the turbine's operating conditions. For this reason, ORC manufacturers are highly interested in evaluating the variability in the system efficiency and, eventually, in the robust design of the turbines. Moreover, the molecular complexity of organic fluids requires sophisticated thermodynamic models. Nevertheless, the scarcity of experimental data makes it hard to calibrate both the thermodynamic models and their parameters (the acentric factor, among other critical properties), as well as to infer a suitable turbulence model.

4.4 Uncertainty and inference in geosciences

Challenges. Uncertainty and inference are crucial in geosciences, where every prediction is affected by lack of knowledge, imprecise calibration, and model error. It is essential to make the best use of the available information and to account objectively for the actual state of knowledge. Besides, depending on the application, experimental observations can be very scarce or highly abundant, and models can be crude or highly sophisticated, such that different methods are needed to adapt to the context. Further, these methods should ideally consider all sources of error (data error, calibration uncertainty, model error, numerical error) globally, to balance them and ensure that resources are properly allocated to improve the prediction. For these reasons, Platon will continue to work on methodologies for applications in geosciences.

4.5 Research plan

Most of the actions proposed above are either initiated or planned to start shortly. They are organized and structured around Ph.D. and Post-Doc research activities and will not exceed the duration of the project. Apart from these actions, we will continuously conduct more exploratory research activities to improve, for instance, the treatment of (structural) model errors in uncertainty management, assess the potential application of machine learning algorithms to UQ, and advance toward holistic management of uncertainties.

5 Social and environmental responsibility

5.1 Impact of research results

Pollution reduction in commercial aircrafts

In EASA's 2019 annual report, in-flight icing was identified as a priority 1 issue for large aeroplanes. Therefore, to comply with certification rules, airframe and engine manufacturers must demonstrate safe operation under icing conditions, which leads to significant costs before a new product is put into service. Wind tunnel tests and flight tests in icing conditions are usually required because of the low confidence certification authorities place in simulations, owing to the complexity of the icing process.

A breakthrough, leading to a reduction of time-to-market and certification costs, would be obtained by creating a consensus among certification authorities about the reliability of simulation tools for predicting in-flight ice accretion and the operation of ice protection systems (IPS).

TRACES is a European Joint Doctorate network whose main goal is to provide high-level training in the field of in-flight icing to deliver a new generation of high-achieving Doctoral Researchers (DR) in the diverse disciplines necessary for mastering the complexity of ice accretion and its mitigation in aircraft and aeroengines. In TRACES, Platon is developing novel methods to assess the calibration procedure by detecting potential inaccuracies of the icing model, and to perform uncertainty quantification studies systematically propagating the posterior distribution of each model parameter.

Renewable energy sources

Platon is involved in the development of advanced numerical tools to simulate Organic Rankine Cycles (ORCs), which are of key importance in renewable energy systems. Specifically, we are working on the inference of thermodynamic model parameters for complex molecular compounds, using experimental data from the first facility of its kind worldwide, at Politecnico di Milano. Secondly, we are developing a robust optimization framework for the shape design of ORC turbines. We hope to apply these methodologies to real-case scenarios in collaboration with manufacturers within the H2020-MSCA-ITN NICE (submitted this year).

6 Highlights of the year

The team has created a joint laboratory with the SME Bañulsdesign within the ANR-Labcom MATritime.

7 New software, platforms, open data

7.1 New software

7.1.1 Stocholm

  • Name:
    Stocholm
  • Keyword:
    Uncertainty quantification
  • Functional Description:
    Stocholm is a numerical library that allows the team to respond swiftly to potential partners and to draft UQ solutions addressing new questions. It includes Polynomial Chaos construction, manipulation, and algebra; adaptive sparse grid methods for integration, interpolation, and projection in high dimension; stochastic multi-resolution analysis tools with error estimators; advanced regression methods with regularization techniques and Gaussian process modeling; sampling methods with LHS, QMC, and Markov Chain Monte Carlo algorithms; a Bayesian inference framework and fast density estimation methods; Bayesian optimization algorithms with robust and multi-objective strategies, ...
  • Release Contributions:
    We will continue integrating existing tools and new ones into the Stocholm library (C++), the most general one, to allow for maximum interoperability of the constitutive utilities. Having a unique library shared by the whole group is also of interest to students and new researchers joining the Team, as they can benefit from the others' experience.
  • Authors:
    Olivier Le Maitre, Pietro Marco Congedo
  • Contact:
    Olivier Le Maitre
  • Partners:
    CNRS, Ecole Polytechnique

8 New results

8.1 Research axis 1: Uncertainty Quantification and Inference

Participants: P.M. Congedo, O. Le Maître, M. Capriati, M. Duvillard, M. Benmahdi, S. Idrissi, O. Kahol, H. Khatouri.

Project-team positioning

Many research groups are presently working on Uncertainty Quantification (UQ) and inference problems around the world and in France. For instance, the US has created and continues to expand large multi-disciplinary groups to address UQ challenges in the energy and military domains through its national laboratories (SANDIA, Oak-Ridge, LLNL,...). These groups aim at providing generic methods and tools (mostly software) for the resolution of UQ problems (for example, the Dakota code from Sandia-Albuquerque) faced by other research groups from diverse application domains. Other countries support smaller initiatives, including the CEA (civil and military) in France. Several large industrial groups, such as Bosch, EADS, or EdF, are also deploying UQ methodologies and tools (for example, the OpenTurns code from EADS/EDF) through dedicated R&D units or services, responding to the demands of other services. These UQ activities have often emerged in well-established groups working in specific application domains (e.g., fluid dynamics, solid mechanics, electromagnetics, chemistry, material sciences, earth sciences, life sciences, ...), in response to UQ aspects related to these particular domains. We cite G. Iaccarino (Uncertainty Quantification Lab within the Center for Turbulence Research, Stanford University), Y. Marzouk (Aerospace Computational Design Laboratory, MIT), and K. Willcox (Institute for Computational Engineering and Sciences, University of Texas). The situation is globally similar in applied mathematics, where several groups develop advanced UQ methods within a broader research area (e.g., stochastic numerics, statistics, numerical analysis,...), sometimes with only a distant connection to engineering domains. For example, we can mention the research groups of M. Giles (Oxford), I. Bilionis (Purdue University), J. Garnier (Ecole Polytechnique), and R. Abgrall (University of Zurich).

The objective of Platon is to team up participants whose main interest is the development of UQ methodologies. While primarily targeting our current applications, our objective is to propose new applications through collaborations and progressive team development while maintaining UQ as the project's identity. This strategy gives the Team a somewhat unique position within the national and international research landscape. As far as computational mechanics and engineering are concerned, no group has been created with UQ management as the principal working area.

The identity of Platon is thus to be contrasted with other initiatives, including within Inria, which may have a UQ component, but within different methodological contexts and not as a central activity. For instance, some teams (e.g., SIERRA, TAO, SELECT, MODAL) develop statistical methods for data analysis, machine learning, and the treatment of large databases. Overall, the problems targeted in Platon are usually too costly, with high parametric dimension and few experimental data, so existing statistical methods cannot be reused "as is" and require dedicated approaches.

On the application side, there are already Inria teams working on CFD applications, some even incorporating uncertainty quantification and sensitivity analysis activities. We mention here AIRSEA, which focuses on oceanic and atmospheric flows, CARDAMOM on free-surface hydraulics, and ACUMES on unsteady models in traffic flow and biology. In contrast to our project, all these efforts primarily address challenges in their respective application areas.

Scientific achievements

Our research activity features two main axes. The first is related to methodological developments, while the second is oriented to UQ problems with industrial interests.

The first contribution concerns a computer model calibration technique accounting for model error. Building upon our previous work [8], we formulate a new methodology called the Complete Maximum a Posteriori (CMP) method. This improvement is embodied in two key properties. First and foremost, we have removed the point-mass distribution hypothesis upon which the KOH and FMP methods rely. Some examples and calculations show that such a hypothesis is never valid. The reformulated hypothesis is more general and less stringent; for this reason, we expect the method to provide a better approximation of the parameters' posterior distribution in a wider range of cases. The second improvement is the inclusion of the Hessian in the estimated parameters' posterior. In the case where the posterior has a single mode, both KOH and FMP will correctly identify this mode but tend to underestimate the weight of the tails, leading to a false-certitude effect. We have also illustrated that the correction introduced by the CMP method becomes larger as the residuals grow, showing that this can lead to an undesired waterbed effect that assigns more mass to the peaks instead of the tails. When the posterior has multiple modes, the correction allows for recovering the correct weight of these modes.

Concerning applications of Bayesian inference methodology to engineering problems, we have studied the internal resistance in the context of the thermal behavior of Li-ion battery cells [14]. We propose an innovative way to deal with the uncertainties related to this physical parameter using experimental data and numerical simulation. First, a CFD model is validated against an experimental configuration representing the behavior of heated Li-ion battery cells under constant discharging current conditions. Secondly, an Uncertainty Quantification-based methodology is proposed to represent the internal resistance and its inherent uncertainties. Thanks to an accurate and fast-to-compute surrogate model, the impact of those uncertainties on the temperature evolution of Li-ion cells is quantified. Bayesian inference of the internal resistance model parameters using experimental measurements is then performed, reducing the prediction uncertainty by almost 95% for some temperatures of interest. Finally, an enhanced internal resistance model is constructed by considering the dependence of the internal resistance on the state of charge and temperature. This model is implemented in the CFD code and used to model a full discharge of the Li-ion batteries. The resulting temperature evolutions computed with the two different resistance models are compared for low state-of-charge situations.

Another contribution to calibration concerns the use of neural networks for uncertainty quantification and calibration of one-dimensional arterial hemodynamics [16]. This task can be particularly complex when the available measurements are sparse, noisy, and indirectly related to the quantities of interest. The calibration of these models involves learning hundreds of parameters from observations and solving high-dimensional inverse problems. This study explores the use of Artificial Neural Networks (ANNs) for model parameter identification tasks. Firstly, we examine ANNs as surrogates for the forward mapping. When applied to a one-dimensional hemodynamics model of the human arterial system, ANNs are excellent surrogates for time-series outputs, with only a 3% prediction error. We explore gradient-based and gradient-free non-linear optimization methods and find that Tikhonov regularization of the gradient-based optimization significantly improves parameter identification. Finally, we investigate ANNs as surrogates for the inverse mapping from observations to model parameters. We find that the parameter identification accuracy of the inverse-surrogate ANN is comparable to that of maximum likelihood estimation (MLE). We also emphasize the importance of selecting informative observations for the inverse ANN surrogate. Choosing observations that have high global sensitivity to the sought-after parameters can greatly enhance prediction accuracy, especially when the training data is limited.
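
While the full pipeline of [16] is beyond a snippet, the Tikhonov-regularized, gradient-based identification step it builds on can be sketched on a toy forward model; the forward map, parameter values, and regularization weight below are all hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 50)

def forward(theta):
    """Toy stand-in for a 1D hemodynamics forward map (a pressure trace)."""
    return theta[0] * np.exp(-theta[1] * t) * np.cos(2 * np.pi * theta[2] * t)

theta_true = np.array([1.2, 0.8, 1.5])
y_obs = forward(theta_true) + 0.02 * rng.standard_normal(t.size)

theta0 = np.array([1.0, 1.0, 1.0])     # prior guess, also the regularization center
lam = 0.1                              # Tikhonov weight (assumed)

def residuals(theta):
    # data misfit stacked with the regularization term sqrt(lam)*(theta - theta0)
    return np.concatenate([forward(theta) - y_obs,
                           np.sqrt(lam) * (theta - theta0)])

fit = least_squares(residuals, theta0)
print("identified parameters:", fit.x, " true:", theta_true)
```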

Collaborations

For many years, we have maintained long-term partnerships with KAUST, the von Karman Institute for Fluid Dynamics (VKI), Politecnico di Milano, and CEA.

With KAUST, we are working on new stochastic particle tracking methods to identify and track oil spills in open waters, combining satellite images and uncertainties in predicted currents. We also develop new assimilation schemes, inference methods for fractional diffusion models, and the selection and reduction of observations. There are several joint publications and exchanges of students.

With VKI, we work on UQ methods and inverse problems for atmospheric re-entry and ablation problems. In terms of production, there are several joint publications and one joint PhD (M. Capriati).

With Politecnico di Milano, we have several activities in the aeronautical and energy fields. We work on the characterization of thermodynamic models with Bayesian approaches, uncertainty in turbulence models for RANS aerodynamic simulations, and multi-fidelity approaches. We are currently involved in the EU TRACES project. We also have a strong collaboration with Giulio Gori, a former member of Platon and now Assistant Professor at Politecnico di Milano.

With CEA Saclay, we have had a long-term collaboration for four years. Nicolas Leoni defended his thesis last year, and another student (Sanae Idrissi) is pursuing her PhD.

External support

  • MSCA Doctoral Network TRACES Project (2022-2026)
  • Industrial contracts with CEA
  • Industrial contract with 3DS

Self assessment

In addition to developing methods-oriented research, we proposed UQ methods tailored to specific applications in collaboration with other academic and industrial partners. This action has allowed us to position ourselves with high-impact papers in many application areas.

A weakness may be finding a balance between two different axes. The first axis concerns the development of high-level research from a methodological point of view, while the second involves collaborations with industrial partners within research contracts and European projects. We think that the team's current size does not fit well with this double effort in the long term. For this reason, recruiting new members seems mandatory to sustain a good balance between these two main axes of research.

8.2 Research axis 2: Solvers, Numerical Schemes and HPC

Personnel

Participants: P.M. Congedo, O. Le Maître, M. Duvillard, H. Dornier, C. Papagiannis, G. Lin.

Project-team positioning

Research on solvers, numerical schemes, and HPC algorithms specifically dedicated to UQ problems is scarce. Indeed, advanced sampling and stochastic estimation procedures, the subject of intensive ongoing research, rely on state-of-the-art deterministic solvers to generate the solution samples. To our knowledge, there is no research group (within or outside Inria) focusing entirely on the computational aspects of UQ problems. Groups producing computational utilities for UQ (e.g., Sandia's Dakota, OpenTurns) focus on the sampling part (statistical treatment), and the efficient generation of the samples is left to the user. In recent years, a few works have concerned Galerkin solvers, their preconditioning, and the adaptation of domain decomposition methods (DDM) for (usually elliptic) stochastic PDEs. We can mention activities in Manchester (preconditioning), Munich and Lausanne (DDM), and Bath (solvers for multi-level methods). In Platon, we try to exploit the structure of stochastic problems to propose new strategies for their resolution (Galerkin method) or for the generation of solution samples. These strategies can consist in adapting deterministic solvers to factorize the computational effort over multiple samples or, on the contrary, in defining entirely new solution procedures to better exploit parallelism in stochastic problems, beyond the independent resolution of independent samples. Our objective is to produce parallel and scalable methods for large-scale stochastic problems.

Devising solution methods tailored to the stochastic problem becomes more and more critical as the numerical complexity of the underlying deterministic problem increases. For elliptic problems, the availability of highly efficient deterministic solvers has somewhat limited the research on stochastic solvers. The situation is different for models based on fractional diffusion operators (in space or time), where the numerical difficulty of these operators has virtually prevented any work on problems with stochastic fractional and diffusion coefficients. A few years ago, KAUST (Omar Knio) and KFUPM (Kassem Mustapha) initiated a research program on fractional diffusion models. Platon is involved in this program to deal with the stochastic extensions. Several new numerical schemes and algorithms to solve deterministic fractional diffusion equations have been designed. These schemes are suitable for an extension to stochastic problems (e.g., allowing for spatially variable coefficients and achieving efficient scalability, enabling sampling methods and inverse problems).

Scientific achievements

First, we have finalized a paper on a second-order accurate numerical scheme for a time-fractional Fokker–Planck equation [9]. Here, a second-order accurate time-stepping scheme for solving a time-fractional Fokker–Planck equation of order α ∈ (0, 1), with a general driving force, is investigated.

Secondly, we have finalized a paper about continuous and discrete data assimilation with noisy observations for the Rayleigh-Bénard convection [7]. Continuous Data Assimilation (CDA) is a recently introduced downscaling algorithm that constructs an increasingly accurate representation of the system states by continuously nudging the large scales using coarse observations. We have introduced a Discrete Data Assimilation (DDA) algorithm as a downscaling algorithm based on CDA with discrete-in-time nudging. We then investigate the performance of the CDA and DDA algorithms for downscaling noisy observations of the Rayleigh-Bénard convection system in the chaotic regime.
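
The nudging mechanism at the heart of CDA can be sketched on the Lorenz-63 system, a standard low-dimensional stand-in for chaotic convection dynamics: the assimilated state is relaxed toward noisy observations of a single component. The gain mu and noise level below are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def lorenz(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, mu = 1e-3, 50.0                      # time step and nudging gain (assumed)
truth = np.array([1.0, 1.0, 1.0])
assim = np.array([8.0, -3.0, 20.0])      # deliberately wrong initial condition

for step in range(50_000):
    obs_x = truth[0] + 0.1 * rng.standard_normal()   # noisy observation of x only
    truth = truth + dt * lorenz(truth)
    # nudge only the observed component toward the data
    assim = assim + dt * (lorenz(assim) + mu * np.array([obs_x - assim[0], 0.0, 0.0]))

print("final state error:", np.linalg.norm(truth - assim))
```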

Third, we are developing a data assimilation method for particle-based simulations. Particle-based methods are simulation approaches that rely on Lagrangian representations capable of handling complex geometries with large deformations and changes in the shape of a continuum (such as fragmentation and free-surface flows). This work aims to propose new data assimilation methods adapted to particle-based simulations.

Another contribution concerns a new method for efficient CFD simulations with operating conditions variability. The method consists of building a unique adapted mesh that aims at minimizing the average error for random flow conditions. It is an iterative construction based on estimating the local mean error from a reduced set of sample conditions. The characteristics and performance of the method are investigated on a one-dimensional Burgers equation and a two-dimensional Euler Scramjet configuration.

Another contribution concerns the acceleration of numerical simulations in the context of the hypersonic planetary reentry problem, whose simulation involves coupling fluid dynamics and chemical reactions [11]. Simulating the chemical reactions takes most of the computational time but cannot be avoided if accurate predictions are desired. We face a trade-off between cost efficiency and accuracy: the numerical scheme has to be sufficiently efficient to be used in an operational context, but accurate enough to predict the phenomenon faithfully. To tackle this trade-off, we design a hybrid numerical scheme coupling a traditional fluid dynamics solver with a neural network approximating the chemical reactions. We rely on the accuracy and dimension-reduction power of neural networks in a big-data context, and on the efficiency stemming from their matrix-vector structure, to achieve important acceleration factors (×10 to ×18.6). This work explains how we design such cost-effective hybrid numerical schemes in practice. Above all, we describe methodologies to ensure accuracy guarantees, allowing us to go beyond traditional surrogate modeling and use these schemes as references.

Tackling new machine learning problems with neural networks always means optimizing numerous hyperparameters that define their structure and strongly impact their performance. In [10], we study goal-oriented sensitivity analysis, based on the Hilbert-Schmidt Independence Criterion (HSIC), for hyperparameter analysis and optimization. Hyperparameters live in spaces that are often complex and awkward. They can be of different natures (categorical, discrete, boolean, continuous), interact, and have inter-dependencies. All this makes it non-trivial to perform classical sensitivity analysis. We alleviate these difficulties to obtain a robust analysis index that can quantify hyperparameters' relative impact on a neural network's final error. This valuable tool allows us to better understand hyperparameters and to make hyperparameter optimization more interpretable. We illustrate the benefits of this knowledge in the context of hyperparameter optimization and derive an HSIC-based optimization algorithm that we apply to MNIST and CIFAR, classical machine learning data sets, but also to the approximation of the Runge function and the solution of the Bateman equations, both of interest for scientific machine learning. This method yields competitive and cost-effective neural networks.
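
The basic ingredient of such an analysis is an empirical HSIC estimator. A minimal sketch with Gaussian kernels and the median-heuristic bandwidth is given below, on synthetic data where one "hyperparameter" drives the error and the other is inert.

```python
import numpy as np

def gaussian_gram(v):
    """Gram matrix with a Gaussian kernel, median-heuristic bandwidth."""
    d2 = (v[:, None] - v[None, :]) ** 2
    bw = np.median(d2[d2 > 0])
    return np.exp(-d2 / bw)

def hsic(x, y):
    """Biased V-statistic estimator: HSIC = trace(K H L H) / n^2."""
    n = x.size
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    K, L = gaussian_gram(x), gaussian_gram(y)
    return np.trace(K @ H @ L @ H) / n**2

rng = np.random.default_rng(7)
h1, h2 = rng.uniform(-1, 1, (2, 500))        # two synthetic "hyperparameters"
error = np.sin(3 * h1) + 0.05 * rng.standard_normal(500)   # h2 is inert
print("HSIC(h1, error) =", hsic(h1, error))  # large: h1 drives the error
print("HSIC(h2, error) =", hsic(h2, error))  # near zero: h2 is irrelevant
```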

Collaborations

With KAUST, we worked with Omar Knio on numerical schemes for fractional diffusion equation and their extension to the stochastic case.

With CEA-CESTA, we worked on scientific machine learning techniques within the thesis of Paul Novello.

External support

  • Industrial contracts with CEA
  • Industrial contract with 3DS
  • Industrial contract with Framatome

Self assessment

Concerning the fractional diffusion models, we are already engaged in extending the hierarchical matrix method to solve the spatial stochastic fractional diffusion equation with a Galerkin method, and in developing sparse storage strategies to reduce the complexity of the stochastic time-fractional problem. This is promising and very original research. We (Platon) depend on the collaboration to access some of the numerical utilities (H-matrices).

8.3 Research axis 3: Optimization under uncertainty

Personnel

Participants: P.M. Congedo, O. Le Maître, M. Pocheau, Z. Jones, H. Nicolas.

Project-team positioning

Optimization under uncertainty is an important axis of research, due to both the ever-growing computational power available and the need for efficiency, reliability, and cost optimality. The presence of uncertainty can make the solution of a deterministic optimization problem suboptimal or even infeasible. Since this behavior can strongly impact the design performance, both academia and industry have focused their efforts on developing optimization-under-uncertainty methodologies. Optimization under uncertainty is a broad domain including several modeling paradigms, such as stochastic programming, Reliability-Based Design Optimization (RBDO, which deals with probabilistic and worst-case feasibility constraints), and Robust Design Optimization (RDO, where the deterministic objectives are replaced with averaged or worst-case ones, possibly in a multi-objective context such as the classical Taguchi optimization).

Note that most of the groups active in optimization under uncertainty also have strong activities in uncertainty quantification, so there is an overlap with the state of the art presented in Section 8.1. The Optimization & Uncertainty Quantification Group of Sandia-Albuquerque aims at providing advanced methods for the resolution of optimization-under-uncertainty problems. We also mention optimization-under-uncertainty activities that emerged in well-established groups working in specific application domains, such as the Aerospace Computational Design Laboratory at MIT and the Institute for Computational Engineering and Sciences at the University of Texas. In France, we can mention the OQUAIDO Chair (Optimization and QUAntification of Uncertainties), hosted by the École des Mines de Saint-Étienne from 2016 to 2021, which aimed to bring together academic and industrial partners to solve problems related to uncertainty quantification, inversion, and optimization.

In the context of optimization under uncertainty, Platon is devoted to developing novel methods to tackle constrained multi-objective optimization, with specific attention to cost-efficient and mainly derivative-free strategies. Specifically, we look for an optimal trade-off between computational cost and accuracy in the case of problems involving complex and expensive numerical solvers. Platon is also exploring dedicated representations and the design of computer experiments to obtain the best estimation at the lowest cost (or for a prescribed computational budget) for nontrivial goals, specifically optimization and reliability problems where the accuracy needed is not uniform, possibly unknown a priori, and to be estimated as the construction proceeds. More recently, we have also worked on sample average approximation methods using risk-averse stochastic programming formulations.

Several Inria teams have optimization as a core activity, for example BONUS, EDGE, INOCS, POLARIS, and RANDOPT. The main difference is that we are not interested in working on generic optimization algorithms, as mentioned before. In our past and current works, we use standard optimization algorithms, mainly for continuous optimization. We focus our attention on dedicated representations to efficiently estimate uncertainty-based metrics within an optimization problem. The Inria teams POLARIS and INOCS work on innovative methods for stochastic optimization that are quite different from those proposed by Platon.

Scientific achievements

The first achievement concerns a novel methodology for the non-parametric estimation of the approximation error in robustness and reliability measures, employed in the context of constrained multi-objective optimisation under uncertainty [13]. These approximations with tunable accuracy permit the capture of the Pareto front in a parsimonious way and can be exploited within an adaptive refinement strategy. First, we illustrate an efficient approach for obtaining joint representations of the robustness and reliability measures, allowing sharper discrimination of Pareto-optimal designs. A specific surrogate model of these objectives and constraints is then proposed to accelerate the optimisation process. Secondly, we propose an adaptive refinement strategy, using these tunable-accuracy approximations to drive the computational effort towards the computation of the optimal area. To this end, an adapted Pareto dominance rule and a Pareto-optimal probability computation are formulated. The performance of the proposed strategy is assessed on several analytical test cases against classical approaches. We also illustrate the method on an engineering application, performing shape optimisation under uncertainty of an Organic Rankine Cycle turbine.

The second achievement concerns an improvement of the stochastic multi-gradient algorithm (SMGDA), which allows minimizing the expected objectives without having to compute them directly. However, a bias in the algorithm and the inherent noise in stochastic gradients cause the algorithm to converge to only a subset of the whole Pareto front, limiting its use. We reduce the bias of the stochastic multi-gradient calculation using an exponential smoothing technique and promote the exploration of the Pareto front by adding non-vanishing noise tangential to the front [17]. We prove that this algorithm, Transverse Brownian Motion, generates samples in a concentrated set containing the whole Pareto front. Finally, we estimate the set of Pareto-optimal design points using only the sequence generated during optimization, while also providing bootstrapped confidence intervals using a nearest-neighbor model calibrated with a novel procedure based on the hypervolume metric. Our proposed method allows for the estimation of the whole Pareto front using significantly fewer evaluations of the random quantities of interest compared to a direct sample-based estimation, which is valuable in the context of costly model evaluations. We illustrate the efficacy of our approach with numerical examples in increasing dimensions and discuss how to apply the method to more complex problems.
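
For two objectives, the common-descent direction underlying multi-gradient methods has a closed form: the minimum-norm convex combination of the two gradients. The sketch below combines it with exponential smoothing of stochastic gradients, in the spirit of, but not identical to, the algorithm of [17]; the toy quadratic objectives, smoothing factor, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def grad_f1(x):  # stochastic gradient of E[||x - a||^2], a ~ N(+1, 0.1)
    return 2 * (x - (1.0 + 0.1 * rng.standard_normal(x.size)))

def grad_f2(x):  # stochastic gradient of E[||x - b||^2], b ~ N(-1, 0.1)
    return 2 * (x - (-1.0 + 0.1 * rng.standard_normal(x.size)))

x = rng.standard_normal(2)
g1 = g2 = np.zeros_like(x)
beta, lr = 0.95, 1e-2
for it in range(5_000):
    # exponential smoothing reduces the noise of the multi-gradient estimate
    g1 = beta * g1 + (1 - beta) * grad_f1(x)
    g2 = beta * g2 + (1 - beta) * grad_f2(x)
    # min-norm element of the convex hull of {g1, g2}: closed form in 1 parameter
    diff = g1 - g2
    t = np.clip(diff @ g1 / max(diff @ diff, 1e-12), 0.0, 1.0)
    d = (1 - t) * g1 + t * g2
    x = x - lr * d

print("point on the (here, segment-shaped) Pareto set:", x)
```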

A third achievement concerns an active learning strategy for joint surrogate model construction with compatibility conditions, with application to Velocity Prediction Programs (VPP) in sailing yacht design. Predicting the maximal steady velocity of a yacht involves solving constrained optimization problems. These problems have a prohibitive computational cost when using high-fidelity global modeling of the yacht. This difficulty has motivated the introduction of modular approaches, decomposing the global model into subsystems modeled independently and approximated by surrogate models (response surfaces). The maximum boat speed for prescribed conditions solves an optimization problem for the trimming parameters of the model, constrained by compatibility conditions between the subsystems' surrogate solutions (e.g., the yacht equilibrium). The accuracy of the surrogates is then critical for the quality of the resulting VPP. We worked with Gaussian Process (GP) models of the subsystems and introduced an original sequential Active Learning Method (ALM) for their joint construction [12]. Our ALM exploits the probabilistic nature of the GP models to decide the enrichment of the training sets, using an infilling criterion that combines the predictive uncertainty of the surrogate models and the likelihood of equilibrium at every input point. The resulting strategy enables the concentration of the computational effort around the manifolds where equilibrium is satisfied. The results presented compare ALM with a standard (uninformed) Quasi-Monte Carlo method, which samples the input space of the subsystems uniformly. ALM surrogates have higher accuracy in the equilibrium regions for equal construction cost, with improved mean prediction and reduced prediction uncertainty. We further investigate the effect of the prediction uncertainty on the numerical VPP and on a routing problem.
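
The flavour of the infilling criterion can be conveyed with a single GP surrogate of a hypothetical equilibrium residual: candidate points are scored by the product of the predictive uncertainty and the likelihood that the residual vanishes, which concentrates samples near the equilibrium manifold. The residual function and candidate pool below are illustrative, not those of the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(9)

def residual(x):
    """Placeholder equilibrium residual; its zero set is the manifold of interest."""
    return x[:, 0] ** 2 + x[:, 1] - 1.0

X = rng.uniform(-2, 2, (10, 2))                    # initial training set
for it in range(30):
    gp = GaussianProcessRegressor(RBF(1.0), normalize_y=True).fit(X, residual(X))
    cand = rng.uniform(-2, 2, (2_000, 2))          # random candidate pool
    mu, sd = gp.predict(cand, return_std=True)
    # infill score: predictive std times likelihood of being at equilibrium
    score = sd * norm.pdf(0.0, loc=mu, scale=np.maximum(sd, 1e-9))
    X = np.vstack([X, cand[np.argmax(score)]])

print("fraction of added points near the equilibrium manifold:",
      np.mean(np.abs(residual(X[10:])) < 0.3))
```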

The fourth achievement concerns the robust optimization of a thermal anti-ice protection system in uncertain cloud conditions [6]. The considered uncertainty regards a lack of knowledge concerning the characteristics of the cloud, i.e., the liquid water content and the median volume diameter of the water droplets, and the accuracy of measuring devices, i.e., the static temperature probe. Uncertain parameters are modeled as random variables, and two sets of bounds are investigated. A forward uncertainty propagation analysis is carried out using a Monte Carlo approach exploiting surrogate models. The optimization framework relies on a gradient-free algorithm (mesh adaptive direct search), and two different objective functions are considered, namely, the 95% quantile of the freezing mass rate and the statistical frequency of the fully evaporative operating regime. The framework is applied to a reference test case, revealing the potential to improve the heat flux distribution of the baseline design. A new heat flux distribution is proposed; it presents a more efficient use of thermal power, increasing flight safety even in non-nominal environmental conditions.

Collaborations

We worked with DLR (German Aerospace Center) within the EU NEXTAIR project on robust optimization, notably through the thesis of Zachary Jones.

Several collaborations are with Politecnico di Milano within the TRACES project.

The collaboration with KAUST, and in particular with Ricardo Lima, has brought specific stochastic optimization problems with structures that differ considerably from our other research (e.g., two-stage optimization, introduction of recourse, discrete optimization, ...). These problems also involve different risk mitigation approaches. Working on these problems, we have learned alternative formulations and uncertainty treatments that we plan to apply to engineering applications. Similarly, we have contributed sampling and uncertainty modelling strategies that are original for these types of problems.

External support

  • MSCA Doctoral Network TRACES Project (2022-2026)
  • EU NEXTAIR Project (2022-2026)
  • Industrial contract with Bañulsdesign

Self assessment

Concerning strong points, we have proposed methods advancing the state of the art in different aspects of optimization under uncertainty, which are topics of great interest in academia. At the same time, we have consolidated industrial collaborations that have allowed us to develop high-impact projects with a relevant societal impact.

Concerning a potential weakness, we think it is particularly challenging, given the size of the team, to keep proposing innovative methods and, at the same time, to contribute to projects at the industrial and European scale. New recruitments seem necessary to ensure this twofold effort.

9 Bilateral contracts and grants with industry

Participants: P.M. Congedo, O. Le Maitre, M. Pocheau, M. Benmahdi, S. Idrissi, H. Khatouri.

9.1 Bilateral contracts with industry

9.1.1 Bañulsdesign

Since 2019, the team has benefited from a "contrat d'accompagnement" for the CIFRE thesis of Malo Pocheau on the modelling of foilers.

9.1.2 3DS

Since 2022, the team has benefited from a "contrat d'accompagnement" for the thesis of Meryem Benmahdi.

9.1.3 CEA

Since 2022, the team has benefited from a "contrat d'accompagnement" for the thesis of Marius Duvillard and from a "contrat d'accompagnement" for the thesis of Sanae Idrissi.

9.1.4 Framatome

Since 2023, the team has benefited from a research contract in the context of the Pré-Defi project with Framatome.

10 Partnerships and cooperations

Participants: P.M. Congedo, O. Le Maître, Z. Jones, O. Kahol, H. Nicolas.

10.1 International research visitors

10.1.1 Visits of international scientists

Other international visits to the team
Guglielmo Scovazzi
  • Status:
    Professor
  • Institution of origin:
    Duke University
  • Country:
    USA
  • Dates:
    January 2023
  • Context of the visit:
    Research Collaboration
  • Mobility program/type of mobility:
    Research stay
Maria Han Veiga
  • Status:
    Assistant Professor
  • Institution of origin:
    Ohio State University
  • Country:
    USA
  • Dates:
    March 2023
  • Context of the visit:
    Research Collaboration
  • Mobility program/type of mobility:
    Research stay
Giulio Gori
  • Status:
    Assistant Professor
  • Institution of origin:
    Politecnico di Milano
  • Country:
    Italy
  • Dates:
    April 2023
  • Context of the visit:
    Research Collaboration
  • Mobility program/type of mobility:
    Research stay
Alberto Guardone
  • Status:
    Professor
  • Institution of origin:
    Politecnico di Milano
  • Country:
    Italy
  • Dates:
    February and November 2023
  • Context of the visit:
    Research Collaboration
  • Mobility program/type of mobility:
    Research stay
Omar Knio
  • Status:
    Professor
  • Institution of origin:
    KAUST
  • Country:
    Saudi Arabia
  • Dates:
    June 2023
  • Context of the visit:
    Research Collaboration
  • Mobility program/type of mobility:
    Research stay

10.2 European initiatives

10.2.1 Horizon Europe

NEXTAIR

NEXTAIR project on cordis.europa.eu

  • Title:
    NEXTAIR - multi-disciplinary digital - enablers for NEXT-generation AIRcraft design and operations
  • Duration:
    From September 1, 2022 to August 31, 2025
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • THE UNIVERSITY OF SHEFFIELD (USFD), United Kingdom
    • IMPERIAL COLLEGE OF SCIENCE TECHNOLOGY AND MEDICINE, United Kingdom
    • AIRBUS OPERATIONS SAS (AIRBUS OPERATIONS), France
    • ETHNICON METSOVION POLYTECHNION (NATIONAL TECHNICAL UNIVERSITY OF ATHENS - NTUA), Greece
    • SAFRAN SA, France
    • UNIVERSITA DEGLI STUDI DI CAGLIARI (UNICA), Italy
    • OFFICE NATIONAL D'ETUDES ET DE RECHERCHES AEROSPATIALES (ONERA), France
    • DEUTSCHES ZENTRUM FUR LUFT - UND RAUMFAHRT EV (DLR), Germany
    • FUNDACION CENTRO DE TECNOLOGIAS DE INTERACCION VISUAL Y COMUNICACIONES VICOMTECH (VICOM), Spain
    • DASSAULT AVIATION, France
    • ASOUTI V & SIA OE, Greece
    • OPTIMAD ENGINEERING SRL (Optimad srl), Italy
    • IRT ANTOINE DE SAINT EXUPERY, France
    • ERDYN CONSULTANTS SARL, France
    • ROLLS-ROYCE PLC, United Kingdom
  • Inria contact:
    Pietro Congedo
  • Coordinator:
  • Summary:

    Radical changes in aircraft configurations and operations are required to meet the target of climate-neutral aviation. To foster this transformation, innovative digital methodologies are of utmost importance to enable the optimisation of aircraft performance.

    NEXTAIR will develop and demonstrate innovative design methodologies, data-fusion techniques and smart health-assessment tools enabling the digital transformation of aircraft design, manufacturing and maintenance. NEXTAIR proposes digital enablers covering the whole aircraft life-cycle, devoted to easing the maturation of breakthrough technologies, their flawless entry into service and smart health assessment. They will be demonstrated in 8 industrial test cases, representative of multi-physics industrial design and maintenance problems and of environmental challenges of interest to aircraft and engine manufacturers.

    NEXTAIR will increase high-fidelity modelling and simulation capabilities to accelerate and derisk new disruptive configurations and breakthrough technologies design. NEXTAIR will also improve the efficiency of uncertainty quantification and robust optimisation techniques to effectively account for manufacturing uncertainty and operational variability in the industrial multi-disciplinary design of aircraft and engine components. Finally, NEXTAIR will extend the usability of machine learning-driven methodologies to contribute to aircraft and engine components' digital twinning for smart prototyping and maintenance.

    NEXTAIR brings together 16 partners from 6 countries specialised in various disciplines: digital tools, advanced modelling and simulation, artificial intelligence, machine learning, aerospace design, and innovative manufacturing. The consortium includes 9 research organisations, 4 leading aeronautical industries providing digital-physical scaled demonstrators of aircraft and engines, and 2 high-tech SMEs providing expertise in industrial scientific computing and data intelligence.

TRACES

TRACES project on cordis.europa.eu

  • Title:
    TRAining the next generation of iCE researcherS
  • Duration:
    From December 1, 2022 to November 30, 2026
  • Partners:
    • ECOLE POLYTECHNIQUE (EP), France
    • SAFRAN AEROSYSTEMS (SAFRAN AEROSYSTEMS SAS), France
    • AIRBUS HELICOPTERS, France
    • SAFRAN AIRCRAFT ENGINES, France
    • INSTITUT POLYTECHNIQUE DE PARIS, France
    • AIRBUS OPERATIONS SAS (AIRBUS OPERATIONS), France
    • INSTITUT SUPERIEUR DE L'AERONAUTIQUE ET DE L'ESPACE (ISAE-Supaero), France
    • AIRBUS DEFENCE AND SPACE GMBH, Germany
    • TECHNISCHE UNIVERSITAET BRAUNSCHWEIG, Germany
    • OFFICE NATIONAL D'ETUDES ET DE RECHERCHES AEROSPATIALES (ONERA), France
    • ACCADEMIA EUROPEA DI BOLZANO (Eurac Research), Italy
    • DASSAULT AVIATION, France
    • POLITECNICO DI MILANO (POLIMI), Italy
    • TECHNISCHE UNIVERSITAT DARMSTADT, Germany
    • LEONARDO - SOCIETA PER AZIONI (LEONARDO), Italy
    • GENERAL ELECTRIC DEUTSCHLAND HOLDING GMBH, Germany
  • Inria contact:
    Pietro Congedo
  • Coordinator:
  • Summary:
    In 2019, the European Aviation Safety Agency (EASA) identified in-flight icing as a priority 1 issue for large aeroplanes, with the aggregated European Risk Classification Scheme score being amongst the highest of all safety issues. In-flight icing can occur when an aircraft flies through clouds of supercooled droplets, namely, drops of liquid water with a temperature below the freezing point, which freeze upon impact. Aircraft icing can lead to a reduction of visibility, damage due to ice shedding, blockage of probes and static vents, reduced flight performance, engine power loss, etc. In addition to safety concerns, in-service icing events can lead to major disruptions of air operations and aircraft maintenance. The more frequent occurrence of severe thunderstorms due to climate change results in more in-flight accidents, also at cruising altitudes, with more than 100 engine failures in recent years. Icing-related issues are also being observed in newer, more efficient aircraft engines due to their lower operating temperature. The main goal of the TRACES EJD is to provide high-level training in the field of in-flight icing, delivering a new generation of high-achieving Early Stage Researchers in the diverse disciplines necessary for mastering the complexity of ice accretion and its mitigation in aircraft and aero-engines. This goal will be achieved by a unique combination of hands-on research training, non-academic placements at major EU aviation industries, and courses and workshops on scientific and complementary so-called soft skills, facilitated by the academic/non-academic composition of the consortium. Innovative Ice Detection and Ice Protection Systems based on disruptive technologies will be designed by the ESRs during the Project Working Groups. EASA will provide training on certification procedures and, together with major industries in the field, will assess the ESRs' projects during a team Design and Certify exercise.

10.3 National initiatives

10.3.1 ANR LabCom

MATritime

  • Title:
    Optimisation Robuste et Jumeaux Numériques pour la Transition Maritime - MATritime
  • Duration:
    From April 1, 2023 to August 1, 2027
  • Partners:
    • INSTITUT NATIONAL DE RECHERCHE EN INFORMATIQUE ET AUTOMATIQUE (INRIA), France
    • Bañulsdesign, France
  • Inria contact:
    Olivier Le Maître
  • Coordinator:
    INRIA
  • Summary:
    The maritime sector faces significant challenges: imposed reductions in the energy footprint of maritime transport, the advent of new modes of propulsion (sail, hydrogen), automation, and digitization... At the same time, the numerical/digital revolution in naval design processes requires a great mastery of multiple complex domains specific to uncertain environments made up of the sea, the atmosphere, and their interface. New advanced procedures are needed to meet the challenges of a more sustainable, greener, and robust maritime industry. Meeting these challenges requires a considerable evolution of engineering practices, with the establishment of dedicated processes in Computational Science and Engineering (CSE) based on advanced digital simulation technologies combining physical and statistical models.

    Indeed, even if computing resources increase, the limitations of the physical models and the cost of high-fidelity approaches restrict the simulations to a few nominal configurations. However, concentrating the simulation effort on a nominal system may be insufficient if the real-world system differs from the simulated one (due to manufacturing tolerances, random intrinsic effects, model error, poorly known environments, ...). In these situations, it is crucial to objectively quantify the uncertainties of the numerical predictions induced by the system's specification and model errors, and to account for all these uncertainties during analyses and decision-making processes. This characterization makes it possible to design more robust systems reaching better levels of performance in actual conditions.

    The project proposes to develop a holistic approach to uncertainties by equipping numerical predictions with probability laws. Depending on the quality of the probabilistic representation, the computational overhead to estimate the prediction uncertainty can be very large. For example, Monte Carlo sampling methods require many simulations to estimate the variance of predictions, with a prohibitive cost when applied directly to detailed physical models. To overcome these limitations without renouncing precise physics, one has to resort to efficient approaches producing probabilistic predictions at an acceptable cost. For this, we plan to develop methodologies closely associating physical and statistical modeling (e.g., multi-fidelity and multilevel Monte Carlo methods, surrogate models, design of numerical experiments). All these methods, as opposed to purely statistical methods (such as Artificial Intelligence), incorporate physical simulations into the statistical processing producing the prediction; in return, their development requires a great deal of interaction with the experts in physical simulations. Our objective will be to deploy these numerical approaches and propose advanced uncertainty analyses, robust predictions, and design strategies for maritime applications. These complex applications will lead to developing research in robust multidisciplinary (subsystem-based) and multi-objective design strategies, covering ship design from component to system optimization. We will also set up a prototype of a ship's digital twin, integrating models and data to support the digitization of the maritime world and prepare future tools for operational issues (optimization of missions, routes, maintenance operations, ...).
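
As a minimal illustration of the multilevel Monte Carlo idea invoked above, the sketch below telescopes the expectation across a toy hierarchy of model fidelities (our own stand-in for solvers at increasing resolution), so that most samples are drawn from the cheap coarse model and only a few from the expensive fine one.

    import numpy as np

    rng = np.random.default_rng(3)

    def model(xi, level):
        # Toy hierarchy: the level-dependent term mimics a discretization
        # error that shrinks geometrically with refinement.
        return np.sin(xi) + 0.3 * 2.0 ** (-level) * np.cos(5.0 * xi)

    # Many samples on the cheap coarse levels, few on the expensive fine ones.
    levels = [0, 1, 2, 3]
    n_samples = [4000, 1000, 250, 60]

    estimate = 0.0
    for ell, n in zip(levels, n_samples):
        xi = rng.uniform(0.0, np.pi, n)  # inputs shared by consecutive levels
        if ell == 0:
            estimate += model(xi, 0).mean()  # plain MC on the coarsest level
        else:
            # Telescopic correction E[P_l - P_{l-1}], computed with correlated
            # samples so its variance (and sample requirement) is small.
            estimate += (model(xi, ell) - model(xi, ell - 1)).mean()

    print("MLMC estimate of E[P]:", estimate)

The estimator is unbiased with respect to the finest level: the coarse-level mean and the telescopic corrections sum to the fine-level expectation, at a fraction of the cost of sampling the fine model directly.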

11 Dissemination

11.1 Promoting scientific activities

11.1.1 Scientific events: selection

Chair of conference program committees
  • Pietro Marco Congedo has served on the scientific and organizing committees of the 2nd European Symposium on Laminar/Turbulent Transition in Hypersonic Regime (Bordeaux, 2023).
  • Pietro Marco Congedo has served on the scientific committee of the Finite Volumes for Complex Applications 10 conference (FVCA10) (Strasbourg, November 2023).
  • Pietro Marco Congedo has been the Chair of the Inria-CWI Workshop, held at CWI (Amsterdam, 2023).
  • Olivier Le Maître has served on the scientific committee of the UNCECOMP 2023 Conference (Athens, June 2023).

11.1.2 Journal

Member of the editorial boards
  • Olivier Le Maître is a member of the editorial board of the International Journal for Uncertainty Quantification.
  • Pietro Marco Congedo is an Editor of the journal Mathematics and Computers in Simulation (MATCOM), published by Elsevier.

11.1.3 Invited talks

  • Pietro Marco Congedo has given a seminar at the GDR Mascot-num workshop "Atelier sur la calibration/validation de code numérique" on May 31, 2023.
  • Pietro Marco Congedo has given a seminar at the Journée de Rentrée de l'EDMH on October 20, 2023.
  • Olivier Le Maître has given an invited talk at the Ciroquo Workshop on May 24, 2023.

11.1.4 Research administration

  • Pietro Marco Congedo is the Scientific Director of the Inria International Lab CWI-Inria.
  • Olivier Le Maître is the Scientific Director of the MATritime Labcom.

11.2 Teaching - Supervision - Juries

Teaching at University

  • P.M. Congedo, 2023: ENSTA ParisTech, Palaiseau, Graduate level (20h/y), Numerical methods in Fluid Mechanics.
  • O. Le Maître, 2023: Université Paris-Saclay, Doctoral School SMEMAG (22h/y), Uncertainty Quantification Methods.

11.2.1 Supervision

  • Pietro Marco Congedo is the co-advisor of the thesis of Michele Capriati in collaboration with von Karman Institute for Fluid-dynamics (Belgium).
  • Olivier Le Maître is the advisor of the thesis of Malo Pocheau in collaboration with Bañulsdesign.
  • Olivier Le Maître is the co-advisor of the thesis of Marius Duvillard in collaboration with CEA Cadarache.
  • Olivier Le Maître is the co-advisor of the thesis of Nadège Polette in collaboration with CEA DAM.
  • Pietro Marco Congedo and Olivier Le Maître are advisors of the thesis of Meryem Benmahdi, in collaboration with 3DS.
  • Pietro Marco Congedo and Olivier Le Maître are advisors of the thesis of Sanae Idrissi Janati, in collaboration with CEA Saclay.
  • Pietro Marco Congedo and Olivier Le Maître are advisors of the thesis of Zachary Jones.
  • Pietro Marco Congedo and Olivier Le Maître are advisors of the thesis of Christos Papagiannis, in collaboration with LEGI Lab.
  • Pietro Marco Congedo and Olivier Le Maître are advisors of the thesis of Hugo Dornier, in collaboration with ONERA.
  • Pietro Marco Congedo and Olivier Le Maître are advisors of the thesis of Hugo Nicolas, in collaboration with Bañulsdesign (MATritime project).
  • Olivier Le Maître has been co-advisor of the thesis of Nicolas Venkovic, in collaboration with CERFACS.

11.2.2 Juries

  • Pietro Marco Congedo has served as committee head in the PhD of Ludovic Coelho (March 9th, 2023) at UP-Saclay.
  • Pietro Marco Congedo has served as preliminary examiner in the PhD of Eero Immonen (April 2023) from LUT University (Finland).
  • Pietro Marco Congedo has served as committee member in the PhD of Nicolas Venkovic (Sept. 11th, 2023) at Université de Bordeaux.
  • Olivier Le Maître has served as committee head in the PhD of Fatia Oukali (Feb. 6th, 2023) at Université Paris Cité.
  • Olivier Le Maître has served as reviewer in the PhD of Nicolas Vauchel (Mar. 14th, 2023) at Université de Lille.
  • Olivier Le Maître has served as reviewer in the HdR of Azouz Naoufel (Mar. 20th, 2023) at UP-Saclay.

11.3 Popularization

11.3.1 Internal or external Inria responsibilities

  • Pietro Marco Congedo is the Coordinator of "Maths/Engineering" Program of the Labex Mathématiques Hadamard (IPP and Paris-Saclay University), since 2022.
  • Olivier Le Maître is member of the Conseil du Laboratoire du CMAP (Ecole Polytechnique, IPP).
  • Olivier Le Maître is the Deputy Director of the Ecole Doctorale de Mathématiques Hadamard (EDMH).
  • Olivier Le Maître is member of the PhD Track Program Committee at IP-Paris.
  • Pietro Marco Congedo is the coordinator of the "Pôle Analyse" of CMAP Lab (Ecole Polytechnique, IPP).
  • Olivier Le Maître is the corresponding member of the Inria SIF center with the French Agency for Math and Industry (AMIES), since 2019.

12 Scientific production

12.1 Major publications

12.2 Publications of the year

International journals

International peer-reviewed conferences

  • 15 M. Capriati, A. del Val, T. E. Schwartzentruber, T. K. Minton, P. M. Congedo and T. E. Magin. Bayesian calibration of a finite-rate nitridation model from molecular beam and plasma wind tunnel experiments. Aerospace Europe Conference 2023 – Joint 10th EUCASS – 9th CEAS Conference, Lausanne, Switzerland, July 2023. HAL, DOI.

Edition (books, proceedings, special issue of a journal)

  • 16 M. Benmahdi, P. M. Congedo and O. Le Maître. Uncertainty quantification and calibration of one-dimensional arterial hemodynamics. Computational Fluids Conference, April 2023. HAL.

Reports & preprints

  1. EPIC: Industrial or Commercial Public Entity.