
2024 Activity Report
Project-Team AIRSEA

RNSR: 201521159N
  • Research center: Inria Centre at Université Grenoble Alpes
  • In partnership with: Université de Grenoble Alpes, CNRS
  • Team name: Mathematics and computing applied to oceanic and atmospheric flows
  • In collaboration with: Laboratoire Jean Kuntzmann (LJK)
  • Domain: Digital Health, Biology and Earth
  • Theme: Earth, Environmental and Energy Sciences

Keywords

Computer Science and Digital Science

  • A3.1.8. Big data (production, storage, transfer)
  • A3.4.1. Supervised learning
  • A3.4.2. Unsupervised learning
  • A3.4.5. Bayesian methods
  • A3.4.6. Neural networks
  • A3.4.7. Kernel methods
  • A3.4.8. Deep learning
  • A6.1.1. Continuous Modeling (PDE, ODE)
  • A6.1.2. Stochastic Modeling
  • A6.1.4. Multiscale modeling
  • A6.1.5. Multiphysics modeling
  • A6.2.1. Numerical analysis of PDE and ODE
  • A6.2.4. Statistical methods
  • A6.2.6. Optimization
  • A6.2.7. High performance computing
  • A6.3.1. Inverse problems
  • A6.3.2. Data assimilation
  • A6.3.4. Model reduction
  • A6.3.5. Uncertainty Quantification
  • A6.4.6. Optimal control
  • A6.5.2. Fluid mechanics
  • A6.5.4. Waves

Other Research Topics and Application Domains

  • B3.2. Climate and meteorology
  • B3.3.2. Water: sea & ocean, lake & river
  • B3.3.4. Atmosphere
  • B3.4.1. Natural risks
  • B4.3.2. Hydro-energy
  • B4.3.3. Wind energy
  • B9.11.1. Environmental risks

1 Team members, visitors, external collaborators

Research Scientists

  • Arthur Vidard [Team leader, INRIA, Researcher, HDR]
  • Laurent Debreu [INRIA, Senior Researcher, HDR]
  • Eugene Kazantsev [INRIA, Researcher]
  • Florian Lemarié [INRIA, Researcher]
  • Gurvan Madec [CNRS, Senior Researcher, HDR]
  • Olivier Zahm [INRIA, Researcher]

Faculty Members

  • Elise Arnaud [UGA, Associate Professor]
  • Éric Blayo [UGA, Professor, HDR]
  • Christine Kazantsev [UGA, Associate Professor]
  • Clémentine Prieur [UGA, Professor, HDR]
  • Martin Schreiber [UGA, Professor]

Post-Doctoral Fellows

  • Alexis Anagnostakis [INRIA, Post-Doctoral Fellow, from Sep 2024 until Nov 2024]
  • Alexis Anagnostakis [UGA, until Aug 2024]
  • Adama Barry [IFPEN]
  • Valentin Breaz [INRIA, Post-Doctoral Fellow, until Jun 2024]
  • Hugo Brunie [UGA, Post-Doctoral Fellow]
  • Simon Clement [INRIA, Post-Doctoral Fellow, until Jun 2024]

PhD Students

  • Lorenzo Calzolari [IFPEN, from Nov 2024]
  • Qiao Chen [INRIA, from Oct 2024]
  • Qiao Chen [UGA, until Sep 2024]
  • Gabriel Derrida [INRIA]
  • Clément Duhamel [UGA, ATER, from Oct 2024]
  • Clément Duhamel [UGA, from Jul 2024 until Sep 2024]
  • Clément Duhamel [INRIA, until Jun 2024]
  • Helene Henon [INRIA]
  • Pierre Lozano [UGA]
  • Exauce Luweh Adjim Ngarti [BULL, CIFRE]
  • Manolis Perrot [INRIA, from Oct 2024]
  • Manolis Perrot [UGA, until Sep 2024]
  • Katarina Radisic [INRIA, from Dec 2024]
  • Katarina Radisic [INRAE]
  • Julien Remy [UGA, from Nov 2024]
  • Angelique Saillet [UGA]
  • Robin Vaudry [CNRS, until Sep 2024]
  • Romain Verdiere [INRIA]
  • Ri Wang [CSC Scholarship, until Oct 2024]
  • Benjamin Zanger [INRIA]

Technical Staff

  • Céline Acary Robert [UGA, Engineer]
  • Maurice Bremond [INRIA, Engineer]
  • Sebastien Valat [INRIA, Engineer, until Jun 2024]

Interns and Apprentices

  • Isaora Bacquet [UGA, Intern, until Jun 2024]
  • Gianluca Cappellari [INRIA, Intern, from Mar 2024 until Jul 2024]
  • Charley Gay [UGA, Intern, from Mar 2024 until Aug 2024]
  • Vincent Meduski [LJK, Intern, from May 2024 until Jul 2024]
  • Maelys Moro [UGA, Intern, from May 2024 until Aug 2024]
  • Sergio Murillo Garcia [INRIA, Intern, from May 2024 until Jul 2024]
  • Julien Remy [UGA, Intern, from Feb 2024 until Sep 2024]
  • Philippe Rosales [UGA, Intern, from May 2024 until Aug 2024]

Administrative Assistant

  • Luce Coelho [INRIA]

2 Overall objectives

The general scope of the AIRSEA project-team is to develop mathematical and computational methods for the modeling of oceanic and atmospheric flows. The mathematical tools used involve both deterministic and statistical approaches. The main research topics cover (a) modeling and coupling, (b) model reduction for sensitivity analysis, coupling and multiscale optimization, (c) sensitivity analysis, parameter estimation and risk assessment, and (d) algorithms for high performance computing. Applications range from climate modeling to the prediction of extreme events.

3 Research program

Recent events have raised questions regarding the social and economic implications of anthropic alterations of the Earth system, i.e. climate change and the associated risks of increasing extreme events. The ocean and atmosphere, coupled with other components (continents and ice), are the building blocks of the Earth system. A better understanding of the ocean-atmosphere system is a key ingredient for improving the prediction of such events. Numerical models are essential tools to understand processes, and to simulate and forecast events at various space and time scales. Geophysical flows generally have a number of characteristics that make them difficult to model. This justifies the development of specifically adapted mathematical methods:

  • Geophysical flows are strongly non-linear. Therefore, they exhibit interactions between different scales, and unresolved small scales (smaller than mesh size) of the flows have to be parameterized in the equations.
  • Geophysical fluids are non-closed systems. They are open-ended in their scope for including and dynamically coupling different physical processes (e.g., atmosphere, ocean, continental water, etc.). Coupling algorithms are thus of primary importance to account for potentially significant feedback.
  • Numerical models contain parameters which cannot be estimated accurately either because they are difficult to measure or because they represent some poorly known subgrid phenomena. There is thus a need for dealing with uncertainties. This is further complicated by the turbulent nature of geophysical fluids.
  • The computational cost of geophysical flow simulations is huge, thus requiring the use of reduced models, multiscale methods and the design of algorithms ready for high performance computing platforms.

Our scientific objectives are divided into four major points. The first objective focuses on developing advanced mathematical methods for both the ocean and atmosphere, and the coupling of these two components. The second objective is to investigate the derivation and use of model reduction to face problems associated with the numerical cost of our applications. The third objective is directed toward the management of uncertainty in numerical simulations. The last objective deals with efficient numerical algorithms for new computing platforms. As mentioned above, the targeted applications cover oceanic and atmospheric modeling and related extreme events using a hierarchy of models of increasing complexity.

3.1 Modeling for oceanic and atmospheric flows

Current numerical oceanic and atmospheric models suffer from a number of well-identified problems. These problems are mainly related to lack of horizontal and vertical resolution, thus requiring the parameterization of unresolved (subgrid scale) processes and control of discretization errors in order to fulfill criteria related to the particular underlying physics of rotating and strongly stratified flows. Oceanic and atmospheric coupled models are increasingly used in a wide range of applications from global to regional scales. Assessment of the reliability of those coupled models is an emerging topic as the spread among the solutions of existing models (e.g., for climate change predictions) has not been reduced with the new generation models when compared to the older ones.

Advanced methods for modeling 3D rotating and stratified flows The continuous increase of computational power and the resulting finer grid resolutions have triggered renewed interest in numerical methods and their relation to physical processes. Going beyond present knowledge requires a better understanding of numerical dispersion/dissipation ranges and their connection to model fine scales. Removing the leading-order truncation error of numerical schemes is thus an active topic of research, and each mathematical tool has to adapt to the characteristics of three-dimensional stratified and rotating flows. Studying the link between discretization errors and subgrid-scale parameterizations is also arguably one of the main challenges.

Complexity of the geometry, boundary layers, strong stratification and lack of resolution are the main sources of discretization errors in the numerical simulation of geophysical flows. This emphasizes the importance of the definition of the computational grids (and coordinate systems) in both the horizontal and vertical directions, and the necessity of truly multiresolution approaches. At the same time, the role of small-scale dynamics on the large-scale circulation has to be taken into account. Such parameterizations may be of deterministic as well as stochastic nature, and both approaches are taken by the AIRSEA team. The design of numerical schemes consistent with the parameterizations is also arguably one of the main challenges for the coming years. This work is complementary and linked to that on parameter estimation described in 3.3.

Ocean Atmosphere interactions and formulation of coupled models State-of-the-art climate models (CMs) are complex systems under continuous development. A fundamental aspect of climate modeling is the representation of air-sea interactions. This covers a large range of issues: parameterizations of atmospheric and oceanic boundary layers, estimation of air-sea fluxes, time-space numerical schemes, non-conforming grids, coupling algorithms, etc. Many developments related to these different aspects were performed over the last 10-15 years, but were in general conducted independently of each other.

The aim of our work is to revisit and enrich several aspects of the representation of air-sea interactions in CMs, paying special attention to their overall consistency with appropriate mathematical tools. We intend to work consistently on the physics and numerics. Using the theoretical framework of global-in-time Schwarz methods, our aim is to analyze the mathematical formulation of the parameterizations in a coupling perspective. From this study, we expect improved predictability in coupled models (this aspect will be studied using techniques described in 3.3). Complementary work on space-time nonconformities and acceleration of convergence of Schwarz-like iterative methods (see 7.1.2) are also conducted.
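To fix ideas on global-in-time (Schwarz-like) coupling, the sketch below applies Jacobi waveform relaxation to a toy coupled system: each component is integrated over the whole time window using the other component's previous iterate as forcing, and the exchange is repeated until the iterates stop changing. The system, discretization and iteration counts are illustrative choices, not those of an actual ocean-atmosphere configuration.

```python
import numpy as np

# Toy coupled system: u' = -u + v, v' = -v + u, with u(0) = 1, v(0) = 0.
# Exact solution: u = 0.5 + 0.5*exp(-2t), v = 0.5 - 0.5*exp(-2t).
T, dt = 1.0, 1e-3
nsteps = int(T / dt)

def integrate(x0, forcing):
    """Forward-Euler solve of x' = -x + forcing over the whole time window."""
    x = np.empty(nsteps + 1)
    x[0] = x0
    for n in range(nsteps):
        x[n + 1] = x[n] + dt * (-x[n] + forcing[n])
    return x

u = np.ones(nsteps + 1)    # initial guesses for the whole trajectories
v = np.zeros(nsteps + 1)
for k in range(20):        # global-in-time sweeps (Jacobi waveform relaxation)
    u_new = integrate(1.0, v)   # each component only sees the OTHER
    v_new = integrate(0.0, u)   # component's previous iterate
    incr = max(np.abs(u_new - u).max(), np.abs(v_new - v).max())
    u, v = u_new, v_new

t = dt * np.arange(nsteps + 1)
err = np.abs(u - (0.5 + 0.5 * np.exp(-2.0 * t))).max()
```

On a bounded time window this iteration converges superlinearly; practical couplers split the window and combine the sweeps with acceleration techniques.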

3.2 Model reduction / multiscale algorithms

The high computational cost of the applications is a common and major concern to have in mind when deriving new methodological approaches. This cost increases dramatically with the use of sensitivity analysis or parameter estimation methods, and more generally with methods that require a potentially large number of model integrations.

Dimension reduction, using either stochastic or deterministic methods, is a way to significantly reduce the number of degrees of freedom, and therefore the computation time, of a numerical model.

Model reduction Reduction methods can be deterministic (proper orthogonal decomposition, other reduced bases) or stochastic (polynomial chaos, Gaussian processes, kriging), and both fields of research are very active. Choosing one method over another strongly depends on the targeted application, which can be as varied as real-time computation, sensitivity analysis (see e.g., section 7.3.1) or optimisation for parameter estimation (see below).

Our goals are multiple, but they share a common need for certified error bounds on the output. Our team has a 4-year history of working on certified reduction methods and has a unique positioning at the interface between deterministic and stochastic approaches. Thus, it seems interesting to conduct a thorough comparison of the two alternatives in the context of sensitivity analysis. Efforts will also be directed toward the development of efficient greedy algorithms for the reduction, and the derivation of goal-oriented sharp error bounds for non-linear models and/or non-linear outputs of interest. This will be complementary to our work on the deterministic reduction of the parametrized viscous Burgers and Shallow Water equations, where the objective is to obtain sharp error bounds to provide confidence intervals for the estimation of sensitivity indices.
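As a deliberately simplified illustration of deterministic reduction, the snippet below builds a POD (proper orthogonal decomposition) basis from snapshots of a hypothetical parametrized field; the tail singular values then give a certified bound on the projection error of the snapshot set. The snapshot family is synthetic and stands in for full model runs.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)

# Snapshot matrix: one column per sampled parameter value of a hypothetical
# parametrized solution u(x; p) (in practice, columns come from model runs).
params = rng.uniform(0.5, 2.0, size=200)
S = np.array([np.sin(np.pi * p * x) / p for p in params]).T

# POD basis = leading left singular vectors of the snapshot matrix.
U, sv, _ = np.linalg.svd(S, full_matrices=False)
r = 8
Ur = U[:, :r]

# The Frobenius projection error of the snapshots equals the norm of the
# discarded singular values (Eckart-Young), giving a certified bound.
proj = Ur @ (Ur.T @ S)
err_fro = np.linalg.norm(S - proj)
tail = np.sqrt(np.sum(sv[r:] ** 2))
```

The same tail-of-spectrum quantity is what a greedy or goal-oriented method monitors when deciding how many modes to keep.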

Reduced models for coupling applications Global and regional high-resolution oceanic models are either coupled to an atmospheric model or forced at the air-sea interface by fluxes computed empirically, preventing proper physical feedback between the two media. Thanks to high-resolution observational studies, the existence of air-sea interactions at oceanic mesoscales (i.e., at 𝒪(1 km) scales) has been unambiguously shown. Those interactions can be represented in coupled models only if the oceanic and atmospheric models are run on the same high-resolution computational grid, and are absent in a forced mode. Fully coupled models at high resolution are seldom used because of their prohibitive computational cost. The derivation of a reduced model as an alternative between a forced mode and the use of a full atmospheric model is an open problem.

Multiphysics coupling often requires iterative methods to obtain a mathematically correct numerical solution. To mitigate the cost of the iterations, we will investigate the possibility of using reduced-order models for the iterative process. We will consider different ways of deriving a reduced model: coarsening of the resolution, degradation of the physics and/or numerical schemes, or simplification of the governing equations. At a mathematical level, we will strive to study the well-posedness and the convergence properties when reduced models are used. Indeed, running an atmospheric model at the same resolution as the ocean model is generally too expensive to be manageable, even for moderate resolution applications. To account for important fine-scale interactions in the computation of the air-sea boundary condition, the objective is to derive a simplified boundary layer model that is able to represent important 3D turbulent features in the marine atmospheric boundary layer.

Reduced models for multiscale optimization The field of multigrid methods for optimisation has seen tremendous development over the past few decades. However, it has not been applied to oceanic and atmospheric problems, apart from some crude (non-converging) approximations or applications to simplified, low-dimensional models. This is mainly due to the high complexity of such models and to the difficulty of handling several grids at the same time. Moreover, due to complex boundaries and physical phenomena, the grid interactions and transfer operators are not trivial to define.

Multigrid solvers (or multigrid preconditioners) are efficient methods for the solution of variational data assimilation problems. We would like to take advantage of these methods to tackle the optimization problem in high-dimensional spaces. A high-dimensional control space is obtained when dealing with the estimation of parameter fields, or with the control of the full 4D (space-time) trajectory. The latter is important since it enables model errors to be taken into account. In that case, multigrid methods can be used to solve the large scales of the problem at a lower cost, potentially coupled with a scale decomposition of the variables themselves.
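The two-grid correction cycle at the heart of such solvers can be sketched on a small symmetric positive definite system (a 1D Poisson matrix, standing in here for the Hessian of a quadratic variational problem). All operators, grid sizes and smoothing parameters below are generic textbook choices, not those of an actual assimilation system.

```python
import numpy as np

n = 63                                   # fine-grid unknowns, h = 1/(n+1)
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
b = np.ones(n)

nc = (n - 1) // 2                        # coarse grid (every other point)
R = np.zeros((nc, n))                    # full-weighting restriction
for i in range(nc):
    R[i, 2 * i:2 * i + 3] = [0.25, 0.5, 0.25]
P = 2.0 * R.T                            # linear interpolation
Ac = R @ A @ P                           # Galerkin coarse operator

def smooth(u, nu):
    """Weighted-Jacobi smoothing (omega = 2/3; diagonal of A is 2/h^2)."""
    for _ in range(nu):
        u = u + (2.0 / 3.0) * (h**2 / 2.0) * (b - A @ u)
    return u

u = np.zeros(n)
for _ in range(12):                      # two-grid cycles
    u = smooth(u, 2)                     # pre-smoothing
    u = u + P @ np.linalg.solve(Ac, R @ (b - A @ u))   # coarse correction
    u = smooth(u, 2)                     # post-smoothing
res = np.linalg.norm(b - A @ u) / np.linalg.norm(b)
```

Each cycle reduces the residual by a roughly mesh-independent factor, which is what makes the approach attractive for large control spaces.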

3.3 Dealing with uncertainties

There are many sources of uncertainties in numerical models. They are due to imperfect external forcing, poorly known parameters, missing physics and discretization errors. Studying these uncertainties and their impact on the simulations is a challenge, mostly because of the high dimensionality and non-linear nature of the systems. To deal with these uncertainties we work on three axes of research, which are linked: sensitivity analysis, parameter estimation and risk assessment. They are based on either stochastic or deterministic methods.

Sensitivity analysis Sensitivity analysis (SA), which links uncertainty in the model inputs to uncertainty in the model outputs, is a powerful tool for model design and validation. First, it can be a pre-stage for parameter estimation (see 3.3), allowing for the selection of the most significant parameters. Second, SA permits understanding and quantifying (possibly non-linear) interactions induced by the different processes defining, e.g., realistic ocean-atmosphere models. Finally, SA allows for the validation of models, checking that the estimated sensitivities are consistent with what is expected from theory. For ocean, atmosphere and coupled systems, only first-order deterministic SA is performed, neglecting the initialization process (data assimilation). AIRSEA members and collaborators have proposed using second-order information to provide consistent sensitivity measures, but so far this has only been applied to simple academic systems. Metamodels are now commonly used, due to the cost induced by each evaluation of complex numerical models: mostly Gaussian processes, whose probabilistic framework allows for the development of specific adaptive designs, and polynomial chaos, not only in the context of intrusive Galerkin approaches but also in a black-box approach. Until recently, global SA was based primarily on a set of engineering practices. New mathematical and methodological developments have led to the numerical computation of Sobol' indices, with confidence intervals accounting for both metamodel and estimation errors. Approaches have also been extended to the case of dependent inputs, functional inputs and/or outputs, and stochastic numerical codes. Other types of indices and generalizations of Sobol' indices have also been introduced.

Concerning the stochastic approach to SA, we plan to work with parameters that show spatio-temporal dependencies and to continue toward more realistic applications where the input space is of huge dimension with highly correlated components. Sensitivity analysis for dependent inputs also introduces new challenges. In our applicative context, it would seem prudent to carefully learn the spatio-temporal dependencies before running a global SA. In the deterministic framework, we focus on second-order approaches where the sought sensitivities are related to the optimality system rather than to the model; i.e., we consider the whole forecasting system (model plus initialization through data assimilation).

All these methods allow for computing sensitivities and, more importantly, a posteriori error statistics.
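For concreteness, first-order Sobol' indices can be estimated by plain Monte Carlo with a pick-and-freeze scheme, as in the minimal sketch below; the test function, sample size and estimator variant are illustrative and unrelated to the team's production tools.

```python
import numpy as np

def first_order_sobol(f, d, n=200_000, seed=1):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol' indices."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    yA, yB = f(A), f(B)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]          # "freeze" input i to its value in A
        # cov(f(A), f(AB_i)) estimates the partial variance V_i
        S[i] = np.mean(yA * (f(ABi) - yB)) / var
    return S

# Toy linear model y = x1 + 2*x2 on [0,1]^2: exact indices S1 = 0.2, S2 = 0.8.
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```

In practice, `f` is replaced by a metamodel of the expensive code, and bootstrap confidence intervals are attached to the estimated indices.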

Parameter estimation Advanced parameter estimation methods are barely used in ocean, atmosphere and coupled systems, mostly due to the difficulty of deriving adequate response functions, a lack of familiarity with these methods in the ocean-atmosphere community, and the huge associated computing costs. In the presence of strong uncertainties on the model as well as on parameter values, simulation and inference are closely associated. Filtering for data assimilation and Approximate Bayesian Computation (ABC) are two examples of such an association.

The stochastic approach can be compared with the deterministic approach, which makes it possible to determine the sensitivity of the flow to parameters and to optimize their values relying on data assimilation. This approach has already been shown to be capable of selecting a reduced space of the most influential parameters in the local parameter space and of adapting their values to correct errors committed by the numerical approximation. It relies on automatic differentiation of the source code with respect to the model parameters, and on optimization of the resulting raw code.

AIRSEA assembles all the required expertise to tackle these difficulties. As mentioned previously, the choice of parameterization schemes and their tuning has a significant impact on the results of model simulations. Our research will focus on parameter estimation for parameterized Partial Differential Equations (PDEs) and also for parameterized Stochastic Differential Equations (SDEs). Deterministic approaches are based on optimal control methods and are local in the parameter space (i.e., the result depends on the starting point of the estimation), but thanks to adjoint methods they can cope with a large number of unknowns that can also vary in space and time. Multiscale optimization techniques as described in 7.2 will be one of the tools used. This in turn can be used either to propose a better (and smaller) parameter set or as a criterion for discriminating between parameterization schemes. Statistical methods are global in the parameter space but may suffer from the curse of dimensionality. However, the notion of parameter can also be extended to functional parameters. We may consider as a parameter a functional entity such as a time-dependent boundary condition, or a probability density function in a stationary regime. For these purposes, non-parametric estimation will also be considered as an alternative.
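The deterministic (variational) route can be illustrated on a deliberately tiny problem: calibrating the decay rate of a scalar model against synthetic observations by gradient descent. The sensitivity is written in closed form here; in realistic models this gradient is what an adjoint code, obtained by automatic differentiation, provides. All values are illustrative.

```python
import numpy as np

# "Truth" model u(t) = exp(-k t) and synthetic observations from k_true.
k_true = 0.7
t_obs = np.linspace(0.2, 3.0, 15)
d = np.exp(-k_true * t_obs)

def cost_and_grad(k):
    """Least-squares misfit and its exact gradient with respect to k."""
    u = np.exp(-k * t_obs)
    misfit = u - d
    J = 0.5 * np.sum(misfit ** 2)
    dJ = np.sum(misfit * (-t_obs) * u)   # adjoint codes yield this product
    return J, dJ

k = 2.0                                  # first guess
for _ in range(500):                     # plain gradient descent, fixed step
    J, dJ = cost_and_grad(k)
    k -= 0.3 * dJ
```

Real applications replace the closed-form sensitivity by an adjoint integration, and the fixed-step descent by quasi-Newton or multiscale schemes.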

Risk assessment Risk assessment in the multivariate setting suffers from a lack of consensus on the choice of indicators. Moreover, once the indicators are designed, it still remains to develop estimation procedures that are efficient even for high risk levels. Recent developments in the assessment of financial risk have to be considered with caution, as methods designed for general financial decisions may not transfer directly to environmental risk assessment. Modeling and quantifying uncertainties related to extreme events is of central interest in environmental sciences. In relation to our scientific targets, risk assessment is very important in several areas: hydrological extreme events, cyclone intensity, storm surges, etc. Environmental risks most of the time involve several aspects which are often correlated. Moreover, even in the ideal case where the focus is on a single risk source, we have to face the temporal and spatial nature of environmental extreme events. The study of extremes within a spatio-temporal framework remains an emerging field, where the development of adapted statistical methods could lead to major progress in terms of geophysical understanding and risk assessment, thus coupling data and model information.

Based on the above considerations, we aim to answer the following scientific questions: how to measure risk in a multivariate/spatial framework? How to estimate risk in a non-stationary context? How to reduce dimension (see 3.2) for a better estimation of spatial risk?

Extreme events are rare, which means there is little data available from which to infer risk measures. Risk assessment based on observations therefore relies on multivariate extreme value theory. Interacting particle systems for the analysis of rare events are commonly used in the computer experiments community. An open question is the pertinence of such tools for the evaluation of environmental risk.
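A standard univariate building block, shown here only to fix ideas, is the peaks-over-threshold method: excesses over a high threshold are modeled by a generalized Pareto distribution and extrapolated to quantiles beyond the observed range. For brevity the sketch fits only the scale (an exponential tail, i.e., zero shape parameter); a full analysis would also estimate the shape, and the data and threshold are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.exponential(scale=1.0, size=200_000)   # synthetic "observations"

u = 2.0                                        # high threshold
exc = X[X > u] - u                             # threshold excesses
zeta = exc.size / X.size                       # empirical P(X > u)
sigma = exc.mean()                             # MLE of the exponential scale

# Extrapolated p-quantile: solve zeta * exp(-(q - u)/sigma) = 1 - p.
p = 0.999
q = u + sigma * np.log(zeta / (1.0 - p))
# For Exp(1) data the true 0.999 quantile is -log(0.001), about 6.91.
```

The extrapolation step is exactly where model information can complement the scarce observations mentioned above.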

Most numerical models are unable to accurately reproduce extreme events. There is therefore a real need to develop efficient assimilation methods for the coupling of numerical models and extreme data.

3.4 High performance computing

Methods for sensitivity analysis, parameter estimation and risk assessment are extremely costly due to the necessary number of model evaluations. This number of simulations, which depends on the complexity of the application, the number of input variables and the desired quality of the approximations, requires considerable computational resources. To this end, the AIRSEA team is an intensive user of HPC computing platforms, particularly grid computing platforms. The associated grid deployment has to handle the scheduling of a huge number of computational requests and the data-management links between these requests, all as automatically as possible. In addition, there is an increasing need for efficient numerical algorithms specifically designed for new (or future) computing architectures, and this is part of our scientific objectives. Given the computational cost of our applications, the evolution of high performance computing platforms has to be taken into account for several reasons. While our applications are able to exploit space parallelism to its full extent (oceanic and atmospheric models are traditionally based on a spatial domain decomposition method), the spatial discretization step size limits the efficiency of traditional parallel methods. The inherent parallelism is thus modest, particularly for relatively coarse resolutions with very long integration times (e.g., climate modeling). Paths toward new programming paradigms are thus needed. As a step in that direction, we plan to focus our research on parallel in time methods.

New numerical algorithms for high performance computing Parallel in time methods can be classified into three main groups. In the first group, we find methods using parallelism across the method, such as parallel integrators for ordinary differential equations. The second group considers parallelism across the problem. Falling into this category are methods such as waveform relaxation where the space-time system is decomposed into a set of subsystems which can then be solved independently using some form of relaxation techniques or multigrid reduction in time. The third group of methods focuses on parallelism across the steps. One of the best known algorithms in this family is parareal. Other methods combining the strengths of those listed above (e.g., PFASST) are currently under investigation in the community.

Parallel in time methods are iterative methods that may require a large number of iterations before convergence. Our first focus will be on the convergence analysis of parallel in time (Parareal / Schwarz) methods for the equation systems of oceanic and atmospheric models. Our second objective will be the construction of fast (approximate) integrators for these systems. This part is naturally linked to the model reduction methods of Section 7.2.1. Fast approximate integrators are required both in the Schwarz algorithm (where a first guess of the boundary conditions is needed) and in the Parareal algorithm (where the fast integrator is used to connect the different time windows). Our main application of these methods will be climate (i.e., very long time) simulations. Our second application of parallel in time methods will be in the context of optimization methods. Indeed, one of the major drawbacks of the optimal control techniques used in 3.3 is a lack of intrinsic parallelism in comparison with ensemble methods. Here, parallel in time methods also offer a route to better efficiency. The mathematical key point is how to efficiently couple two iterative methods (i.e., parallel in time and optimization methods).
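The structure of the parareal iteration can be sketched on a scalar linear ODE, with the exact exponential as the (expensive) fine propagator and a single implicit-Euler step as the (cheap) coarse one; in real applications both are full model integrations, and the parameters below are illustrative.

```python
import numpy as np

lam = -1.0                       # du/dt = lam * u, u(0) = 1
T, N = 2.0, 20                   # time horizon split into N windows
dt = T / N

def fine(u):
    """Expensive fine propagator over one window (exact here)."""
    return u * np.exp(lam * dt)

def coarse(u):
    """Cheap coarse propagator: one implicit-Euler step."""
    return u / (1.0 - lam * dt)

U = np.empty(N + 1)
U[0] = 1.0
for n in range(N):               # initial sequential coarse sweep
    U[n + 1] = coarse(U[n])

for k in range(5):               # parareal corrections
    Fu = fine(U[:-1])            # fine solves: independent, hence parallel
    Unew = np.empty_like(U)
    Unew[0] = U[0]
    for n in range(N):           # cheap sequential coarse correction
        Unew[n + 1] = coarse(Unew[n]) + Fu[n] - coarse(U[n])
    U = Unew

exact = np.exp(lam * dt * np.arange(N + 1))
err = np.abs(U - exact).max()
```

After k iterations the first k windows are exact; the speed-up comes from stopping long before that, once the error is below the discretization level.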

4 Application domains

The Ocean-Atmosphere System

The evolution of natural systems, in the short, mid, or long term, has extremely important consequences for both the global Earth system and humanity. Forecasting this evolution is thus a major challenge from the scientific, economic, and human viewpoints.

Humanity has to face the problem of global warming, brought on by the emission of greenhouse gases from human activities. This warming will probably cause huge changes at global and regional scales, in terms of climate, vegetation and biodiversity, with major consequences for local populations. Research has therefore been conducted over the past 15 to 20 years in an effort to model the Earth's climate and forecast its evolution in the 21st century in response to anthropic action.

With regard to short-term forecasts, the best and oldest example is of course weather forecasting. Meteorological services have been providing daily short-term forecasts for several decades which are of crucial importance for numerous human activities.

Numerous other problems can also be mentioned, like seasonal weather forecasting (to enable powerful phenomena like an El Niño event or a drought period to be anticipated a few months in advance), operational oceanography (short-term forecasts of the evolution of the ocean system to provide services for the fishing industry, ship routing, defense, or the fight against marine pollution) or the prediction of floods.

As mentioned previously, mathematical and numerical tools are omnipresent and play a fundamental role in these areas of research. In this context, the vocation of AIRSEA is not to carry out numerical prediction, but to address mathematical issues raised by the development of prediction systems for these application fields, in close collaboration with geophysicists.

5 Social and environmental responsibility

Most of the research activities of the AIRSEA team are directed towards the improvement of numerical modeling systems for the ocean and the atmosphere. This includes the development of appropriate numerical methods, model/parameter calibration using observational data and uncertainty quantification for decision making. The AIRSEA team members work in close collaboration with researchers in the field of geophysical fluids and are partners in several interdisciplinary projects. They also strongly contribute to the development of state-of-the-art numerical systems, like NEMO and CROCO in the ocean community.

6 New software, platforms, open data

6.1 New software

6.1.1 AGRIF

  • Name:
    Adaptive Grid Refinement In Fortran
  • Keyword:
    Mesh refinement
  • Scientific Description:
    AGRIF is a Fortran 90 package for the integration of full adaptive mesh refinement (AMR) features within a multidimensional finite difference model written in Fortran. Its main objective is to simplify the integration of AMR potentialities within an existing model with minimal changes. Capabilities of this package include the management of an arbitrary number of grids, horizontal and/or vertical refinements, dynamic regridding, parallelization of the grids interactions on distributed memory computers. AGRIF requires the model to be discretized on a structured grid, like it is typically done in ocean or atmosphere modelling.
  • Functional Description:
    AGRIF is a Fortran 90 package for the integration of full adaptive mesh refinement (AMR) features within a multidimensional finite difference model written in Fortran. Its main objective is to simplify the integration of AMR potentialities within an existing model with minimal changes. Capabilities of this package include the management of an arbitrary number of grids, horizontal and/or vertical refinements, dynamic regridding, parallelization of the grids interactions on distributed memory computers. AGRIF requires the model to be discretized on a structured grid, like it is typically done in ocean or atmosphere modelling.
  • News of the Year:
    Within the framework of a European Copernicus contract, improvements have been made to the management of parallelization (assignment of processors to computational grids).
  • Contact:
    Laurent Debreu
  • Participant:
    Laurent Debreu

6.1.2 NEMOVAR

  • Name:
    Variational data assimilation for NEMO
  • Keywords:
    Oceanography, Data assimilation, Adjoint method, Optimal control
  • Functional Description:
    NEMOVAR is a state-of-the-art multi-incremental variational data assimilation system with both 3D and 4D var capabilities, and which is designed to work with NEMO on the native ORCA grids. The background error covariance matrix is modelled using balance operators for the multivariate component and a diffusion operator for the univariate component. It can also be formulated as a linear combination of covariance models to take into account multiple correlation length scales associated with ocean variability on different scales. NEMOVAR has recently been enhanced with the addition of ensemble data assimilation and multi-grid assimilation capabilities. It is used operationally at both ECMWF and the Met Office (UK).
  • Contact:
    Arthur Vidard
  • Partners:
    CERFACS, ECMWF, Met Office

6.1.3 SWEET

  • Name:
    Shallow Water Equation Environment for Tests, Awesome!
  • Keywords:
    High-Performance Computing, Time integration methods
  • Functional Description:

    SWEET supports periodic boundary conditions for:
    - the bi-periodic plane (2D torus)
    - the sphere

    Space discretization:
    - plane: spectral methods based on Fourier space
    - plane: finite differences
    - sphere: spherical harmonics

    Time discretization:
    - explicit Runge-Kutta
    - implicit Runge-Kutta
    - Crank-Nicolson
    - semi-Lagrangian
    - parallel-in-time methods: Parareal, PFASST, rational approximation of exponential integrators (REXI), and many more time steppers

    Special features:
    - graphical user interface
    - fast Helmholtz solver in spectral space
    - easy-to-code in C++

    Supported applications:
    - shallow-water equations on the plane/sphere
    - advection
    - Burgers' equation

  • Contact:
    Martin Schreiber
  • Partners:
    University of São Paulo, Technical University of Munich (TUM)

7 New results

7.1 Modeling for Oceanic and Atmospheric flows

7.1.1 Numerical Schemes for Ocean Modeling

Participants: Eric Blayo, Laurent Debreu, Florian Lemarié, Gurvan Madec, Antoine Nasser, Pierre Lozano.

Dealing with complex geometries

Accurate and stable implementation of bathymetry boundary conditions in ocean models remains a challenging problem. Generalized terrain-following coordinates are often used in ocean models, but they require smoothing the bathymetry to reduce pressure gradient errors. Geopotential z-coordinates are a common alternative that avoids pressure gradient and numerical diapycnal diffusion errors, but they generate spurious flows due to their “staircase” geometry. In 50, we introduce a new Brinkman volume penalization to approximate the no-slip boundary condition and the complex geometry of bathymetry in ocean models. This approach corrects the staircase effect of z-coordinates, does not introduce any new stability constraint related to the geometry of the bathymetry, and is easy to implement in an existing ocean model. The porosity parameter allows modelling subgrid-scale details of the geometry. As an illustration, through the use of penalization methods, the Gulf Stream detachment is correctly represented in a 1/8-degree simulation (see Figure 1). These new results on realistic applications have been published in 51. This opens the door to a clear improvement of climate models, in which a good representation of this mechanism is essential. This work has been extended to z-coordinate ocean models through the PhD work of A. Nasser 65. We have also investigated the representation of coastlines and its sensitivity to the mesh orientation in 66.
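The principle of volume penalization can be sketched on a toy 1D diffusion problem (a minimal illustration with made-up parameter values, not the CROCO/NEMO implementation): the obstacle is not meshed explicitly; instead a damping term -(χ/η)u, with mask χ and small permeability η, drives the solution toward the no-slip value inside the unresolved geometry.

```python
import numpy as np

# Toy 1D penalized diffusion: "flow" u relaxes to zero inside an obstacle.
N = 200
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
nu, eta, dt = 1e-3, 1e-4, 1e-4        # viscosity, permeability, time step
chi = ((x > 0.4) & (x < 0.6)).astype(float)   # obstacle mask (the "bathymetry")
u = np.ones(N)                         # initial flow field
for _ in range(5000):
    lap = np.zeros(N)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    # explicit diffusion, implicit treatment of the stiff penalization term
    u = (u + dt * nu * lap) / (1.0 + dt * chi / eta)
    u[0] = u[-1] = 1.0                 # far-field boundary values
# u is ~0 inside the obstacle and O(1) in the fluid region
```

The implicit treatment of the penalization term is what avoids any new stability constraint: the damping is unconditionally stable however small η is chosen.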

Beyond the hydrostatic assumption

With the increase of resolution, the hydrostatic assumption becomes less valid, and the AIRSEA group also works on the development of non-hydrostatic ocean models. The treatment of non-hydrostatic incompressible flows leads to a 3D elliptic system for pressure that can be ill-conditioned, in particular with non-geopotential vertical coordinates. That is why we favor the use of the non-hydrostatic compressible equations, which remove the need for such a 3D elliptic solve at the price of re-including acoustic waves. For that purpose, a detailed analysis of acoustic-gravity waves in a free-surface compressible and stratified ocean was carried out in 47, partly within the PhD of E. Duval. The proposed numerical approach has been implemented in the CROCO ocean model and tested in various flow configurations 56, 62.

Most large-scale ocean models are based on the so-called “primitive equations”, which use the hydrostatic and incompressibility assumptions. However, with the increase of resolution, a systematic use of the hydrostatic assumption becomes less valid. The French regional oceanic modeling system CROCO (Coastal and Regional Ocean COmmunity model, www.croco-ocean.org) developed these last years allows for the use of either the hydrostatic incompressible (HI) equations or the non-hydrostatic compressible (NHC) equations, the latter being much more computationally expensive. A natural idea is thus to limit the use of the NHC version to particular regions of interest where the hydrostatic assumption is not relevant, and to nest such local NHC zooms within a larger model using the HI version. However, such a coupling is quite delicate from a mathematical point of view, due to the different nature of the hydrostatic and non-hydrostatic equations (where the vertical velocity is either a diagnostic or a prognostic variable). In his PhD, P. Lozano is working on the design of methods to couple local non-hydrostatic models to larger-scale hydrostatic ones. He analyzed the constraints, in frequency space, associated with wave propagation between the hydrostatic and non-hydrostatic domains, and diagnosed their impact within a numerical model. This has been numerically validated in simplified configurations allowing the use of analytical solutions. Avenues are being explored to improve current coupling procedures, notably through the use of vertical mode decomposition and/or Perfectly Matched Layer techniques.


Figure 1: Mean sea surface height in CROCO simulations at different resolutions for the standard case with terrain-following (σ) coordinates (top) and penalization (below). The third row shows the AVISO product for comparison

7.1.2 Coupling Methods for Oceanic and Atmospheric Models and representation of the Air-Sea Interface

Participants: Eric Blayo, Simon Clément, Florian Lemarié.

The Airsea team is involved in the modeling and algorithmic aspects of ocean-atmosphere (OA) coupling. We have been actively working on the analysis of such coupling both in terms of continuous and numerical formulations. Particular attention is paid to the inclusion of physical parameterizations in our theoretical framework. Our activities have led to practical implementations in state-of-the-art oceanic and Earth system models. Our focus during the last few years has been on the following topics:

  1. Continuous and discrete analysis of Schwarz algorithms for OA coupling    Members of the Airsea team have been developing coupling approaches for several years, based on so-called Schwarz algorithms. Schwarz-like domain decomposition methods are very popular in mathematics, computational sciences and engineering, notably for the implementation of coupling strategies. However, for complex applications (like OA coupling) it is challenging to have an a priori knowledge of the convergence properties of such methods. Indeed, coupled problems arising in Earth system modeling often exhibit sharp turbulent boundary layers whose parameterizations lead to peculiar transmission conditions and diffusion coefficients 67. In 11, the well-posedness of the non-linear coupling problem including parameterizations has been addressed, and a detailed continuous and discrete analysis of the convergence properties of the Schwarz methods has been pursued to disentangle the impact of the different parameters at play in such coupling problems. A general framework has been proposed to study the convergence properties at a (semi-)discrete level to allow a systematic comparison with the results obtained from the continuous problem. Such a framework allows the study of more complex coupling problems whose formulation is representative of the discretization used in realistic coupled models.
  2. A simplified atmospheric boundary layer model for oceanic purposes    Part of our activities within the ongoing SHOM 19CP07 project is dedicated to the development of a simplified model of the marine atmospheric boundary layer (called ABL1d), of intermediate complexity between a bulk parameterization and a full three-dimensional atmospheric model, and to its integration into the NEMO general circulation model 60. A constraint in the conception of such a simplified model is to allow an apt representation of the downward momentum mixing mechanism and partial re-energization 7 of the ocean by the atmosphere, while keeping the computational efficiency and flexibility inherent to ocean-only modeling. Realistic applications of the coupled NEMO-ABL1d modeling system have been carried out, and the methodology is being tested for integration into the operational forecasting system operated by Mercator-Ocean 3. Over the last year the approach has also been implemented in the CROCO ocean model. A focus has been to find adequate ways to fill some gaps in the 1D approach using multiple-scale asymptotic techniques to cast the equations in terms of perturbations around an ambient state given by large-scale datasets. Such a simplified model, called ABL3d, leads to clear improvements over ABL1d for academic semi-idealized cases. The objective is now to extend the analysis to realistic cases in the framework of the ENMASSE project funded by the Copernicus Marine Environment Monitoring Service (CMEMS). In parallel, in the framework of the AIRSEA/Eviden collaboration, an objective is to design, via learning strategies, a surrogate of the response of the atmospheric boundary layer to anomalies in ocean surface temperatures and currents. A CIFRE PhD on this subject should start during the year 2025.
  3. Impact of the coupling formulation in a realistic context    A Schwarz-like iterative method has been applied in a state-of-the-art Earth-System model (IPSL-CM6) to evaluate the consequences of inaccuracies in the usual ad-hoc ocean-atmosphere coupling algorithms used in realistic models 63, 64. Numerical results obtained with an iterative process show large differences at sunrise and sunset compared to usual ad-hoc algorithms, thus showing that synchrony errors inherent to ad-hoc coupling methods can be large. However, such an iterative coupling method is too costly to be implemented operationally in climate models. In order to keep the computational cost almost constant w.r.t. the usual non-iterative approach, the iterative algorithm would need to be provided with a first guess close to the optimum, so as to achieve quasi-convergence in a single iteration. We aim to obtain such an approximation of the optimal state by learning techniques (work of A. Monsimer as part of the programme national de recherche en intelligence artificielle - PNRIA). A learning network based on a test dataset restricted to a few fixed regions has been set up, and is indeed able to provide a first iteration of good quality. The remaining step is to scale up to a global scale, with a view to making the method operationally applicable to a climate model. As part of V. Schüller's thesis in collaboration with Lund University, a single-column version of the EC-Earth climate model is being used to further our study of coupling algorithms. The goal is to extend the analysis of 63 using a less complex model that remains representative of the parameterization schemes employed in 3D models. This single-column model has made it possible to focus on ocean-atmosphere coupling in the presence of sea ice. It was identified that the convergence of an iterative coupling in this context is compromised due to non-differentiabilities in the parameterization of albedo over sea ice. 
This work is currently being finalized, and a publication is in preparation.
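The convergence behaviour that these analyses target can be illustrated on a toy problem: an alternating Schwarz iteration for a 1D Poisson equation split into two overlapping subdomains, exchanging Dirichlet data at the interfaces (a minimal sketch unrelated to the actual OA coupling codes; grid sizes and overlap are arbitrary).

```python
import numpy as np

def solve_poisson(x, f, left_bc, right_bc):
    """Solve -u'' = f on a uniform grid x with Dirichlet BCs (dense solve for clarity)."""
    n, h = len(x), x[1] - x[0]
    A = np.zeros((n, n))
    b = f.copy()
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = left_bc, right_bc
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -1 / h**2, 2 / h**2, -1 / h**2
    return np.linalg.solve(A, b)

N = 101
x = np.linspace(0.0, 1.0, N)
f = np.ones(N)                 # -u'' = 1, u(0) = u(1) = 0 -> u = x(1-x)/2
i1, i2 = 60, 40                # subdomain 1: x[:i1+1], subdomain 2: x[i2:] (overlap)
u = np.zeros(N)                # initial interface guess
for _ in range(30):            # alternating Schwarz sweeps
    u1 = solve_poisson(x[:i1 + 1], f[:i1 + 1], 0.0, u[i1])  # Dirichlet data from domain 2
    u2 = solve_poisson(x[i2:], f[i2:], u1[i2], 0.0)         # Dirichlet data from domain 1
    u[:i1 + 1], u[i2:] = u1, u2
exact = 0.5 * x * (1.0 - x)
```

For this overlap the interface error contracts by a fixed factor per sweep, which is exactly the kind of convergence rate the continuous and discrete analyses quantify for the much harder OA coupling problem.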

These topics are addressed through strong collaborations between the applied mathematicians and the climate and operational community (Meteo-France, Ifremer, SHOM, Mercator-Ocean, LMD, and LOCEAN). Airsea team members play a major role in the structuration of a multi-disciplinary scientific community working on ocean-atmosphere coupling spanning a broad range from mathematical theory to practical implementations in climate and operational models.

7.1.3 Physics-Dynamics coupling: Consistent subgrid-scale modeling

Participants: Eric Blayo, Simon Clément, Florian Lemarié, Manolis Perrot.

About five years ago, the AIRSEA team started working on new topics around physics-dynamics coupling 55. Schematically, numerical models consist of two blocks generally identified as “physics” and “dynamics”, which are often developed separately. The “physics” represents unresolved or under-resolved processes with typical scales below model resolution, while the “dynamics” corresponds to a discrete representation in space and time of resolved processes. Unresolved processes cannot be ignored because they directly influence the resolved part of the flow, since energy is continuously transferred between scales. The interplay between resolved and unresolved scales is a large, incomplete and complex topic for which there is still much to do within the Earth system modeling community 59. During the last year we worked on the following topics:

  1. Representation of penetrative convection in oceanic models    Accounting for the mean effect of subgrid scale intermittent coherent structures like convective plumes is very challenging. Currently this is done very crudely in ocean models (vertical diffusion is locally increased to “mix” unstable density profiles). A difficulty is that in convective conditions, turbulent fluxes are dominated by processes unrelated to local gradients, thus invalidating the usual downgradient (a.k.a. eddy-diffusion) approach. In the framework of the PhD of M. Perrot, a first step is to study the derivation of mass-flux convection schemes arising from a multi-fluid decomposition to extend them specifically to the oceanic context. This extension is done under certain “consistency” constraints: energetic considerations and scale-awareness of the resulting model 35. Reference LES simulations have been developed to guide the formulation of unknown/uncertain free parameters (coefficients or functions) in the proposed extended mass-flux scheme 36. The Bayesian calibration of such free parameters will be undertaken.
  2. Partially Lagrangian implementation of location uncertainty    Recent oceanic parameterizations "under Location Uncertainty" are based on the hypothesis that the small-scale processes are uncorrelated in time. The implementation of such parameterizations can be done in a Lagrangian manner, with rapidly moving grid points. The possibility of keeping the grid close to its original disposition was studied by S. Clement to understand how the time correlation induced by this constraint can be compensated by an Eulerian term. In particular he showed that the easy-to-implement Stochastic Grid Perturbation method 61 can be interpreted in the framework of Location Uncertainty 24.
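The downgradient versus mass-flux distinction in topic 1 can be illustrated with a toy vertical flux computation (all profiles and coefficients below are purely illustrative, not a calibrated scheme): the eddy-diffusion part acts on the local gradient, while the mass-flux part transports plume anomalies independently of that gradient.

```python
import numpy as np

# Illustrative vertical grid and profiles (values are made up for the sketch)
z = np.linspace(0.0, 1000.0, 51)                      # vertical coordinate (m)
theta_env = 20.0 - 0.002 * z                          # environment temperature
theta_plume = theta_env + 0.5 * np.exp(-z / 300.0)    # warmer convective plume
K = 1e-2 * np.ones_like(z)                            # eddy diffusivity (m^2/s)
M = 5e-3 * np.exp(-z / 500.0)                         # convective mass flux (m/s)

dtheta_dz = np.gradient(theta_env, z)
flux_ed = -K * dtheta_dz                  # local, downgradient (eddy-diffusion) part
flux_mf = M * (theta_plume - theta_env)   # non-local mass-flux part
flux_total = flux_ed + flux_mf            # EDMF-style decomposition of the flux
```

Note that the mass-flux part stays positive (upward heat transport by the plume) even where the environmental gradient would make a purely downgradient closure vanish or reverse, which is precisely why convective conditions defeat eddy-diffusion alone.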

Those topics are addressed through collaborations with the climate and operational community (Meteo-France, SHOM, Mercator-Ocean, and IGE). Two projects are currently funded, one on the energetically consistent discretization aspect (SHOM 19CP07, 2020-2025, PI: F. Lemarié) and one on the convection parameterization (Institut des Mathématiques pour la Planète Terre, 2021-2024, PIs: F. Lemarié and G. Madec). Furthermore, the Airsea team is involved in the PLUME ANR project, one objective of which is to use LES numerical simulations and laboratory experiments of deep convection to calibrate and evaluate physical parameterizations like the one developed in 36. The Airsea team was also involved in organizing a workshop, funded by LEFE, on the “representation of fine oceanic scales in numerical simulations”, which took place in Brest in October 2024. Since geophysical fluid dynamics and numerical techniques are deeply interconnected, the workshop aimed to collectively address the scientific challenges and future stakes at their crossroads: a review of parameterizations widely used in our fields, the continuum between resolved and subgrid scales in ocean models, the exploitation of the model hierarchy operating at varying spatial resolutions for the calibration of deterministic or stochastic parameterizations, and techniques for parameter calibration and quantification of uncertainties in parameterizations.

7.2 Model Reduction & Multifidelity Methods

7.2.1 Model Reduction

Participants: Clémentine Prieur, Katarina Radišić, Romain Verdière, Arthur Vidard, Olivier Zahm.

When numerical models are too costly to evaluate, it is common to address the task of uncertainty quantification using an approximate model, which is faster to compute. However, constructing such a reduced model (or surrogate) is challenging due to the high number of variables involved. It is therefore crucial to identify the input variables that are most important for building the reduced model.

In 68, we propose a nonlinear dimensionality reduction method that leverages gradient evaluations of the model. Similar to prior work 48, the objective is to align the Jacobian of the feature map (a nonlinear function that extracts the key components of the parameters) with the model's gradients. Our main contribution is to use feature maps defined as the first components of a diffeomorphism from ℝ^d to ℝ^d, parameterized by a Coupling Flows neural network. This architecture preserves essential properties of the feature map, notably ensuring that its level sets remain simply connected. In addition, we propose a dimension augmentation trick to increase the approximation power of feature detection. A generalization to vector-valued functions demonstrates that our methodology directly applies to learning autoencoders, showing the versatility of our proposed framework.
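The core idea of aligning a reduction map with the model's gradients can be sketched in its simplest, linear special case (an active-subspace-style computation, not the coupling-flow feature maps of the paper): the dominant eigenvector of the averaged outer product of gradients recovers the direction along which the model actually varies.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w = rng.standard_normal(d)
w /= np.linalg.norm(w)

def grad_f(x):
    # f(x) = sin(w·x): the gradient always points along w, so the
    # "important" subspace is the one-dimensional span of w
    return np.cos(w @ x) * w

X = rng.standard_normal((500, d))        # samples of the input distribution
G = np.array([grad_f(x) for x in X])     # gradient evaluations of the model
H = G.T @ G / len(X)                     # averaged outer product of gradients
eigvals, eigvecs = np.linalg.eigh(H)
leading = eigvecs[:, -1]                 # dominant eigenvector recovers ±w
```

The nonlinear method of 68 replaces this fixed linear projection by the first components of a learned diffeomorphism, so that curved level sets can also be captured.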

Complementarily, in Billaud-Friess et al. 1, we introduced a probabilistic Reduced Basis Method (RBM) for the approximation of a family of parameter-dependent functions, which leverages a probabilistic greedy algorithm with Monte Carlo estimates. This method incorporates an error indicator expressed as the expectation of a parameter-dependent random variable, providing performance guarantees for the weak greedy algorithm with high probability. Designed for noisy, parameter-dependent evaluations, the approach finds practical application in approximating solution manifolds of parameterized PDEs, interpreted probabilistically through the Feynman-Kac formula.

On a related topic, in her PhD work, Katarina Radišić conducted an in-depth investigation into the use of stochastic polynomial chaos expansion, where the coefficients of the polynomial basis are themselves treated as random variables. This approach allows for an efficient representation of external uncertainties while retaining the computational efficiency of a surrogate emulator. Such a framework proves particularly advantageous for problems involving complex systems with significant variability in external conditions. This methodology was adapted to a hydrological and pesticide transfer model, where the primary external uncertainty arose from variability in rainfall. By incorporating this uncertainty directly into the stochastic framework, the model achieved a flexible representation of system behavior under varying environmental conditions. This was in turn used extensively in the context of sensitivity analysis and for robust parameter estimation, see Sections 7.3.1 and 7.3.4.
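When the coefficients are deterministic, a stochastic polynomial chaos expansion reduces to an ordinary polynomial chaos surrogate, whose construction can be sketched in a few lines (a toy fit with probabilists' Hermite polynomials; the model and sample size are illustrative).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)
xi = rng.standard_normal(2000)        # standard Gaussian germ
y = xi**2 + 0.5 * xi                  # model output to be emulated
V = hermevander(xi, 3)                # regression matrix [He_0(xi), ..., He_3(xi)]
coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
# xi^2 = He_2(xi) + He_0(xi), so the exact coefficients are [1, 0.5, 1, 0]
```

In the stochastic variant studied in the thesis, each entry of `coeffs` becomes a random variable driven by the external forcing (e.g. rainfall), so the surrogate carries that variability along with the parametric dependence.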

7.2.2 Multifidelity

Participants: Elise Arnaud, Hélène Hénon, Angélique Saillet, Martin Schreiber, Arthur Vidard, Olivier Zahm, Benjamin Zanger.

Multifidelity methods seek to balance the computational load across a hierarchy of models with varying accuracy and evaluation cost, using lower-fidelity models for inexpensive approximations and higher-fidelity ones only when necessary. By integrating information across fidelities, these methods achieve better performance on complex tasks. The efficiency of such an approach, however, relies on our ability to decide how to allocate resources across the different levels of accuracy, in order to reduce overall computation while maintaining accuracy.

  1. Sampling

    In 43, we adapt the concept of multifidelity to sample from complex densities, such as posterior densities in Bayesian inference. The idea is to introduce a sequence of densities with intermediate complexity, ranging from a simple density to sample from (the prior) to the target density (the posterior). We then learn each density sequentially by preconditioning the n-th step with the transport map that couples a reference measure (e.g., Gaussian) with the (n-1)-th density. This reduces the complexity of each step, significantly improving numerical performance compared to a direct approach. The paper 43 also addresses the choice of intermediate densities (i.e., the multifidelity hierarchy) based on the problem at hand: tempering/annealing for Bayesian inference tasks and diffusion-based models for densities that are accessible only from data.
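The tempering ladder underlying this construction can be illustrated with a plain sequential importance resampling toy (our illustration only; the actual method of 43 additionally learns transport-map preconditioners and is far more efficient): intermediate densities interpolate geometrically between prior and posterior.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_prior(x):                      # wide Gaussian "prior", easy to sample
    return -0.5 * x**2 / 16.0

def log_target(x):                     # bimodal toy "posterior" (unnormalized)
    return np.logaddexp(-0.5 * (x - 3.0)**2, -0.5 * (x + 3.0)**2)

betas = np.linspace(0.0, 1.0, 11)      # tempering ladder from prior to posterior
x = rng.normal(0.0, 4.0, size=5000)    # exact samples from the prior
for b0, b1 in zip(betas[:-1], betas[1:]):
    # incremental importance weights toward the next intermediate density
    logw = (b1 - b0) * (log_target(x) - log_prior(x))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    x = rng.choice(x, size=x.size, p=w, replace=True)   # resample step
# x now concentrates around the two posterior modes at ±3
```

Each rung only asks for a small change of density, which is the "intermediate complexity" idea; the paper's transport maps play the role of rejuvenation moves that this bare sketch omits.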

  2. Variational data assimilation

    Incremental Variational Data Assimilation addresses the non-linear least-square optimization challenges inherent in variational data assimilation by minimizing a sequence of linear least-squares cost functions iteratively. However, due to the high dimensionality and ill-conditioning of the associated problems, the computational burden can become prohibitive. To address these challenges, we propose two approaches leveraging multifidelity methods and machine learning to improve efficiency and reduce computational costs.

    The first approach, explored in the context of Hélène Hénon's PhD research, investigates the application of multifidelity strategies to tackle the linear problem. Two specific strategies have been proposed. The first strategy employs inexact conjugate gradient methods, allowing for convergence results while respecting a controlled inaccuracy budget, which directly reduces computational time. The second strategy uses local multifidelity corrections through trust-region optimization, enabling dynamic adjustments to fidelity levels based on the local problem characteristics, thereby improving the overall efficiency of the iterative process.

    The second approach addresses the challenge of performing repeated inversions of large and potentially ill-conditioned matrices, which is a bottleneck in the optimization process. To improve the convergence rate and alleviate the computational burden, we propose utilizing Deep Neural Networks to construct a preconditioner. This preconditioner is trained on properties derived from the singular value decomposition of the matrices, ensuring it effectively mitigates the effects of ill-conditioning. To further optimize resource utilization, the training dataset is designed to be constructed dynamically during the optimization process, thereby reducing storage requirements. This work is detailed in a preprint 41.
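The effect that the learned preconditioner of 41 targets can be demonstrated with a minimal preconditioned conjugate gradient sketch, substituting a simple Jacobi (diagonal) preconditioner for the neural one (our stand-in; the matrix below is a synthetic ill-conditioned SPD system, not an assimilation operator).

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, maxit=1000):
    """Preconditioned conjugate gradient; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    nb = np.linalg.norm(b)
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * nb:
            return x, it
        z_new = M_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxit

# Synthetic ill-conditioned SPD system: widely spread diagonal scales
rng = np.random.default_rng(3)
n = 100
D = np.logspace(0, 6, n)                  # condition number ~1e6
E = rng.standard_normal((n, n)) * 1e-2
A = np.diag(D) + (E + E.T) / 2.0          # SPD: perturbation is tiny vs. the diagonal
b = rng.standard_normal(n)
x_plain, it_plain = pcg(A, b, lambda r: r)              # no preconditioning
x_prec, it_prec = pcg(A, b, lambda r: r / np.diag(A))   # Jacobi preconditioner
```

Here the diagonal happens to capture the ill-conditioning, so Jacobi already collapses the iteration count; the learned preconditioner of 41 aims at the same effect for matrices whose bad conditioning is not diagonal.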

  3. Exploration of climate scenarios

    Numerical models are important tools for predicting climate change and helping policy-makers to make decisions (e.g. in terms of protecting marine areas, land use or defining fishing quotas). The huge complexity of models and the generally very high cost of numerical simulations make an exhaustive exploration of the parameter space, corresponding to all possible scenarios and all the model's internal options, completely illusory. The idea is therefore to use statistical tools for the design of experiments. These tools enable us to select the parameter combinations that provide the most information on a given quantity of interest (QoI - e.g. an ecosystem health indicator) calculated from the simulation carried out. The design of experiments also has the advantage of being able to be built adaptively, in order to take into account the results of pre-existing simulations, performed with various models, under various scenarios. In Angélique Saillet's PhD, we aim at developing methodologies to address these issues, exploiting theoretical tools such as sequential design of experiments, enrichment strategies, and multi-fidelity Gaussian process regression. This work is carried out in the context of a marine biogeochemistry model, in collaboration with M. Baklouti (MIO Marseille). A sensitivity analysis has been performed on a 1D (vertical) version of the model, in order to identify the parameters that are most influential for certain quantities of interest (QoI). This enables the construction of meta-models for these QoI, thanks to the use of Gaussian processes. The next step will be to investigate how the use of additional “low-fidelity” simulations performed with one or several degraded versions of the model can improve these metamodels.
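A sequential design of experiments with a Gaussian process meta-model can be sketched in its simplest form (a 1D toy with a variance-based enrichment criterion, standing in for the more elaborate strategies of the thesis): each new simulation is placed where the current meta-model is most uncertain.

```python
import numpy as np

def gp_posterior(Xtr, ytr, Xte, ell=0.3, noise=1e-6):
    """Posterior mean/variance of a zero-mean GP with unit-variance RBF kernel."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :])**2 / ell**2)
    K = k(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = k(Xte, Xtr)
    mean = Ks @ np.linalg.solve(K, ytr)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

f = lambda x: np.sin(6 * x)           # toy "quantity of interest"
X = np.array([0.1, 0.9])              # initial design
y = f(X)
grid = np.linspace(0.0, 1.0, 201)
for _ in range(8):                    # sequential enrichment loop
    _, var = gp_posterior(X, y, grid)
    xn = grid[np.argmax(var)]         # run the "simulator" where the GP is least sure
    X = np.append(X, xn)
    y = np.append(y, f(xn))
mean, var = gp_posterior(X, y, grid)
```

With ten well-placed evaluations the meta-model already tracks the QoI closely; the multifidelity extension would let the cheap degraded model absorb most of these evaluations.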

7.3 Dealing with Uncertainties

7.3.1 Sensitivity Analysis

Participants: Alexis Anagostakis, Elise Arnaud, Qiao Chen, Clémentine Prieur, Katarina Radišić, Arthur Vidard, Ri Wang, Olivier Zahm.

Sensitivity analysis is a crucial step in uncertainty quantification, as it helps identify which input variables most influence the variability in a model's output. This understanding guides model simplification, parameter prioritization, and robust decision-making. Our research results this year can be organized around three main axes:

  1. GSA for non-classical problem settings

    The classical framework for GSA assumes that the model is a deterministic function of an input random vector with independent components. However, real-world applications often deviate from this assumption.

    • In 2, we tackled the challenge of estimating total effects under constrained feature settings. Our approach addressed feature correlations and non-Cartesian domains through experimental designs and computational techniques.
    • In 42 we introduced a Quantile-Oriented Sensitivity Analysis (QOSA) framework, leveraging random forests with pinball loss to address biases and improve accuracy in high-dimensional input spaces.
    • Traditional GSA often neglects natural variability in forcing conditions, limiting result validity. In 37, we treat Sobol’ indices as random variables influenced by forcing variability and estimate them efficiently using stochastic polynomial chaos expansions. Applying this to a hydrological model, we found parameter rankings vary with forcing, and proposed an aggregated sensitivity index to enhance GSA robustness and decision reliability.
    • In 26, we introduced novel estimators for Sobol’ indices that require only a single input/output sample. Numerical results demonstrate that these estimators enhance computational efficiency while maintaining robustness and asymptotic efficiency.
  2. Gradient-based methods

    When model gradients are available, gradient-based sensitivity methods offer a convenient and efficient alternative to traditional approaches. Over the past year, we have proposed several improvements to such gradient-based sensitivity analyses:

    • Coupled input-output dimension reduction for vector-valued models 23. Unlike conventional approaches that address input or output reduction separately, our coupled approach selects the most relevant components for tasks like sensor placement and sensitivity analysis. By optimizing gradient-based bounds, we efficiently identify key inputs and outputs, avoiding costly combinatorial optimization.
    • Certified coordinate selection for Bayesian inverse problems with heavy-tailed (Laplace) priors, with applications in imaging 27, 54. We reduce the parameter space to a small number of relevant coordinates contributing most to the prior-to-posterior update, identified via a gradient-based diagnostic. For linear forward models with Gaussian likelihoods, tractable methods estimate the diagnostic before solving the problem, enabling efficient posterior inference. Applications include 1D signal deblurring and high-dimensional 2D super-resolution, with improved sampling using specialized MCMC methods.
  3. GSA for epidemic models

    In 5, we applied global sensitivity analysis to a SARS-CoV-2 epidemic model based on continuous-time Markov chains. This work quantified uncertainty arising from both parameter values and intrinsic randomness, while exploring how different simulation algorithms impact sensitivity results. Building on these efforts, 30 extended the Sellke construction to non-Markovian models, enabling sensitivity analyses for more complex epidemic frameworks like the SEI1I2RS model. This generalisation expanded sensitivity methods to include both stochastic and epistemic uncertainties across a wide range of compartmental models.
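Several of the estimators discussed above build on the classical pick-freeze identity for first-order Sobol' indices; for a toy linear model it can be checked in a few lines (an elementary textbook sketch, not the single-sample estimators of 26).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
f = lambda x1, x2: x1 + 2.0 * x2     # toy model: Var(Y) = 1 + 4 = 5

x1, x2 = rng.standard_normal((2, n))
x2b = rng.standard_normal(n)         # fresh, independent copy of x2
y = f(x1, x2)
y1 = f(x1, x2b)                      # "pick-freeze": x1 kept, x2 resampled
# First-order Sobol' index of x1: S1 = Cov(Y, Y1) / Var(Y) = 1/5
s1 = np.cov(y, y1)[0, 1] / np.var(y)
```

The single-sample estimators of 26 avoid the second model run `y1`, which is exactly where the computational gain comes from.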

7.3.2 Bayesian inversion

Participants: Adama Barry, Clément Duhamel, Clémentine Prieur.

Bayesian inverse problems become challenging when computational models are expensive, as the repeated evaluations required for inference (e.g., in Markov Chain Monte Carlo) become infeasible. Additionally, complex priors, such as those with heavy tails or multimodal distributions, complicate sampling and convergence, making efficient exploration of the posterior space significantly harder.

In 27, we study Bayesian inverse problems with Gaussian likelihood, linear forward models, and priors represented as Gaussian mixtures. The posterior also forms a Gaussian mixture, and we derive a closed-form expression for the posterior mixing density. To efficiently sample from this posterior, we propose a two-step method using dimension-reduced approximations for the mixing density, which maintain accuracy while ensuring low correlation in the generated samples.

In 19, we propose a hybrid approach, using a Gaussian process emulator to optimize physical experimental designs and sequentially refining numerical experiments to improve calibration. New criteria for experiment selection, inspired by the Sequential Uncertainty Reduction (SUR) paradigm, demonstrated their effectiveness in performance comparisons on test cases, including the calibration of a harmonic oscillator.

For practical applications, the definition of the set of feasible model input values is generally not trivial. During Clément Duhamel's PhD, we have focused on the estimation of a feasible set defined by an inequality constraint on the output of a time-consuming black-box simulator. More precisely, we have proposed in 52 a new criterion for Bayesian active learning for the estimation of such a feasible set, for a black-box simulator with scalar outputs. In an ongoing work (to be submitted soon), we extend this work to the framework of vector-valued black-box emulators.
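The flavour of such active learning can be conveyed with a minimal sketch using a Gaussian process surrogate and the classical "straddle" criterion as a stand-in for the criterion of 52 (the constraint function and all settings below are illustrative): points are added where the surrogate is both uncertain and close to the constraint boundary.

```python
import numpy as np

def gp_post(Xtr, ytr, Xte, ell=0.2, jit=1e-6):
    """Posterior mean/variance of a zero-mean GP with unit-variance RBF kernel."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :])**2 / ell**2)
    K = k(Xtr, Xtr) + jit * np.eye(len(Xtr))
    Ks = k(Xte, Xtr)
    mean = Ks @ np.linalg.solve(K, ytr)
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.maximum(var, 0.0)

g = lambda x: np.cos(4 * x)               # feasible set: {x in [0,1] : g(x) <= 0}
grid = np.linspace(0.0, 1.0, 301)
X = np.array([0.0, 0.5, 1.0])             # initial design
y = g(X)
for _ in range(10):
    m, v = gp_post(X, y, grid)
    acq = 1.96 * np.sqrt(v) - np.abs(m)   # "straddle": uncertain AND near g = 0
    xn = grid[np.argmax(acq)]
    X = np.append(X, xn)
    y = np.append(y, g(xn))
m, _ = gp_post(X, y, grid)
feasible_pred = m <= 0.0                  # estimated feasible set
```

After a handful of evaluations the misclassified region shrinks to a thin band around the constraint boundary, which is the behaviour the SUR-type criteria of the thesis formalize and optimize.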

7.3.3 Sampling algorithms

Participants: Elise Arnaud, Qiao Chen, Rafael Flock, Martin Schreiber, Olivier Zahm, Benjamin Zanger.

Sampling from high-dimensional distributions that are multi-modal and/or heavy-tailed poses significant challenges across various fields. This is particularly true in large-scale Bayesian inference with sparsity-inducing priors, but also, for example, in molecular dynamics, where the Boltzmann distribution of a molecular system is highly multimodal, each mode corresponding to a distinct physical conformation. Conventional sampling methods such as Markov Chain Monte Carlo (MCMC) face difficulties in accurately exploring the entire landscape (modes, tails, etc.) of the target distribution, resulting in poor numerical performance. We have made diverse contributions which aim at addressing these challenges, focusing on a range of applications including imaging, epidemiology, and also on providing proofs of concept for certain advanced pioneering algorithms. These contributions involve the development of

  • dimension reduction techniques 6, 27, 23, 54;
  • transport map methods 43;
  • preconditioners for stochastic differential equations (SDEs) to enhance the convergence properties of sampling algorithms 25.
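The role of a preconditioner in an SDE-based sampler can be shown on a deliberately simple case (our illustration, not the scheme of 25): for an anisotropic Gaussian target, a diagonal preconditioner in the overdamped Langevin dynamics lets a single step size handle both coordinate scales.

```python
import numpy as np

rng = np.random.default_rng(5)
sigma2 = np.array([1.0, 100.0])         # target N(0, diag(sigma2)): very anisotropic
grad_U = lambda x: x / sigma2           # gradient of the negative log-density
A = sigma2                              # diagonal preconditioner (ideal in this toy)
dt = 0.1
x = np.zeros(2)
samples = []
for i in range(20000):
    # preconditioned Euler-Maruyama step: dX = -A grad U dt + sqrt(2 A) dW
    x = x - dt * A * grad_U(x) + np.sqrt(2.0 * dt * A) * rng.standard_normal(2)
    if i >= 2000:                       # discard burn-in
        samples.append(x.copy())
S = np.asarray(samples)
emp_var = S.var(axis=0)                 # approaches sigma2 (up to O(dt) bias)
```

Without the preconditioner, the step size would be dictated by the stiff coordinate (variance 1) and mixing along the slow coordinate (variance 100) would be two orders of magnitude slower.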

7.3.4 Robust inversion

Participants: Elise Arnaud, Exaucé Luweh Adjim Ngarti, Katarina Radišić, Arthur Vidard.

Estimating key parameters in numerical models is a crucial aspect in numerical simulation, particularly when these parameters are not directly observable. Traditional estimation methods infer parameters indirectly from their effects on observable variables, introducing inherent uncertainties. In addition to the parameters to be estimated, numerical models often include uncertain and uncontrollable nuisance parameters, which can further complicate the estimation process.

In Exaucé Ngarti’s PhD research, we investigate extending variational inference to account for the presence of nuisance parameters. Ignoring the stochastic nature of these nuisance parameters can lead to suboptimal parameter estimation due to error compensation effects. To address this, we model nuisance parameters as random variables, redefining the numerical model itself as a random variable. This problem is formulated within a Bayesian framework, where the goal is to estimate the posterior distribution by minimizing the Kullback-Leibler divergence over a family of parameterized distributions. To increase the flexibility of this approach, we integrate generative neural networks, such as normalizing flows, to enhance the expressiveness of the posterior approximation. These methods are applied to the shallow water model to estimate the friction coefficient, a critical parameter in coastal modeling that characterizes the seabed's roughness. In this context, the nuisance parameters represent boundary forcing effects driven by tidal frequencies and amplitudes.
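The variational principle can be sketched on a toy problem (a Gaussian target and a Gaussian variational family, not the normalizing-flow posteriors of the thesis; all names are illustrative): minimize the Kullback-Leibler divergence KL(q‖p) by stochastic gradient descent on reparameterized samples x = μ + σε.

```python
import numpy as np

def fit_gaussian_vi(mu_p=2.0, sigma_p=0.5, n_iter=3000, batch=256, lr=0.01, seed=0):
    """Fit q = N(mu, sigma^2) to p = N(mu_p, sigma_p^2) by minimizing KL(q||p).

    Reparameterization: x = mu + sigma * eps with eps ~ N(0, 1), so the Monte
    Carlo loss  mean((x - mu_p)^2) / (2 sigma_p^2) - log(sigma)  (constants
    dropped) is differentiable in (mu, log sigma); gradients are written by hand.
    """
    rng = np.random.default_rng(seed)
    mu, rho = 0.0, 0.0                      # rho = log sigma
    for _ in range(n_iter):
        sigma = np.exp(rho)
        eps = rng.standard_normal(batch)
        x = mu + sigma * eps
        grad_mu = np.mean(x - mu_p) / sigma_p ** 2
        grad_sigma = np.mean((x - mu_p) * eps) / sigma_p ** 2 - 1.0 / sigma
        mu -= lr * grad_mu
        rho -= lr * grad_sigma * sigma      # chain rule through sigma = exp(rho)
    return mu, np.exp(rho)
```

At the stationary point the recovered (μ, σ) match the target's mean and standard deviation; replacing the Gaussian family with a normalizing flow increases expressiveness while keeping the same reparameterized-gradient machinery.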

In Katarina Radišić’s PhD research, we explore an alternative approach based on optimal control. Here, the parameter estimation problem is addressed by minimising an objective function. When nuisance parameters are present, the objective function becomes a random variable, adding a layer of complexity. To efficiently handle this uncertainty, we employ stochastic polynomial chaos expansion as a surrogate for the "random" objective function. This technique enables effective exploration of the uncertain parameter space and facilitates the computation of robust parameter estimates. The methodology has been successfully applied to a hydrology and pesticide transfer model, demonstrating its feasibility. A publication detailing this work is currently in preparation.
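The surrogate idea can be sketched on a toy objective (a hypothetical J, not the pesticide-transfer model): expand ξ ↦ J(θ, ξ) in probabilists' Hermite polynomials of a standard-normal nuisance ξ; since E[He_k(ξ)] = 0 for k ≥ 1, the degree-0 coefficient of the expansion is exactly the expected objective, giving a cheap robust criterion to minimize over θ.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def mean_objective_pce(J, theta, xi_nodes, deg=2):
    """Least-squares PCE of xi -> J(theta, xi); coefficient 0 estimates E[J]."""
    V = hermevander(xi_nodes, deg)                        # He_0..He_deg at the nodes
    coef, *_ = np.linalg.lstsq(V, J(theta, xi_nodes), rcond=None)
    return coef[0]                                        # E[He_k] = 0 for k >= 1

def robust_estimate(J, thetas, xi_nodes, deg=2):
    """Pick the parameter minimizing the PCE estimate of the mean objective."""
    means = [mean_objective_pce(J, t, xi_nodes, deg) for t in thetas]
    return thetas[int(np.argmin(means))]
```

For J(θ, ξ) = (θ − 1 − 0.5ξ)², the exact mean is E_ξ[J] = (θ − 1)² + 0.25 with robust minimizer θ = 1, which the degree-2 expansion recovers exactly since J is itself quadratic in ξ.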

7.4 High performance computing

7.4.1 Dynamic compute-resource utilization

Participants: Martin Schreiber.

Applications on supercomputers are still executed following a traditional static resource allocation pattern: computing resources are allocated at the start of a job and released only at the end of its runtime. This decades-old practice forgoes the benefits that dynamic resource allocation over an application's runtime would bring: higher utilization of computing resources, ad-hoc allocation of AI accelerator cards, lower energy consumption, faster response for interactive jobs, improved data locality and I/O over the full runtime, support for urgent computing without necessarily killing running jobs, etc. Various attempts have been made under different terminologies, such as “evolving” jobs (application-driven dynamic resource changes) and “malleability” (system-driven dynamic resource changes); we see a hybridization of the two as required to reach optimal results.

We are currently investigating ways to extend the MPI parallel programming model (the de-facto standard in HPC), as well as models beyond MPI, with such interfaces. As part of this, we participate in the regular MPI Sessions working group meetings, where part of the effort to investigate such interfaces takes place. Building on previous work 53, 58, 57, we wrote the paper “Design Principles of Dynamic Resource Management for High-Performance Parallel Programming Models” 29, explaining the basic requirements and rationale of what a generic dynamic resource API would need to support. Together with other EuroHPC projects, we also worked on a state-of-the-art paper, “Malleability in Modern HPC Systems: Current Experiences, Challenges, and Future Opportunities”1.

External collaborators: Dominik Huber (TUM, DataMOVE), Pierre-François Dutot (DataMOVE), Olivier Richard (DataMOVE), Howard Pritchard (LANL), Martin Schulz (TUM)

7.4.2 Hardware-aware numerics

Participants: Hugo Brunie, Laurent Debreu, Julien Remy, Martin Schreiber.

We made further progress toward a Domain-Specific Language (DSL) for the NEMO and CROCO ocean simulation models, aiming to close the increasing gap between numerics and HPC through a separation of concerns. Such a DSL allows applied mathematicians to express their model equations in a high-level language and HPC experts to develop tools that automatically transform them into highly performant code for the target programming model and corresponding architecture. For our DSL, we work with PSyclone, a code generation and transformation system for Fortran-based developments, and we gratefully acknowledge various developments (partly unpublished) by the PSyclone developers that accelerated our progress.

The Poseidon project has been launched with the vision of pushing the performance of the above-mentioned ocean models to the HPC limit through in-depth optimizations that cannot be achieved with currently existing compilers, and of simplifying the development of highly performant code for model developers.

The underlying idea is to uplift the fluid dynamics equation solver to a DSL-like intermediate representation (IR). This IR is based on a hypergraph with nodes representing computations and (hyper)edges representing the data flow. The strong formalism of the IR forms the foundation for performing the required HPC optimizations, and all transformations are done exclusively in this representation. Poseidon supports writing code back to the original ocean model; hence, it does not require switching to a different development line, which would entail disruptive changes.

Based on the extracted barotropic solver of the CROCO model, our first HPC result is a fully automatic kernel fusion. This fusion already led to a substantial reduction in memory accesses, yielding speedups of up to 2.9.
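The principle of such a fusion can be illustrated in a few lines (a toy elementwise example, not the generated CROCO code): two kernels that each stream over the arrays are merged into a single loop, eliminating the temporary array and one full pass over memory.

```python
import numpy as np

def unfused(a, b, c):
    """Two kernels: the intermediate 'tmp' is written to and re-read from memory."""
    tmp = np.empty_like(a)
    for i in range(a.size):              # kernel 1
        tmp[i] = a[i] + b[i]
    out = np.empty_like(a)
    for i in range(a.size):              # kernel 2
        out[i] = tmp[i] * c[i]
    return out

def fused(a, b, c):
    """One kernel: same arithmetic, no temporary array, one pass over memory."""
    out = np.empty_like(a)
    for i in range(a.size):
        out[i] = (a[i] + b[i]) * c[i]
    return out
```

Both variants compute identical results; the fused one touches each input exactly once and allocates no intermediate, which is where the memory-access savings come from on real stencil kernels.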

Further work has been conducted in collaboration with Anna Mittermair and Martin Schulz (Technical University of Munich) on optimizing communication for distributed-memory systems. Former work (Anna Mittermair's 2023 Master's thesis) showed that obsolete communications can be detected, but that automatically inferring the required communication (the foundation for optimizing it) with the existing means of PSyclone is hardly feasible in a reliable way. Based on Poseidon and its very formal hypergraph IR, we can already automatically inject nodes into the hypergraph to perform MPI communication automatically. Current work in progress focuses on automatically inferring optimal communication, realizing a backend to write the code back to an ocean model, and further enhancements.

We also investigate data assimilation, which requires computing adjoints of time steps with automatic differentiation. Doing this by hand is a non-trivial process and requires maintenance by an expert every time the numerics are updated, while automatic tools produce functionally correct differentiated code that lacks various optimization opportunities. Based on previous experience with PSyclone for automatically generating adjoint code as a proof of concept for simple ODE examples (psyclone-autodiff.readthedocs.io), we can now apply a first prototype of the tangent computation based on the Poseidon hypergraph IR. The adjoint is currently work in progress.
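The tangent (forward-mode) computation can be sketched with dual numbers (a generic illustration, not the PSyclone/Poseidon machinery; the logistic time step is a made-up example): every value carries its derivative through the arithmetic of a time step, so the sensitivity of the final state to the initial state comes out alongside the state itself.

```python
class Dual:
    """Forward-mode AD: carries a value and its derivative through arithmetic."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __rsub__(self, o):
        return self._lift(o).__sub__(self)
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def step(u, dt=0.1, a=2.0):
    """One explicit Euler step of the logistic ODE u' = a u (1 - u)."""
    return u + dt * a * u * (1 - u)

def tangent(u0, n_steps=20):
    """Value u_N and tangent d u_N / d u_0, seeding the derivative with 1."""
    u = Dual(u0, 1.0)
    for _ in range(n_steps):
        u = step(u)
    return u.val, u.der
```

The tangent obtained this way matches a finite-difference estimate; an adjoint computes the same sensitivity by propagating derivatives backward through the steps instead.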

Finally, explaining HPC optimization (e.g., kernel-fusion) to those unfamiliar with standard HPC optimizations is often challenging. To also reach out to such non-HPC experts, e.g., as part of open days, we are currently developing a web browser visualization in collaboration with TUM to explore and dynamically change the hypergraph (e.g., by applying hypergraph operations such as kernel fusion), including visualization of its performance change.

External collaborators: Anna Mittermair (TUM), Rupert Ford (STFC), Andrew Porter (STFC), Sergi Siso (STFC), Jörg Heinrichs (ABOM)

7.4.3 New time-integration methods

Participants: Martin Schreiber.

We investigate new time-integration methods by considering HPC requirements, targeting a better wallclock time vs.  error ratio. These numerical methods include exponential, semi-Lagrangian, and parallel-in-time integration methods.

We explored and extended semi-Lagrangian exponential methods, which integrate stiff linear terms with exponential time integration and handle nonlinear advection using a semi-Lagrangian approach. These techniques are relevant for partial differential equations found in atmospheric models. A truncation error analysis reveals that existing methods are limited to first-order accuracy due to the linear term discretization. To address this, we developed a second-order scheme. Stability comparisons between various Eulerian and semi-Lagrangian exponential methods and a widely used semi-Lagrangian semi-implicit method were conducted. Numerical tests on the shallow-water equations confirm the proposed method's improved stability and accuracy, albeit with higher computational costs. However, its stability and cost are comparable to the semi-implicit method, making it a competitive option for atmospheric modeling 2. This work forms an important foundation for future work on parallel-in-time methods.
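The first-order building block can be sketched on a scalar toy problem (not the shallow-water implementation of 2): exponential Euler integrates the stiff linear term exactly through the phi_1 function and freezes the nonlinearity over the step, remaining stable where explicit Euler blows up.

```python
import numpy as np

def phi1(z):
    """phi_1(z) = (e^z - 1) / z, with the z -> 0 limit handled."""
    return np.expm1(z) / z if z != 0 else 1.0

def exp_euler(u0, lam, N, h, steps):
    """Exponential Euler for u' = lam*u + N(u): exact linear part, frozen nonlinearity."""
    u = u0
    for _ in range(steps):
        u = np.exp(lam * h) * u + h * phi1(lam * h) * N(u)
    return u

def fwd_euler(u0, lam, N, h, steps):
    """Explicit Euler, unstable once |1 + lam*h| > 1."""
    u = u0
    for _ in range(steps):
        u = u + h * (lam * u + N(u))
    return u
```

For a constant nonlinearity N(u) = c the exponential step reproduces the exact flow of u' = λu + c, while explicit Euler diverges for stiff λ at the same step size: this stability gap is what makes the exponential treatment of the linear terms attractive in atmospheric models.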

Spectral Deferred Correction (SDC) methods show promise for atmospheric modeling in parallel computing environments. This study focuses on applying ETDSDC, a novel SDC family with strong stability and efficiency, to weather simulation. Using the SWEET academic HPC solver, ETDSDC is tested on Dahlquist’s equation and the Shallow Water Equations with the Williamson and Galewsky cases. Our findings show that ETDSDC demonstrates superior convergence rates, including super-convergence, outperforming methods like LRK, ETDRK, and IMEXSDC, particularly in higher-order formulations. However, stability challenges arise with high local gradients and spectral discontinuities. Additionally, the Lawson Runge-Kutta (LRK) scheme is unexpectedly the most runtime-efficient one 3.
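The SDC principle behind such schemes can be sketched for Dahlquist's equation u' = λu (a plain explicit SDC, not the ETDSDC scheme of 3): a forward-Euler predictor on subnodes is corrected by sweeps that re-integrate the polynomial interpolant of the previous iterate with spectral quadrature, converging to the collocation solution.

```python
import numpy as np

def integration_matrix(t):
    """S[m, j] = integral over [t_m, t_{m+1}] of the j-th Lagrange basis on nodes t."""
    M = len(t) - 1
    S = np.zeros((M, M + 1))
    for j in range(M + 1):
        others = np.delete(t, j)
        poly = np.poly(others) / np.prod(t[j] - others)   # Lagrange basis ell_j
        P = np.polyint(poly)
        S[:, j] = np.polyval(P, t[1:]) - np.polyval(P, t[:-1])
    return S

def sdc_dahlquist(u0, lam, h, n_nodes=5, n_sweeps=12):
    """Explicit SDC for u' = lam*u on [0, h]; returns the end-of-step value."""
    t = np.linspace(0.0, h, n_nodes)
    S = integration_matrix(t)
    f = lambda u: lam * u
    # predictor: forward Euler on the subnodes
    u = np.empty(n_nodes)
    u[0] = u0
    for m in range(n_nodes - 1):
        u[m + 1] = u[m] + (t[m + 1] - t[m]) * f(u[m])
    # correction sweeps: Euler on the defect plus spectral quadrature of the old iterate
    for _ in range(n_sweeps):
        unew = np.empty_like(u)
        unew[0] = u0
        for m in range(n_nodes - 1):
            dt = t[m + 1] - t[m]
            unew[m + 1] = unew[m] + dt * (f(unew[m]) - f(u[m])) + S[m] @ f(u)
        u = unew
    return u[-1]
```

Each sweep raises the formal order by one (up to the quadrature order), so a handful of sweeps turns the first-order predictor into a high-order approximation of exp(λh); ETDSDC replaces the Euler substeps with exponential integration of the stiff linear part.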

External collaborators: Pedro S. Peixoto (USP), João C. Steinstraesser (USP), Elizaveta Boriskova (TUM)

8 Bilateral contracts and grants with industry

8.1 Bilateral contracts with industry

  • Consortium CIROQUO – Consortium Industrie Recherche pour l’Optimisation et la QUantification d’incertitude pour les données Onéreuses – gathers academic and technological partners to work on problems related to the exploitation of numerical simulators. This Consortium, created in January 2021, is the continuation of the projects DICE, ReDICE and OQUAIDO, which respectively covered the periods 2006-2009, 2011-2015 and 2015-2020. CIROQUO will be continued from 2025 as CIROQUO 2, with new industrial partners such as EDF and Michelin ciroquo.ec-lyon.fr.
  • A 5-year contract (started in January 2020) 19CP07 with the Oceanographic and Hydrographic Service of the French Navy (SHOM) on the topic « Analyse numérique pour la réconciliation en espace et en temps des discrétisations des échanges air-mer et leur paramétrisation. Application à des cas simplifiés et réalistes couplés.» (PI: F. Lemarié).

8.2 Bilateral grants with industry

  • Funding of Exaucé Luweh Adjim Ngarti’s PhD with a CIFRE contract with Eviden. PhD subject: Deep learning for inverse problems in geophysics.
  • Funding of Clément Duhamel’s PhD (sept. 2020-nov. 2024) by IFP Energies Nouvelles (IFPEN) within the framework of IFPEN and Inria strategic partnership. PhD subject: Gaussian processes-based excursion set estimation for scalar or vector black box functions. Application to the calibration of a numerical wind turbine simulator.
  • Funding of Lorenzo Calzolari’s PhD (nov. 2024-…) by IFP Energies Nouvelles (IFPEN). PhD subject: Active learning with functional inputs: application to wind turbine reliability design.

9 Partnerships and cooperations

9.1 International initiatives

9.1.1 Associate Teams in the framework of an Inria International Lab or in the framework of an Inria International Program

Crocodiles (team.inria.fr/crocodiles/):

Optimization of PDE solvers is one of the big challenges in High-Performance Computing (HPC). This requires not only skills and a deeper understanding of HPC from all the hardware and software layers but also research on software solutions that are sustainable and accepted by the developers and users of these solvers.

This associate team brings together members of ANL and the Inria Airsea team who are both currently working on the HPC modernization of models under the aforementioned constraints. This allows us to share, on the one hand, our experience and plans with the model developments. On the other hand, we can strongly benefit from the experience of all the current developments, which share many similarities.

We identified parts of mutual interests such as sharing the modularization concepts of software developed at ANL and the code analysis (based on Psyclone and also Poseidon) developed by the Airsea team. Future work will be investigating the code analysis with Psyclone on their Flash-X code.

9.1.2 STIC/MATH/CLIMAT AmSud projects

SMILE

Participants: Clémentine Prieur, Alexis Anagnostakis.

  • Title:
    Statistical modeling, nonparametric inference and model selection for complex data
  • Program:
    MATH-AmSud
  • Duration:
    January 1, 2024 –  December 31, 2025
  • Local supervisor:
    Clementine Prieur
  • Partners:
    • Meza Becerra (Chile)
    • Jose R. Leon (Uruguay)
  • Inria contact:
    Clementine Prieur
  • Summary:
    Statistical modelling for complex data is an important framework for analyzing data in fields such as ecology, meteorology, health, and telecommunications. Such models are used to describe population dynamics, animal movement, longitudinal data, spatial-temporal analysis, or Poisson processes. In this proposal, we aim to propose novel estimation procedures for this kind of complex data, considering restricted data (for instance, data on a compact domain or longitudinal compositional data), spatially weighted regression, and model selection with weakly dependent observations and non-homogeneous Poisson processes. We will use both parametric and nonparametric strategies.

9.1.3 Participation in other International Programs

A Comprehensive Software Stack for Dynamic Resources Management

Participants: Sergio Iserte, Dominik Huber, Martin Schreiber, Pierre-François Dutot, Olivier Richard, Antonio J. Peña.

  • Title:
    A Comprehensive Software Stack for Dynamic Resources Management
  • Partner Institution(s):
    BSC, Inria
  • Date/Duration:
    2024-
  • Additional info/keywords:
    dynamic resource management

9.2 International research visitors

9.2.1 Visits of international scientists

Jose R. Leon
  • Status
    (researcher)
  • Institution of origin:
    Universidad de la Republica, Montevideo
  • Country:
    Uruguay
  • Dates:
    January 2024 and December 2024
  • Context of the visit:
    SMILE project
  • Mobility program/type of mobility:
    research stay
Valentina Schüller
  • Status
    PhD
  • Institution of origin:
    Lund University
  • Country:
    Sweden
  • Dates:
    December 16-20
  • Context of the visit:
    collaboration on ocean - sea ice - atmosphere coupling
  • Mobility program/type of mobility:
    research stay

9.3 European initiatives

9.3.1 Other european programs/initiatives

  • Program: CMEMS
    • Project acronym:
      ENMASSE
    • Project title:
      Enhancing Nemo for Marine Applications and Services
    • Coordinator:
      F. Lemarié
    • Duration:
      Dec. 2024 - Dec. 2027.
    • Other partners:
      CMCC (Italy), Sorbonne Université, MetOffice (UK), National Oceanography Center (UK), STFC Hartree Centre (UK), Datlas (FR).
    • Abstract:
      The Enhancing NEMO for Marine Applications and Services (ENMASSE) project represents a pivotal initiative aimed at advancing the capabilities of the NEMO (Nucleus for European Modelling of the Ocean) modelling platform. This enhancement is designed to address specific scientific and operational requirements set by the Copernicus Marine Service (CMS) program for the development and delivery of more precise and sophisticated ocean modelling products. These products are intended to support a wide range of applications, including marine safety, climate prediction, and ecosystem monitoring, ultimately contributing to informed decision-making and sustainable ocean management.
  • Program: C3S2
    • Project acronym:
      ERGO2
    • Project title:
      Advancing ocean data assimilation methodology for climate applications
    • Duration:
      August 2022 - July 2025
    • Coordinator:
      Arthur Vidard
    • Other partners:
      Cerfacs (France), CNR (Italy)
    • Abstract:
      The scope of this contract is to improve ocean data assimilation capabilities at ECMWF, used both in the initialization of seasonal forecasts and in the generation of coupled Earth System reanalyses. In particular, it focuses on i) improving ensemble capabilities in NEMO and NEMOVAR and using their information to represent background error statistics; ii) extending NEMOVAR capabilities to allow for multiple resolutions in multi-incremental 3D-Var; iii) making better use of ocean surface observations. It also involves performing scout experiments and providing relevant diagnostics to evaluate the benefit of the proposed developments.

9.4 National initiatives

9.4.1 ANR

  • A 4-year contract: ANR MOTIONS (Multiscale Oceanic simulaTIONS based on mesh refinement strategies with local adaptation of dynamics and physics).
    • PI:
      F. Lemarié
    • Duration:
      Jan. 2024 - Dec. 2027.
    • Other partners:
      • Laboratoire d’Aérologie, UMR 5560 (LAERO),
      • Service Hydrographique et Océanographique de la Marine (SHOM),
      • Institut Camille Jordan, UMR 5208 (ICJ),
      • Laboratoire d’Etudes en Géophysique et Océanographie Spatiales, UMR 5566 (LEGOS).
    • Abstract:
      The MOTIONS project aims at delivering robust and efficient numerical algorithms allowing an innovative multiscale modeling strategy based on block-structured mesh refinement with local adaptation of model equations, numerics and physics in selected areas of interest. The target application to evaluate numerical developments is the simulation of important fine-scale non-hydrostatic processes and their feedback to larger scales within the Mediterranean / North-East Atlantic dynamical continuum.
  • A 4-year contract: ANR PLUME (Observation and Parameterization of Oceanic Convection).
    • PI:
      B. Deremble (CNRS), Inria PI: F. Lemarié.
    • Objectives:
      1. build a consistent database of convective events (both in the lab and with a numerical model) in order to calibrate free parameters of parameterizations of deep convection;
      2. characterize the structure of thermal plumes in a well-defined parameter space characterizing the rotating/non-rotating state and forced vs. free convection;
      3. use a data-driven approach to formulate a model of convection without any preconceived bias about the mathematical formulation.
  • Several team members also participate in the PEPR programs NUMPEX and MathsVives.

9.4.2 Inria Challenge

  • Sea Uncertainty Representation and Forecast (SURF),
  • Coordinator: Airsea (A. Vidard),
  • Inria partners: Ange, Cardamom, Fluminance, Lemon, Mingus, Defi
  • External partners: BRGM, Ifremer, SHOM

9.4.3 Other Initiatives

  • E. Blayo is co-advising the PhD thesis of Valentin Bellemin-Laponnaz with IGE Lab, in the framework of the NASA-CNES working group on the SWOT satellite.
  • A 3-year contract (started in Sept 2021) funded by the Institut des Mathématiques pour la Planète Terre (IMPT) on the topic "Modélisation cohérente des échelles sous-maille pour les modèles océaniques de climat" (PI: F. Lemarié).

10 Dissemination

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

The Airsea team organized the 3rd SDC days in Grenoble: sdc2024.sciencesconf.org/

The age of exascale computers brought new challenges to computational science, emphasizing the efficient use of modern high-performance computing systems. The shift to massive parallelism posed significant difficulties in designing numerical algorithms. This demand was particularly pressing in fields requiring simulations of systems evolving from initial conditions. Spectral Deferred Correction (SDC) methods, an iterative solver framework, provided higher-order temporal integrators for ODEs and PDEs with various options for parallelization in time. The meeting aimed to promote exchange on advanced time integrators, covering analytical and numerical analyses, as well as software and implementation aspects.

General chair, scientific chair
  • Clémentine Prieur is the general chair of the next SAMO conference (to be held in Grenoble in 2025).
Member of the organizing committees

10.1.2 Scientific events: selection

Member of the conference program committees
  • F. Lemarié was part of the organizational and scientific committees of the joint CLIVAR Ocean Model Development Panel and COMMODORE Workshop on Ocean Model Development, Data-driven Parameterizations, and Machine Learning in Ocean Models of the Earth System which took place at NCAR Mesa Lab in Boulder, Colorado (9-12 September 2024) [OMDP-COMMODORE workshop]. The workshop was sponsored by CLIVAR-OMDP, US-CLIVAR, and NSF-NCAR.
  • Clémentine Prieur was a member of the scientific committee of the 2024 edition of the annual conference of the CNRS RT Quantification d’Incertitudes (organized by Inria Sophia-Antipolis).
  • Clémentine Prieur was a member of the scientific committee of the 2024 edition of the JDS (Journées de Statistique organized by SFdS).
  • Martin Schreiber, International Conference on Computational Science (ICCS)
  • Martin Schreiber, Co-Chair of Proceedings of Euro-Par Conference (EUROPAR)
  • Martin Schreiber, EuroMPI
  • Martin Schreiber, SDC Days
  • Martin Schreiber, International Workshop on OpenCL (IWOCL)

10.1.3 Journal

Member of the editorial boards
  • F. Lemarié is associate editor of the Journal of Advances in Modeling Earth Systems (JAMES)
  • M. Schreiber participated in editing the Proceedings of Euro-Par 2024
  • Clémentine Prieur was associate editor for Computational and Applied Mathematics journal (2017-2024).
  • Clémentine Prieur is associate editor for SIAM/ASA Journal of Uncertainty Quantification journal.
  • Clémentine Prieur is a member of the reading committee of Annales Mathématiques Blaise Pascal.
Reviewer - reviewing activities
  • Permanent members of the Airsea team are regular reviewers for numerous international journals.

10.1.4 Invited talks

  • E. Blayo was the invited speaker of the first Plugin seminar at Inria Grenoble, March 19, 2024.
  • M. Schreiber gave an invited talk at the University of Geneva: Breaking through the barrier of time integration for climate and weather simulations, Dec. 2024
  • M. Schreiber gave an invited talk at the HPC Seminar @ Bordeaux: Perspectives of weather and climate simulations on next-generation HPC systems, Bordeaux (virtual), May 2024
  • M. Schreiber gave an invited talk at the NHR PerfLab Seminar: Breaking through the barrier of time integration for climate and weather simulations, Erlangen, Mar. 2024

10.1.5 Leadership within the scientific community

  • F. Lemarié has been a member of the international CLIVAR Ocean Model Development Panel since Jan. 2024. CLIVAR (Climate and Ocean: Variability, Predictability, and Change) is a core project of the World Climate Research Programme (WCRP).
  • F. Lemarié is a member of the scientific board of the GDR "Défis théoriques pour les sciences du climat" (since Dec. 2024)
  • C. Prieur is the current president of the SAMO group, an international research group on sensitivity analysis.

10.1.6 Scientific expertise

  • E. Blayo is a member of the scientific committee of IMPT (Institut Mathématique pour la Planète Terre)
  • E. Blayo is a member of the scientific committee of the Labex Persyval-3
  • F. Lemarié is the co-leader with Sybille Téchené (CNRS) and Mike Bell (UK Met Office) of the NEMO (www.nemo-ocean.eu) Working Group on numerical kernel development.
  • F. Lemarié is a member of the CROCO (www.croco-ocean.org) scientific committee in charge of the "numerical methods" topic.
  • Martin Schreiber is a member of the MPI Forum (representing the Université Grenoble Alpes) and, in particular, is active in the MPI Sessions working group.
  • Martin Schreiber is a member of the OpenMP ARB (representing Inria).
  • Clémentine Prieur is advisor to the scientific council of IFPEN.

10.1.7 Research administration

  • E. Blayo is a deputy director of the Jean Kuntzmann Lab.
  • Clémentine Prieur has been Vice President of the French Society of Statistics (SFdS) since July 2024.
  • Clémentine Prieur is currently a member of the Executive and Scientific Committees of the RT Quantification d’Incertitudes (RT2172 funded by INSMI @ CNRS) which she chaired during the period 2010-2017.
  • Clémentine Prieur is currently a member of the Scientific Committee of the RT Terre et Energies (RT2166 funded by INSMI @ CNRS).
  • Clémentine Prieur is local correspondent in Grenoble for the Mathematical Society of France (SMF).
  • Clémentine Prieur was a member of the MSTIC pole council of UGA (Nov. 2020 – Oct. 2024).
  • Clémentine Prieur is responsible for the applied math specialty for doctoral school MSTII edmstii.univ-grenoble-alpes.fr
  • E. Arnaud is in charge of the parity and diversity commission at the Jean Kuntzmann Lab
  • F. Lemarié is a member of the local Inria ”comité des emplois scientifiques” (CES), responsible for evaluating delegation requests and awarding Inria postdoc fellowships (since Sept. 2019).
  • F. Lemarié is the Inria local scientific correspondent for national calls for projects (since July 2020): he works in coordination with Inria contract managers to identify national project calls and the teams that may be suited to respond to them, providing support to (1) analyze the objectives of project calls to assess their relevance and (2) track trends and innovations in the relevant sector.

10.2 Teaching - Supervision - Juries

10.2.1 Teaching

  • Licence: E. Arnaud, Mathematics for engineers, 50h, L2, UGA, France
  • Licence: E. Arnaud, Statistics, 20h, L2, UGA, France
  • Licence: E. Blayo, analysis and algebra, 107h, L1, University Grenoble Alpes, France
  • Licence: C. Kazantsev, Mathématiques approfondies pour l'ingénieur, 36h, L2, UGA, France
  • Licence: C. Kazantsev, Mathématiques pour les sciences de l'ingénieur, 36h, L2, UGA, France
  • Licence: Martin Schreiber, Advanced Analysis & Algebra, L1, 69h, UGA, France
  • Master: E. Arnaud, Statistics, 30h, M1, UGA, France
  • Master: E. Arnaud, Supervision of student in apprenticeship, 30h, M2, UGA, France
  • Master: E. Blayo, Partial Differential Equations, 37h, M1, University Grenoble Alpes, France
  • Master: E. Blayo, Optimal control of PDEs, 17h, M2, University Grenoble Alpes, France
  • Master: Martin Schreiber, High-Performance Computing, M1, 31.1h, UGA, France
  • Master: Martin Schreiber, Parallel Algorithms and Programming, M1, 11.25h, UGA, France
  • Master: Martin Schreiber, Object oriented programming with C++, M1, 18h, UGA, France
  • Master: Martin Schreiber, Partial differential equations, M1, 34.5h, UGA, France
  • E-learning: E. Arnaud is in charge of the pedagogic platform math@uga: implementation of a collaborative moodle platform to share pedagogical resources within teachers and towards students.
  • E. Blayo is in charge of the Ecole des Mathématiques Appliquées: organization and coordination of pedagogical and administrative aspects related to teaching for the applied maths department.

10.2.2 Supervision

  • PhD in progress: Pierre Lozano, Coupling hydrostatic and nonhydrostatic ocean circulation models. October 2022, E. Blayo and L. Debreu
  • PhD in progress: Manolis Perrot, Consistent subgrid scale modeling for oceanic climate models. October 2021, E. Blayo and F. Lemarié
  • PhD in progress: Gabriel Derrida, Design of flexible and numerically-sound generalised vertical coordinates with vertical ALE (V-ALE) algorithm for operational ocean forecasting. October 2023, L. Debreu and F. Lemarié.
  • PhD in progress: Angélique Saillet, Design of experiments, climate scenarios and ocean simulation, October 2023, Éric Blayo and Élise Arnaud.
  • PhD in progress: Exaucé Luweh Adjim Ngarti, Deep learning for inverse problems in geophysics, Université Grenoble-Alpes, April 2023, E. Arnaud, L. Nicoletti (Atos) and A. Vidard.
  • PhD in progress: Julien Remy, HPC automatic differentiation for ocean models, since October 2024, M. Schreiber and A. Vidard
  • PhD in progress: Benjamin Zanger, Compositional surrogates for reduced order modeling, since September 2022, M. Schreiber, O. Zahm
  • PhD in progress: Dominik Huber, “Dynamic Resource Management with MPI Sessions and PMIx” (preliminary title), PhD candidate at the Technical University of Munich (TUM), since 2022, M. Schulz (TUM), M. Schreiber
  • PhD in progress: Keerthi Gaddameedi, “Dynamic Resource Management for Parallel-In-Time Simulations” (preliminary title), PhD candidate at the Technical University of Munich (TUM), since 2022, H.-J. Bungartz (TUM), T. Neckel (TUM), M. Schreiber
  • PhD in progress: Adama Barry, Plans d’expériences pour la calibration et la validation d’un simulateur numérique, January 2022, F. Bachoc (Institut de Mathématiques de Toulouse), C. Prieur, promoted by M. Munoz Zuniga and S. Bouquet.
  • PhD in progress: Ri Wang, Apprentissage statistique pour l’analyse de sensibilité globale avec entrées dépendantes, October 2021, V. Maume-Deschamps (Université Lyon 1), C. Prieur.
  • PhD in progress: Romain Verdière, Nonlinear dimension reduction for uncertainty quantification problems, September 2022, O. Zahm
  • PhD in progress: Lorenzo Calzolari, Active learning with functional inputs: application to wind turbine reliability design, November 2024, C. Helbert (Ecole Centrale Lyon), C. Prieur, promoted by M. Munoz Zuniga, D. Sinoquet (IFPEN)
  • PhD in progress: Katarina Radisic, Prise en compte d'incertitudes externes dans l'estimation de paramètres d'un modèle de transfert d'eau et de pesticides à l'échelle du bassin versant, Université Grenoble-Alpes December 2021, C. Lauvernet (Inrae) and A. Vidard.
  • PhD in progress: Hélène Hénon, Assimilation de données variationnelles multi-fidélité pour les prévisions océaniques , Université Grenoble-Alpes Octobre 2023, A. Vidard.
  • PhD: Qiao Chen, Low-dimensional structure of ocean data assimilation problems via gradient information, October 2021, defended Dec. 6, 2024, É. Arnaud and O. Zahm
  • PhD: Robin Vaudry, Analyse de Sensibilité, quantification d’incerTitudes et calibRAtion pour des modèles épidémioloGiques structurés, October 2021, defended Oct. 24, 2024, D. Georges (Grenoble INP), C. Prieur 15
  • PhD: Clément Duhamel, Inversion robuste d’un code de calcul prenant en entrées des données de nature fonctionnelle. Application à la conception d’éoliennes, October 2020, defended Nov. 21, 2024, C. Helbert (Ecole Centrale Lyon), C. Prieur, promoted by M. Munoz Zuniga, D. Sinoquet (IFPEN) 14
  • PostDoc: Alexis Anagnostakis, Nov. 2022 – Nov. 2024, IRGA then Inria funding, Numerical schemes for sensitivity analysis of hypocoercive systems. C. Prieur.
  • Internship (Master 1), Julien Remy “Automatic differentiation with PSyclone”, supervision by M. Schreiber, A. Vidard and M. Brémond
  • Internship: Charley Gay, Isaora Bacquet, Design of science outreach materials on artificial intelligence (myths, foundations and ethical issues).
  • Internship: Maëlys Moro (2024), ENSTA 2nd year internship, PINN models for optimal control strategies of structured epidemiological models, C. Prieur and D. Georges (GIPSA-Lab).
  • Internship: M. Aharmouch (50% co-supervision with C. Helbert, Ecole Centrale de Lyon), M2 internship, Active learning for Gaussian processes with functional inputs: application to wind turbine reliability design, C. Helbert, M. Munoz Zuniga, C. Prieur and D. Sinoquet (funded by IFPEN).

10.2.3 Juries

  • E. Blayo:
    • June 24, 2024: PhD thesis of Olivier Narinc, Univ. Grenoble Alpes (president)
  • Clémentine Prieur:
    • Dec. 12, 2024: PhD thesis of Noé Fellmann, Université Lyon 1 (examiner)
    • Nov. 12, 2024: PhD thesis of Jérémy Briant, CERFACS and Université de Toulouse (examiner)
    • Nov. 21, 2024: PhD thesis of Clément Duhamel, UGA (thesis supervisor)
    • Oct. 24, 2024: PhD thesis of Robin Vaudry, UGA (thesis supervisor)
    • Oct. 3, 2024: PhD thesis of Paul Lartaud, Institut Polytechnique de Paris (examiner)
    • Sept. 20, 2024: PhD thesis of Thi-Nguyen-Khoa Nguyen, ENS Paris-Saclay (examiner)
    • July 10, 2024: PhD thesis of Madeleine Kubasch, Institut Polytechnique de Paris (examiner)
    • Apr. 26, 2024: PhD thesis of Oumar Balde, Université de Toulouse (reviewer)
    • Jan. 31, 2024: PhD thesis of Abubakar Haruna, Université Grenoble Alpes (president)
    • Jan. 12, 2024: PhD thesis of Eliott Lumet, Université de Toulouse (examiner)
  • E. Arnaud:
    • member of Inria PhD recruitment committee CORDI-S
    • member of the thesis monitoring committee (CSI) of Sophie Mauran, PhD in progress at IRIT, Toulouse
  • A.Vidard:
    • Oct. 4, 2024: PhD thesis of Zoé Lloret, Univ. Paris-Saclay, "A new model of greenhouse gas transport in the global atmosphere adapted to the evolution of high-performance computing resources" (reviewer).
    • Oct. 8, 2024: PhD thesis of Mathis Peyron, Toulouse INP, "Latent-space data assimilation using deep learning techniques" (reviewer).
    • Dec. 11, 2024: PhD thesis of Louis Lamérand, Univ. Nice, "Data assimilation for the reduction and calibration of turbulent transport models for magnetic confinement fusion" (reviewer).
  • F. Lemarié:
    • Since Nov. 2024: Member of the CE56 scientific evaluation committee of the ANR (French National Research Agency)
    • Dec. 10, 2024: PhD thesis of Adrien Garinet, Univ. Toulouse III - Paul Sabatier, "Highlighting and reducing numerical mixing in a numerical model including the internal tide: a case study on Southeast Asia"
    • PhD thesis of Sofia Flora, Università degli Studi di Trieste, "Superstatistical analysis of HF Radar sea surface currents in the Gulf of Trieste and their idealized deterministic-stochastic modelling" (reviewer)
  • M.Schreiber:
    • PhD thesis of Fernando Valdés Ravelo, “On explicit exponential integrators in the solution of the elastic equations for the wave propagation”, University of Sao Paulo, 2024
    • PhD thesis of Lucas Meyer, “Online Deep Learning for Numerical Simulations at Scale”, University Grenoble Alpes, 2024
    • PhD thesis of Gaston Irrmann, “High-performance computing and optimization of multi-scale oceanographic simulations”, Sorbonne Université, 2024
    • PhD thesis of Pascal Jungblut, “Task Scheduling on FPGA-based Accelerators with Limited Partial Reconfiguration”, Ludwig-Maximilians-Universität München, 2024
    • PhD thesis of Amir Raoofy, “Time Series Mining on High Performance Computing Systems”, Technical University of Munich, 2024

10.3 Popularization

10.3.1 Specific official responsibilities in science outreach structures

  • Ch. Kazantsev and E. Blayo are, respectively, president and vice-president of the association La Grange des Maths.

10.3.2 Productions (articles, videos, podcasts, serious games, ...)

  • Ch. Kazantsev and E. Blayo are strongly involved in the creation and dissemination of pedagogical kits of mathematical activities designed for primary and secondary schools, as well as an escape game. These actions are carried out within the association La Grange des Maths.
  • E. Blayo and A. Vidard have initiated a series of pedagogical videos on numerical ocean modeling, in collaboration with L’Esprit Sorcier TV. The teaser was launched in October, on the occasion of La Fête de la Science. The five episodes will be published online in the first half of 2025.

10.3.3 Participation in Live events

  • E. Blayo gave several outreach talks, in particular to high school students and to more general audiences.
  • Participation in "La Fête de la Science": E. Arnaud, E. Blayo, H. Brunie, E. Kazantsev, A. Saillet.
  • E. Arnaud, "Artificial intelligence: myths and reality", seminar given at Lycée Villard Bonnot, June 2024.
  • Clémentine Prieur regularly takes part in meetings with high school students to promote science among young women.

10.3.4 Others science outreach relevant activities

  • E. Blayo was the “regional ambassador” of the Fête de la Science for the Auvergne-Rhône-Alpes region. In this capacity, he took part in several official events, received media coverage on social networks, and gave interviews to the written press, radio and TV.

11 Scientific production

11.1 Publications of the year

International journals

Invited conferences

  • 9. Katarina Radišić, Claire Lauvernet and Arthur Vidard. Robust calibration of a water and pesticide transfer model at the catchment scale: consideration of forcing uncertainty on the sensitivity and calibration of a catchment-scale pesticide transfer model. MEXICO 2024 - Rencontres annuelles du réseau Mexico, Villeurbanne, France, 2024. HAL

International peer-reviewed conferences

  • 10. Valentin Breaz and Richard Wilkinson. Randomized maximum likelihood via high-dimensional Bayesian optimization. ICASSP 2024 - IEEE International Conference on Acoustics, Speech and Signal Processing, Seoul, South Korea, IEEE, March 2024, 5300-5304. HAL, DOI
  • 11. Simon Clement, Florian Lemarié and Eric Blayo. Semi-discrete analysis of a simplified air-sea coupling problem with nonlinear coupling conditions. DD 2022 - 27th International Domain Decomposition Conference, Lecture Notes in Computational Science and Engineering 149, Prague, Czech Republic, Springer Nature Switzerland, 2024, 141-148. HAL, DOI
  • 12. Antoine-Alexis Nasser, Nicolas C. Jourdain, Pierre Mathiot and Gurvan Madec. Benefits of the Brinkman volume penalisation method for the ice-shelf melt rates produced by z-coordinate ocean models. EGU24 - European Geosciences Union General Assembly, Vienna, Austria, 2024. HAL, DOI

Conferences without proceedings

  • 13. Katarina Radišić, Claire Lauvernet and Arthur Vidard. Robust calibration of a hydrological pesticide transfer model by metamodeling. JMSC 2024 - 5th edition of the Journées de Modélisation des Surfaces Continentales, Strasbourg, France, 2024, 1-49. HAL

Doctoral dissertations and habilitation theses

  • 14. Clément Duhamel. Gaussian processes-based excursion set estimation for scalar or vector black box functions: application to the calibration of a numerical wind turbine simulator. PhD thesis, Université Grenoble Alpes, November 2024. HAL
  • 15. Robin Vaudry. Modeling, analysis and simulation of structured complex systems in epidemiology. PhD thesis, Université Grenoble Alpes, October 2024. HAL

Reports & preprints

Other scientific publications

Software

11.2 Cited publications

  • 47. Francis Auclair, Laurent Debreu, Emilie Duval, Margot Hilt, Patrick Marchesiello, Eric Blayo, Franck Dumas and Yves Morel. Theory and analysis of acoustic-gravity waves in a free-surface compressible and stratified ocean. Ocean Modelling 168, December 2021, 1-20. HAL, DOI
  • 48. Daniele Bigoni, Youssef Marzouk, Clémentine Prieur and Olivier Zahm. Nonlinear dimension reduction for surrogate modeling using gradient information. Information and Inference, May 2022. HAL, DOI
  • 49. Sébastien Da Veiga, Fabrice Gamboa, Agnès Lagnoux, Thierry Klein and Clémentine Prieur. Efficient estimation of Sobol' indices of any order from a single input/output sample. Working paper or preprint, 2024. HAL
  • 50. Laurent Debreu, Nicholas K.-R. Kevlahan and Patrick Marchesiello. Brinkman volume penalization for bathymetry in three-dimensional ocean models. Ocean Modelling 145, January 2020, 1-13. HAL, DOI
  • 51. Laurent Debreu, Nicholas Kevlahan and Patrick Marchesiello. Improved Gulf Stream separation through Brinkman penalization. Ocean Modelling 179, November 2022, 102121. HAL, DOI
  • 52. Clément Duhamel, Céline Helbert, Miguel Munoz Zuniga, Clémentine Prieur and Delphine Sinoquet. A SUR version of the Bichon criterion for excursion set estimation. Statistics and Computing 33(2), April 2023, 41. HAL, DOI
  • 53. Jan Fecht, Martin Schreiber, Martin Schulz, Howard Pritchard and Daniel J. Holmes. An emulation layer for dynamic resources with MPI Sessions. HPCMALL 2022 - Malleability Techniques Applications in High-Performance Computing, Hamburg, Germany, June 2022. HAL
  • 54. Rafael Flock, Yiqiu Dong, Felipe Uribe and Olivier Zahm. Certified coordinate selection for high-dimensional Bayesian inversion with Laplace prior. Statistics and Computing 34(4), October 2023, 134. HAL, DOI
  • 55. Markus Gross, Hui Wan, Philip J. Rasch, Peter M. Caldwell, David L. Williamson, Daniel Klocke, Christiane Jablonowski, Diana R. Thatcher, Nigel Wood, Mike Cullen, Bob Beare, Martin Willett, Florian Lemarié, Eric Blayo, Sylvie Malardel, Piet Termonia, Almut Gassmann, Peter H. Lauritzen, Hans Johansen, Colin M. Zarzycki, Koichi Sakaguchi and Ruby Leung. Physics-dynamics coupling in weather, climate, and Earth system models: challenges and recent progress. Monthly Weather Review 146(11), November 2018, 3505-3544. HAL, DOI
  • 56. Margaux Hilt, Laurent Roblou, Cyril Nguyen, Patrick Marchesiello, Florian Lemarié, Swen Jullien, Franck Dumas, Laurent Debreu, Xavier Capet, Lucie Bordois, Rachid Benshila and Francis Auclair. Numerical modeling of hydraulic control, solitary waves and primary instabilities in the Strait of Gibraltar. Ocean Modelling 155, July 2020, 101642. HAL, DOI
  • 57. Dominik Huber, Martin Schreiber and Martin Schulz. A case study on PMIx-usage for dynamic resource management. Lecture Notes in Computer Science 13999, Hamburg, Germany, Springer Nature Switzerland, May 2023, 42-55. HAL, DOI
  • 58. Dominik Huber, Maximilian Streubel, Isaías Comprés, Martin Schulz, Martin Schreiber and Howard Pritchard. Towards dynamic resource management with MPI Sessions and PMIx. EuroMPI/USA'22 - Proceedings of the 29th European MPI Users' Group Meeting, Chattanooga, United States, ACM, September 2022, 57-67. HAL, DOI
  • 59. Peter H. Lauritzen, N. K.-R. Kevlahan, T. Toniazzo, C. Eldred, Thomas Dubos, A. Gassmann, V. E. Larson, C. Jablonowski, O. Guba, B. Shipway, B. E. Harrop, Florian Lemarié, R. Tailleux, A. R. Herrington, W. Large, P. J. Rasch, A. S. Donahue, H. Wan, A. Conley and J. T. Bacmeister. Reconciling and improving formulations for thermodynamics and conservation principles in Earth System Models (ESMs). Journal of Advances in Modeling Earth Systems, August 2022, 1-99. HAL, DOI
  • 60. Florian Lemarié, Guillaume Samson, Jean-Luc Redelsperger, Hervé Giordani, Théo Brivoal and Gurvan Madec. A simplified atmospheric boundary layer model for an improved representation of air-sea interactions in eddying oceanic models: implementation and first evaluation in NEMO (4.0). Geoscientific Model Development 14(1), January 2021, 543-572. HAL, DOI
  • 61. S. Leroux, J.-M. Brankart, A. Albert, L. Brodeau, J.-M. Molines, Q. Jamet, J. Le Sommer, T. Penduff and P. Brasseur. Ensemble quantification of short-term predictability of the ocean dynamics at a kilometric-scale resolution: a Western Mediterranean test case. Ocean Science 18(6), 2022, 1619-1644. URL: https://os.copernicus.org/articles/18/1619/2022/. DOI
  • 62. Patrick Marchesiello, Julien Chauchat, Hassan Shafiei, Rafael Almar, Rachid Benshila, Franck Dumas and Laurent Debreu. 3D wave-resolving simulation of sandbar migration. Ocean Modelling 180, December 2022, 102127. HAL, DOI
  • 63. Olivier Marti, Sébastien Nguyen, Pascale Braconnot, Sophie Valcke, Florian Lemarié and Eric Blayo. A Schwarz iterative method to evaluate ocean-atmosphere coupling schemes: implementation and diagnostics in IPSL-CM6-SW-VLR. Geoscientific Model Development Discussions 14(5), May 2021, 2959-2975. HAL, DOI
  • 64. Olivier Marti, Sébastien Nguyen, Pascale Braconnot, Sophie Valcke, Florian Lemarié and Eric Blayo. Diagnosing the ocean-atmosphere coupling schemes by using a mathematically consistent Schwarz iterative method. Research activities in Earth system modelling, Working Group on Numerical Experimentation, Report No. 51, WCRP Report No. 4/2021, WMO, Geneva, July 2021. HAL
  • 65. Antoine-Alexis Nasser. Advancing the representation of flows along topography in z-coordinate ocean models. PhD thesis, Sorbonne Université, September 2023. HAL
  • 66. Antoine-Alexis Nasser, Gurvan Madec, Casimir de Lavergne, Laurent Debreu, Florian Lemarié and Eric Blayo. Sliding or stumbling on the staircase: numerics of ocean circulation along piecewise-constant coastlines. Journal of Advances in Modeling Earth Systems 15(5), May 2023, e2022MS003594. HAL, DOI
  • 67. Sophie Thery, Charles Pelletier, Florian Lemarié and Eric Blayo. Analysis of Schwarz waveform relaxation for the coupled Ekman boundary layer problem with continuously variable coefficients. Numerical Algorithms 89, March 2022, 1145-1181. HAL, DOI
  • 68. Romain Verdière, Clémentine Prieur and Olivier Zahm. Diffeomorphism-based feature learning using Poincaré inequalities on augmented input space. Working paper or preprint, December 2023. HAL