Among all aspects of geosciences, we mainly focus on gravity-driven flows, which arise in many situations such as floods, tsunamis, avalanches and sediment transport.

There exists a strong demand from scientists and engineers in fluid mechanics for models and numerical tools able to simulate not only the water depth and the velocity field but also the distribution and evolution of external quantities such as pollutants or biological species, as well as the interaction between flows and structures (seashores, erosion processes, ...). The key point of the research carried out within ANGE is to answer this demand through the development of efficient, robust and validated models and numerical tools.

Due to the variety of applications with a wide range of spatial scales, reduced-size models like the shallow water equations are generally required. From the modelling point of view, the main issue is to describe the behaviour of the flow with a reduced-size model taking into account several physical processes such as non-hydrostatic terms, biological species evolution, topography and structure interactions within the flow. The mathematical analysis of the resulting models no longer falls within the field of hyperbolic equations, and new strategies have to be proposed. Moreover, the efficient numerical resolution of reduced-size models requires particular attention, due to the different time scales of the processes and in order to recover physical properties such as positivity, conservativity, entropy dissipation and equilibria.

The models can remain subject to uncertainties that originate from incomplete description of the physical processes and from uncertain parameters. Further development of the models may rely on the assimilation of observational data and the uncertainty quantification of the resulting analyses or forecasts.

The research activities carried out within the ANGE team strongly couple the development of methodological tools with applications to real-life problems and the transfer of numerical codes. The main purpose is to obtain new models adapted to the physical phenomena at stake, to identify the main properties that reflect the physical meaning of the models (uniqueness, conservativity, entropy dissipation, ...), to propose effective numerical methods to approximate their solutions in complex configurations (multi-dimensional, unstructured meshes, well-balanced, ...) and to assess the results against data, with the purpose of potentially correcting the models.

The difficulties arising in gravity driven flow studies are threefold.

Hazardous flows are complex physical phenomena that can hardly be represented by shallow water type systems of partial differential equations (PDEs). In this domain, the research program is devoted to the derivation and analysis of reduced complexity models compared to the Navier-Stokes equations, but relaxing the shallow water assumptions. The main purpose is then to obtain models well-adapted to the physical phenomena at stake.

Even if the resulting models do not strictly belong to the family of hyperbolic systems, they exhibit hyperbolic features: the analysis and discretisation techniques we intend to develop have connections with those used for hyperbolic conservation laws. It is worth noticing that the need for robust and efficient numerical procedures is reinforced by the smallness of dissipative effects in geophysical models which therefore generate singular solutions and instabilities.

On the one hand, the derivation of the Saint-Venant system from the Navier-Stokes equations is based on two approximations (the so-called shallow water assumptions), namely the vertical homogeneity of the horizontal velocity field and the hydrostatic pressure distribution.

As a consequence the objective is to get rid of these two assumptions, one after the other, in order to obtain models accurately approximating the incompressible Euler or Navier-Stokes equations.

On the other hand, many applications require the coupling with non-hydrodynamic equations, as in the case of micro-algae production or erosion processes. These new equations comprise non-hyperbolic features and a special analysis is needed.

As for the first shallow water assumption, multi-layer systems were proposed to describe the flow as a superposition of Saint-Venant type systems [24, 27, 28]. Even if this approach has provided interesting results, the layers are considered as separate, non-miscible fluids, which implies strong limitations. That is why we proposed a slightly different approach [25, 26], based on a Galerkin-type decomposition of all variables along the vertical axis, leading to more accurate results both for the model and its discretisation.

A kinetic representation of our multilayer model allows us to derive robust numerical schemes endowed with crucial properties such as consistency, conservativity, positivity and preservation of equilibria. It is one of the major achievements of the team, but it needs to be analysed and extended in several directions, namely:

The hydrostatic assumption consists in neglecting the vertical acceleration of the fluid. It is considered valid for a large class of geophysical flows but is restrictive in various situations where the dispersive effects (like wave propagation) cannot be neglected. For instance, when a wave reaches the coast, bathymetry variations give a vertical acceleration to the fluid that strongly modifies the wave characteristics and especially its height.
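In formulas, and as a standard statement rather than anything specific to the team's models, neglecting the vertical acceleration reduces the vertical momentum equation to the hydrostatic balance, which can be integrated from the free surface:

```latex
\frac{\partial p}{\partial z} = -\rho g
\qquad\Longrightarrow\qquad
p(x,z,t) = p_a + \rho g \left( \eta(x,t) - z \right),
```

where $\eta$ denotes the free surface elevation and $p_a$ the atmospheric pressure. Dispersive effects appear precisely when the neglected vertical acceleration becomes comparable to $g$.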

Performing an asymptotic expansion of the Navier-Stokes equations (with respect to the aspect ratio for shallow water flows), we obtain the Saint-Venant system at leading order. Going one step further leads to a vertically averaged version of the Euler/Navier-Stokes equations involving some non-hydrostatic terms. This model has several advantages:

The coupling of hydrodynamic equations with other equations in order to model interactions between complex systems represents an important part of the team research. More precisely, three multi-physics systems are investigated. More details about the industrial impact of these studies are presented in the following section.

In environmental applications, the most accurate numerical models remain subject to uncertainties that originate from their parameters and shortcomings in their physical formulations. It is often desirable to quantify the resulting uncertainties in a model forecast. The propagation of the uncertainties may require the generation of ensembles of simulations that ideally sample from the probability density function of the forecast variables. Classical approaches rely on multiple models and on Monte Carlo simulations. The applied perturbations need to be calibrated for the ensemble of simulations to properly sample the uncertainties. Calibrations involve ensemble scores that compare the consistency between the ensemble simulations and the observational data. The computational requirements are so high that designing fast surrogate models or metamodels is often required.
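The Monte Carlo approach can be illustrated schematically as follows; this is a toy sketch, not one of the team's codes, and both the decay model and the lognormal parameter distribution are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(42)

def toy_forecast(friction, n_steps=50, h0=2.0):
    """Stand-in for a forecast model: water depth decaying with friction
    (model and parameter are hypothetical, chosen for illustration only)."""
    h = h0
    for _ in range(n_steps):
        h -= 0.01 * friction * h
    return h

# sample the uncertain friction coefficient and run one member per draw
members = rng.lognormal(mean=0.0, sigma=0.3, size=200)
ensemble = np.array([toy_forecast(k) for k in members])

# ensemble statistics approximate the forecast uncertainty
forecast_mean, forecast_spread = ensemble.mean(), ensemble.std()
```

In realistic settings each member is a full model run, which is exactly why surrogate models become necessary when hundreds of members are needed.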

In order to reduce the uncertainties, the fixed or mobile observations of various origins and accuracies can be merged with the simulation results. The uncertainties in the observations and their representativeness also need to be quantified in the process. The assimilation strategy can be formulated in terms of state estimation or parameter estimation (also called inverse modelling). Different algorithms are employed for static and dynamic models, for analyses and forecasts. A challenging question lies in the optimization of the observational network for the assimilation to be the most efficient at a given observational cost.
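For orientation, the simplest building block of state estimation is the scalar best linear unbiased (Kalman-type) analysis, sketched below under the simplifying assumption that the state is observed directly (the numbers are arbitrary):

```python
# scalar data assimilation step (BLUE / Kalman analysis, observation operator = identity)
xb, B = 2.0, 1.0     # background (model) state and its error variance
y, R = 3.0, 0.5      # observation and its error variance

K = B / (B + R)              # gain: weights the innovation y - xb
xa = xb + K * (y - xb)       # analysis state, between background and observation
Pa = (1.0 - K) * B           # analysis error variance, smaller than both B and R
```

The analysis variance `Pa` being smaller than both `B` and `R` is the quantitative sense in which merging observations with simulations reduces uncertainty.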

The main challenge in the study of the non-hydrostatic model is to design a robust and efficient numerical scheme endowed with properties such as positivity, treatment of wet/dry interfaces and consistency. It must be noticed that even if the non-hydrostatic model looks like an extension of the Saint-Venant system, most of the known techniques used in the hydrostatic case are no longer efficient, as we recover strong difficulties encountered in incompressible fluid mechanics due to the extra pressure term. These difficulties are reinforced by the absence of viscous/dissipative terms.

In the quest for a better balance between accuracy and efficiency,
a strategy consists in the adaptation of models.
Indeed, the systems of partial differential equations we consider result from a hierarchy of simplifying
assumptions. However, some of these hypotheses may turn out to be irrelevant locally. The adaptation of
models thus consists in determining areas where a simplified model
(e.g. shallow water type) is valid and
where it is not. In the latter case, we may go back to the “parent” model
(e.g. Euler) in the corresponding area.
This requires knowing how to handle the coupling between the aforementioned models from both theoretical
and numerical points of view. In particular, the numerical treatment of transmission conditions is a key point. It requires the estimation of characteristic values (Riemann invariants), which have to be determined according to the regime (torrential or fluvial).

Hydrodynamic models comprise advection and source terms. The conservation of the balance between source terms, typically viscosity and friction, has a significant impact since the overall flow is generally a perturbation around an equilibrium. The design of numerical schemes able to preserve such balances is a challenge from both theoretical and industrial points of view. The concept of Asymptotic-Preserving (AP) methods is of great interest in order to overcome these issues.
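A minimal illustration of the well-balanced property, assuming the classical hydrostatic reconstruction of Audusse et al. with a Rusanov flux (a textbook sketch, not the team's kinetic scheme): a one-dimensional lake at rest over a bump must be preserved exactly by the scheme.

```python
import numpy as np

g = 9.81

def rusanov(ul, ur):
    """Rusanov flux for the 1D shallow water system, state u = (h, hu)."""
    def phys(u):
        h, q = u
        v = q / h if h > 1e-12 else 0.0
        return np.array([q, q * v + 0.5 * g * h * h])
    def speed(u):
        h, q = u
        return abs(q / h) + np.sqrt(g * h) if h > 1e-12 else 0.0
    a = max(speed(ul), speed(ur))
    return 0.5 * (phys(ul) + phys(ur)) - 0.5 * a * (ur - ul)

def wb_step(h, q, z, dx, dt):
    """One explicit step with hydrostatic reconstruction (Audusse et al.)."""
    # copy ghost cells so boundary cells see an interface on each side
    h = np.concatenate([h[:1], h, h[-1:]])
    q = np.concatenate([q[:1], q, q[-1:]])
    z = np.concatenate([z[:1], z, z[-1:]])
    hn, qn = h.copy(), q.copy()
    for i in range(len(h) - 1):            # interface between cells i and i+1
        zs = max(z[i], z[i + 1])
        hm = max(0.0, h[i] + z[i] - zs)    # reconstructed left depth
        hp = max(0.0, h[i + 1] + z[i + 1] - zs)
        vl = q[i] / h[i] if h[i] > 1e-12 else 0.0
        vr = q[i + 1] / h[i + 1] if h[i + 1] > 1e-12 else 0.0
        f = rusanov(np.array([hm, hm * vl]), np.array([hp, hp * vr]))
        # pressure corrections that balance the topography source term
        fl = f + np.array([0.0, 0.5 * g * (h[i] ** 2 - hm ** 2)])
        fr = f + np.array([0.0, 0.5 * g * (h[i + 1] ** 2 - hp ** 2)])
        hn[i] -= dt / dx * fl[0]; qn[i] -= dt / dx * fl[1]
        hn[i + 1] += dt / dx * fr[0]; qn[i + 1] += dt / dx * fr[1]
    return hn[1:-1], qn[1:-1]

# lake at rest over a bump: free surface h + z constant, zero discharge
x = np.linspace(0.0, 1.0, 50)
z = 0.2 * np.exp(-100.0 * (x - 0.5) ** 2)
h0 = 1.0 - z
q0 = np.zeros_like(x)
h1, q1 = wb_step(h0, q0, z, dx=x[1] - x[0], dt=1e-3)
```

With a naive centred discretisation of the topography term, the same initial state would generate spurious waves of the order of the truncation error; here the reconstruction cancels the pressure and source contributions exactly at rest.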

Another difficulty occurs when a term, typically related to the pressure,
becomes very large compared to the order of magnitude of the velocity.
In this regime, namely the so-called low Froude (shallow water)
or low Mach (Euler) regime, the gap between the speed of the gravity waves
and the physical velocity makes classical numerical schemes inefficient:
firstly because of the truncation error, which is inversely proportional to the small parameter,
and secondly because of the time step, which is governed by the speed of the fastest gravity wave.
AP methods made a breakthrough in the numerical resolution
of asymptotic perturbations of partial differential equations concerning the first point.
The second one can be fixed using partially implicit schemes.
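The time-step restriction can be made concrete with a back-of-the-envelope computation: for an explicit scheme, the stable time step scales like dx/(|u| + sqrt(g h)), so it collapses as the gravity wave speed grows relative to the material velocity. The values below are purely illustrative.

```python
import numpy as np

g, dx, cfl = 9.81, 100.0, 0.5      # gravity, mesh size (m), Courant number

def dt_explicit(h, u):
    """Largest stable explicit time step, limited by the gravity wave speed."""
    return cfl * dx / (abs(u) + np.sqrt(g * h))

# same material velocity u = 1 m/s, increasing depth: the Froude number
# u / sqrt(g h) drops and the admissible time step collapses with it
for h in (1.0, 100.0, 4000.0):     # river, coastal sea, deep ocean
    fr = 1.0 / np.sqrt(g * h)
    print(f"h = {h:6.0f} m   Fr = {fr:.4f}   dt = {dt_explicit(h, 1.0):6.2f} s")
```

In the deep-ocean case the explicit time step is some fifty times smaller than in the river case for the same material velocity, which is what motivates partially implicit and AP schemes.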

Coupling problems also arise within the fluid when it contains pollutants, density variations or biological species. For most situations, the interactions are small enough to use a splitting strategy and the classical numerical scheme for each sub-model, whether it be hydrodynamic or non-hydrodynamic.

Sediment transport raises interesting issues from a numerical point of view. It is an example of coupling between the flow and another phenomenon, namely the deformation of the bottom of the basin, which can occur either through bed load, where the sediment has its own velocity, or through suspended load, in which the particles are mostly driven by the flow. This phenomenon involves different time scales and nonlinear retroactions; hence the need for accurate mechanical models and very robust numerical methods. In collaboration with industrial partners (EDF-LNHE), the team already works on the improvement of numerical methods for existing (mostly empirical) models, but our aim is also to propose new, rather simple models that contain the important features and satisfy some basic mechanical requirements. The extension of our 3D models to the transport of weighted particles can also be of great interest here.
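For orientation, a classical (largely empirical) starting point for bed load transport, given here as background rather than as the team's model, is the Exner equation coupled to the shallow water system, with e.g. the Grass formula for the solid flux:

```latex
\frac{\partial z_b}{\partial t} + \xi \,\frac{\partial q_s}{\partial x} = 0,
\qquad q_s = A\, u\, |u|^{m-1}, \quad 1 \leq m \leq 4,
```

where $z_b$ is the bed elevation, $\xi = 1/(1-\psi)$ accounts for the bed porosity $\psi$, and the constant $A$ encodes grain properties. The nonlinear feedback of $z_b$ on the flow through the topography term is what makes the coupled system delicate.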

Numerical simulations are a very useful tool for the design of new processes, for instance in renewable energy or water decontamination. The optimisation of the process according to a well-defined objective, such as the production of energy or the evaluation of a pollutant concentration, is the logical upcoming challenge in order to propose competitive solutions in an industrial context. First of all, the set of parameters that have a significant impact on the result, and on which we can act in practice, is identified. Then the optimal parameters can be obtained by wrapping the numerical codes produced by the team, which estimate the performance for a given set of parameters, in an additional loop such as a gradient descent or a Monte Carlo method. In practice, the optimisation is used to determine the best profile for turbine blades or the best locations for water turbine implantation, in particular for a farm.
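Such an outer loop can be sketched as follows, with a cheap quadratic placeholder standing in for the expensive simulation code (the objective and all parameter names are hypothetical) and a finite-difference gradient descent:

```python
import numpy as np

def simulator(params):
    """Placeholder for an expensive numerical code; returns the objective to
    minimise, e.g. minus the produced energy (names are hypothetical)."""
    blade_pitch, hub_depth = params
    return (blade_pitch - 0.7) ** 2 + 0.5 * (hub_depth - 3.0) ** 2

def fd_gradient(f, p, eps=1e-6):
    """Finite-difference gradient: each component costs one extra code run."""
    f0, grad = f(p), np.zeros_like(p)
    for i in range(p.size):
        q = p.copy()
        q[i] += eps
        grad[i] = (f(q) - f0) / eps
    return grad

p = np.array([0.0, 0.0])
for _ in range(200):            # outer optimisation loop around the simulator
    p = p - 0.1 * fd_gradient(simulator, p)
```

Each gradient evaluation costs one simulation per parameter, which is why identifying the few influential parameters beforehand, or replacing the code by a surrogate, matters so much in practice.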

Sustainable development and environment preservation have a growing importance and scientists have to address difficult issues such as: management of water resources, renewable energy production, bio/geo-chemistry of oceans, resilience of society w.r.t. hazardous flows, urban pollutions, ...

As mentioned above, the main issue is to propose models of reduced complexity, suitable for scientific computing and endowed with stability properties (continuous and/or discrete). In addition, models and their numerical approximations have to be confronted with experimental data, as analytical solutions are hardly accessible for these problems/models. A. Mangeney (IPGP) and N. Goutal (EDF) may provide useful data.

Reduced models like the shallow water equations are particularly well adapted to the modelling of geophysical flows, since these are characterized by large time and/or space scales. For long-time simulations, the preservation of equilibria is essential, as global solutions are perturbations around them. The analysis and the numerical preservation of non-trivial equilibria, more precisely when the velocity does not vanish, are still a challenge. In the fields of oceanography and meteorology, the numerical preservation of the so-called geostrophic state, which is the balance between the gravity field and the Coriolis force, can significantly improve the forecasts. In addition, data assimilation is required to improve the simulations and correct the dissipative effect of the numerical scheme.
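The geostrophic state mentioned above is the standard balance (recalled here for the reader, in its usual textbook form):

```latex
f\,\mathbf{k}\times\mathbf{u} = -\,g\,\nabla\eta
\qquad\text{i.e.}\qquad
-f\,v = -\,g\,\frac{\partial \eta}{\partial x},
\qquad
f\,u = -\,g\,\frac{\partial \eta}{\partial y},
```

where $f$ is the Coriolis parameter, $\mathbf{u}=(u,v)$ the horizontal velocity and $\eta$ the free surface elevation. It is an equilibrium with non-vanishing velocity, hence precisely the kind of state that standard well-balanced techniques, designed for the lake at rest, do not preserve.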

Sediment transport modelling is of major interest in terms of applications, in particular to estimate the sustainability of facilities subject to silting or scouring, such as canals and bridges. Dredging or filling-up operations are expensive and generally not efficient in the long term. The objective is to determine an almost stable configuration for the facilities. In addition, it is also important to determine the impact of major events like dam emptying, which is aimed at evacuating the sediments accumulated in the dam reservoir and requires a large discharge. However, the downstream impact should be measured in terms of turbidity, river morphology and flooding.

A flood is a violent, sudden and destructive flow. Between 1996 and 2005, nearly 80% of natural disasters in the world had a meteorological or hydrological origin. The main interest of their study is to predict the areas where they are most likely to occur and to prevent damage by means of suitable amenities. In France, floods are the most recurring natural disasters and produce the worst damage. For example, a flood can be a cause or a consequence of a dam break. The large surface they cover and the long period they can last require the use of reduced models like the shallow water equations. In urban areas, the flow can be largely impacted by debris, in particular cars, which requires fluid/structure interactions to be well understood. Moreover, underground flows, in particular in sewers, can accelerate and amplify the flow. To take them into account, the model and the numerical resolution should be able to treat the transition between free surface and underground flows.

Tsunamis are another widely studied hydrological disaster. Even if the propagation of the wave is globally well described by the shallow water model in the open ocean, this is no longer the case close to the epicenter and in the coastal zone, where the bathymetry induces vertical accelerations and produces substantial dispersive effects. The non-hydrostatic terms then have to be considered, and an efficient numerical resolution is required.

While viscous effects can often be neglected in water flows, they have to be taken into account in
situations such as avalanches, debris flows, pyroclastic flows, erosion processes, ..., i.e. when the fluid rheology
becomes more complex. Gravity driven granular flows consist of solid particles commonly mixed with
an interstitial lighter fluid (liquid or gas) that may interact with the grains and decrease the intensity of
their contacts, thus reducing energy dissipation and favoring propagation. Examples include subaerial or
subaqueous rock avalanches (e.g. landslides).

Nowadays, simulations of the hydrodynamic regime of a river, a lake or an estuary, are not restricted to the determination of the water depth and the fluid velocity. They have to predict the distribution and evolution of external quantities such as pollutants, biological species or sediment concentration.

The potential of micro-algae as a source of biofuel and as a technological solution for CO2 fixation is the subject of intense academic and industrial research. Large-scale production of micro-algae has potential for biofuel applications owing to the high productivity that can be attained in high-rate raceway ponds. One of the key challenges in the production of micro-algae is to maximize algae growth with respect to the exogenous energy that must be used (paddlewheel, pumps, ...). There is a large number of parameters that need to be optimized (characteristics of the biological species, raceway shape, stirring provided by the paddlewheel). Consequently our strategy is to develop efficient models and numerical tools to reproduce the flow induced by the paddlewheel and the evolution of the biological species within this flow. Here, mathematical models can greatly help us reduce experimental costs. Owing to the high heterogeneity of raceways due to gradients of temperature, light intensity and nutrient availability through water height, we cannot use depth-averaged models. We adopt instead more accurate multilayer models that have recently been proposed. However, it is clear that many complex physical phenomena have to be added to our model, such as the effect of sunlight on water temperature and density, evaporation and external forcing.

Many problems previously mentioned also arise in larger scale systems like lakes. Hydrodynamics of lakes is mainly governed by geophysical forcing terms: wind, temperature variations, ...

One of the booming lines of business is the field of renewable and decarbonised energies. In particular in the marine realm, several processes have been proposed to produce electricity through the recovery of wave, tidal and current energy. We may mention water turbines, buoys turning variations of the water height into electricity, or turbines driven by currents. Although these processes produce a less substantial amount of energy than thermal or nuclear power plants, they have smaller dimensions and can be set up more easily.

The fluid energy has kinetic and potential parts. The buoys use the potential energy whereas the water-turbines are activated by currents. To become economically relevant, these systems need to be optimized in order to improve their productivity. While for the construction of a harbour, the goal is to minimize swell, in our framework we intend to maximize the wave energy.

This is a complex and original issue which requires a fine model of energy exchanges and efficient numerical tools. In a second step, the optimisation of parameters that can be changed in real-life, such as bottom bathymetry and buoy shape, must be studied. Eventually, physical experiments will be necessary for the validation.

The urban environment is essentially studied for air and noise pollution. Air and noise pollution levels vary a lot from one street to the next. The simulations are therefore carried out at street resolution and take into account the city geometry. The associated numerical models are subject to large uncertainties. Their input parameters, e.g. pollution emissions from road traffic, are also uncertain. Quantifying the simulation uncertainties is challenging because of the high computational costs of the numerical models. An appealing approach in this context is the use of metamodels, from which ensembles of simulations can be generated for uncertainty quantification.
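A schematic version of the metamodel idea, with a made-up one-dimensional model rather than the actual street-scale emulator: a handful of expensive runs train a cheap surrogate, which is then sampled thousands of times for uncertainty quantification.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):
    """Stand-in for an expensive street-resolution simulation (hypothetical)."""
    return np.sin(2.0 * x) + 0.1 * x ** 2

# a handful of expensive runs...
x_train = np.linspace(-2.0, 2.0, 15)
y_train = expensive_model(x_train)

# ...used to fit a cheap polynomial metamodel
metamodel = np.poly1d(np.polyfit(x_train, y_train, deg=9))

# uncertainty quantification: thousands of cheap surrogate evaluations,
# clipped to the training range to avoid extrapolating the polynomial
inputs = np.clip(rng.normal(0.0, 0.8, size=10_000), -2.0, 2.0)
ensemble = metamodel(inputs)
```

Real metamodels combine dimension reduction with statistical emulation rather than a polynomial fit, but the economics are the same: the training runs dominate the cost, the ensemble is nearly free.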

The simulation uncertainties can be reduced by the assimilation of fixed and mobile sensors. High-quality fixed monitoring sensors are deployed in cities, and an increasing number of mobile sensors are added to the observational networks. Even smartphones can be used as noise sensors and dramatically increase the spatial coverage of the observations. The processing and assimilation of the observations raises many questions regarding the quality of the measurements and the design of the network of sensors.

There is a growing interest for environmental problems at city scale, where a large part of the population is
concentrated and where major pollutions can occur. Numerical simulation is well established to study the
urban environment, e.g. for road traffic modelling. As part of the smart-city movement, an increasing number
of sensors collect measurements, at traditional fixed observation stations but also on mobile devices, like
smartphones. These measurements must be properly taken into account, given both their number and their potentially low quality.

Practical applications include air pollution and noise pollution, which both relate directly to road traffic. Data assimilation and uncertainty propagation are key topics in these applications.

Only a few trips were made last year (including one flight), due to the health crisis but also because of the team's will to avoid this type of transportation.

Part of the ANGE activity is devoted to research on renewable energy. Along these lines, J. Salomon (with J. Ledoux and S. Riffo, former members of the team) submitted a paper about turbine design [23].

The project "Inria Challenge OceanAI" has been accepted. This project is based on an international consortium, including INRIA Chile and the teams TAO and BIOCORE.

Polyphemus is a modeling system for air quality. As such, it is designed to yield up-to-date simulations in a reliable framework: data assimilation, ensemble forecast and daily forecasts. Its completeness makes it suitable for use in many applications: photochemistry, aerosols, radionuclides, etc. It is able to handle simulations from local to continental scales, with several physical models. It is divided into three main parts:

libraries that gather data processing tools (SeldonData), physical parameterizations (AtmoData) and post-processing abilities (AtmoPy),

programs for physical pre-processing and chemistry-transport models (Polair3D, Castor, two Gaussian models, a Lagrangian model),

model drivers and observation modules for model coupling, ensemble forecasting and data assimilation.

The time-parallel solution of optimality systems arising in PDE-constrained optimization could be achieved by simply applying any time-parallel algorithm, such as Parareal, to solve the forward and backward evolution problems arising in the optimization loop. We propose in [13] a different strategy, by directly devising a new time-parallel algorithm, which we call ParaOpt, for the coupled forward and backward nonlinear partial differential equations. ParaOpt is inspired by the Parareal algorithm for evolution equations, and is thus automatically a two-level method. We provide a detailed convergence analysis for the case of linear parabolic PDE constraints. We illustrate the performance of ParaOpt with numerical experiments for both linear and nonlinear optimality systems.
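For context, the Parareal iteration that inspires ParaOpt can be sketched on a scalar linear ODE; this is a toy version unrelated to the actual ParaOpt implementation. A cheap coarse propagator G is corrected by fine propagations F that could run in parallel across time windows.

```python
import numpy as np

lam, T, N = -1.0, 2.0, 10       # decay rate, time horizon, number of windows
dT = T / N

def prop(u, dt, n_sub):
    """Explicit Euler propagator over time dt using n_sub substeps."""
    h = dt / n_sub
    for _ in range(n_sub):
        u = u + h * lam * u
    return u

G = lambda u: prop(u, dT, 1)    # cheap coarse propagator (one Euler step)
F = lambda u: prop(u, dT, 100)  # expensive fine propagator

U = np.empty(N + 1)
U[0] = 1.0
for n in range(N):              # iteration 0: purely coarse sweep
    U[n + 1] = G(U[n])

for k in range(5):              # Parareal correction iterations
    Fu = np.array([F(U[n]) for n in range(N)])   # parallelisable fine runs
    Gu = np.array([G(U[n]) for n in range(N)])   # frozen coarse values
    for n in range(N):          # cheap sequential update
        U[n + 1] = G(U[n]) + Fu[n] - Gu[n]
```

After a few iterations the trajectory matches the sequential fine solver; the expensive F calls are independent across windows at each iteration, which is where the parallel speed-up comes from.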

In [18], we study the long-time behaviour of a dynamical system strongly linked to the anti-diffusive scheme of Després and Lagoutière for the 1-dimensional transport equation. This scheme is overcompressive when the Courant–Friedrichs–Lewy number is 1/2: when the initial data is nondecreasing, the approximate solution becomes a Heaviside function. In a special case, we also understand how plateaus are formed in the solution, and their stability, a distinctive feature of the Després–Lagoutière scheme.

In [12], we study the approximation of problems with linear convection and degenerate nonlinear diffusion, which arise in the framework of the transport of energy in porous media with thermodynamic transitions.

The gradient discretisation method (GDM) is a generic framework for the spatial discretisation of partial differential equations. The goal of [17] is to establish an error estimate for a class of degenerate parabolic problems, obtained under very mild regularity assumptions on the exact solution. Our study covers well-known models like the porous medium equation and the fast diffusion equations, as well as the strongly degenerate Stefan problem. Several schemes are then compared in a last section devoted to numerical results.

In [10] (see also [11]), we present several time-dependent analytical solutions for the incompressible Euler system with free surface. These analytical solutions give quantitative descriptions of some physical phenomena, such as water motion or waves over variable topography, and can be used as reference solutions when validating numerical simulation codes. They concern fluid flows governed by the Euler equations, with or without the hydrostatic hypothesis, including wet/dry interfaces, variable density and a wide variety of boundary conditions.

In [21], we investigate a new homogeneous relaxation model describing the behaviour of a two-phase fluid flow in a low Mach number regime, which can be obtained as a low Mach number approximation of the well-known HRM. For this specific model, we derive an equation of state to describe the thermodynamics of the two-phase fluid. We prove some theoretical properties satisfied by the solutions of the model, and provide a well-balanced scheme. To go further, we investigate the instantaneous relaxation regime, and prove the formal convergence of this model towards the low Mach number approximation of the well-known HEM. An asymptotic-preserving scheme is introduced to allow numerical simulations of the coupling between spatial regions with different relaxation characteristic times.

In [14], we are interested in the numerical modelling of bodies floating freely on the water, such as icebergs or wave energy converters. The fluid-solid interaction is formulated using a congested shallow water model for the fluid and Newton's second law of motion for the solid. We put a particular focus on the energy transfer between the solid and the water, since it is of major interest for energy production. A numerical approximation based on the coupling of a finite volume scheme for the fluid and a Newmark scheme for the solid is presented. An entropy correction based on an adapted choice of discretisation for the coupling terms is made in order to ensure a dissipation law at the discrete level. Simulations are presented to verify the method and to show the feasibility of extending it to more complex cases.

In [6], we propose a numerical method for a family of two-dimensional dispersive shallow water systems with topography. The considered models consist in shallow water approximations, without the hydrostatic assumption, of the incompressible Euler system with free surface. Hence, the studied models appear as extensions of the classical shallow water system enriched with dispersive terms. The model formulation motivates the use of a prediction-correction scheme for its numerical approximation. The prediction part leads to solving a classical shallow water system with topography, while the correction part leads to solving an elliptic-type problem. The numerical approximation of the considered dispersive models in the two-dimensional case over unstructured meshes is described; it requires combining finite volume and finite element techniques. A special emphasis is given to the formulation and the numerical resolution of the correction step (variational formulation, inf-sup condition, boundary conditions, ...). The numerical procedure is confronted with analytical and experimental test cases. Finally, an application to a real tsunami case is given.

In [19], we are interested in free surface flows where density variations, coming e.g. from temperature or salinity differences, play a significant role in the hydrodynamic regime. In water, acoustic waves travel much faster than gravity and internal waves, hence the study of models arising in compressible fluid mechanics often requires a decoupling between these waves. Starting from the compressible Navier-Stokes system, we derive the so-called Navier-Stokes-Fourier system in an incompressible context (the density does not depend on the fluid pressure) using the low-Mach scaling. Notice that a modified low-Mach scaling is necessary to obtain a model with thermo-mechanical compatibility. The case where the density depends only on the temperature is studied first. Then the variations of the fluid density with respect to the temperature and the salinity are considered. We give a layer-averaged formulation of the obtained models in a hydrostatic context. The layer-averaged formulation is very useful for the numerical analysis and the numerical simulations of the models, since it allows the derivation of numerical schemes endowed with strong stability properties, which are presented in a companion paper. Several stability properties of the layer-averaged Navier-Stokes-Fourier system are proved.

In [20], we propose a numerical scheme for the layer-averaged Euler system with variable density and for the Navier-Stokes-Fourier systems presented in part I (Boittin et al., 2018). These systems model hydrostatic free surface flows with density variations. We show that the presented finite volume scheme is well balanced with regard to the steady state of the lake at rest and preserves the positivity of the water height. A maximum principle on the density is also proved, as well as a discrete entropy inequality in the case of the Euler system with variable density. Some numerical validations are finally shown, with comparisons to 3D analytical solutions and experiments.

In [22], an air quality model at urban scale computes the air pollutant concentrations at street resolution based on various emissions, meteorology, imported pollution and the city geometry. Because of the computational cost of such a model, we previously designed a metamodel using dimension reduction and statistical emulation, and then corrected this metamodel with observational data. Novel work was dedicated to error modelling for a more balanced integration of the observations. This work was also applied to air quality simulation over Paris using several months of data.

In [15], a new approach is proposed to generate urban noise maps. In an urban area, it is increasingly common to have access to both a simulated noise map and a sensor network. A data assimilation algorithm is developed to combine data from a noise map simulator and a network of acoustic sensors. One-hour noise maps are generated with a meta-model fed with hourly traffic and weather data. The data assimilation algorithm merges the simulated map with the sound level measurements into an improved noise map. The performance of this method relies on the accuracy of the meta-model, the selection of the input parameters and the model of the error covariance, which describes how the errors of the simulated sound levels are correlated in space. The performance of the data assimilation is evaluated with a leave-one-out cross-validation method.

In [8], simulation is used to predict the spread of a wildland fire across land in real-time. Nevertheless, the large uncertainties in these simulations must be quantified in order to provide better information to fire managers. Ensemble forecasts are usually applied for this purpose, with an input parameter distribution that is defined based on expert knowledge. A novel approach is proposed in order to generate calibrated ensembles whose input distribution is defined by a posterior PDF with a pseudo-likelihood function that involves the Wasserstein distance between simulated and observed burned surfaces of several fire cases. Due to the high dimension and the computational requirements of the pseudo-likelihood function, a Gaussian process emulator is built to obtain a sample of the calibrated input distribution with a MCMC algorithm in about one day of computation on 8 computing cores. The calibrated ensembles lead to better overall accuracy than the uncalibrated ensembles. The a posteriori probability distribution of the inputs favors lower values of rate of spread and lower uncertainty in wind direction. This strongly limits overprediction, while keeping the ability of the ensemble to cover the observed burned area.

In 7, numerical simulations of wildfire spread are used to support decisions on firefighting actions, but their predictive performance is challenged by the uncertainty of model inputs stemming from weather forecasts, fuel parameterisation and other fire characteristics. In this study, we assign probability distributions to the inputs and propagate the uncertainty by running hundreds of Monte Carlo simulations. The ensemble of simulations is summarised via a burn probability map, whose evaluation against the corresponding observed burned surface is not straightforward. We define several properties and introduce probabilistic scores that are common in meteorological applications. Based on these elements, we evaluate the predictive performance of our ensembles for seven fires that occurred in Corsica from mid-2017 to early 2018. We obtain fair performance in some of the cases, but the accuracy and reliability of the forecasts can be improved. The ensemble generation can be accomplished in a reasonable amount of time and could be used in an operational context provided that sufficient computational resources are available. The proposed probabilistic scores are also appropriate for a calibration process to improve the ensembles.
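A simple example of such a probabilistic score is the Brier score, which is standard in meteorological verification. The sketch below shows how a burn probability map can be built from an ensemble and scored against an observed burned surface; the tiny domain and member data are made up for illustration and do not come from 7.

```python
import numpy as np

def burn_probability(ensemble):
    """Fraction of ensemble members that burned each cell."""
    return np.mean(np.asarray(ensemble, float), axis=0)

def brier_score(p_burn, observed):
    """Brier score of a burn probability map against the observed
    burned surface (1 = burned, 0 = unburned); lower is better."""
    p = np.asarray(p_burn, float)
    o = np.asarray(observed, float)
    return float(np.mean((p - o) ** 2))

# Toy 2x2 domain, 4 ensemble members (hypothetical data)
ensemble = [[[1, 0], [0, 0]],
            [[1, 1], [0, 0]],
            [[1, 0], [1, 0]],
            [[1, 1], [0, 0]]]
observed = [[1, 1], [0, 0]]
p = burn_probability(ensemble)   # cell-wise burn probabilities
score = brier_score(p, observed)
```

A perfectly sharp and accurate ensemble would reach a score of 0, so the score can be used both to rank forecasts and as an objective in the calibration process mentioned above.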

Members: A. El Baz, J. Sainte-Marie

Several improvements of FreshKiss3D software have been made:

Yohan Penel started in September to supervise the PhD thesis of Giuseppe Parasiliti on the physical, mathematical and numerical modelling of a gas flow for the transportation of liquefied natural gas. This work results from a close collaboration with the company GTT, which has already collaborated with ANGE in recent years through the Carnot institute SMILE.

The ANR project Cense supports the Ph.D. thesis of A. Lesieur on the development of a new methodology for the production of more realistic noise maps. The industrial collaborations include:

We refer to

https://

https://

for more details.

The goal of this project is to design and develop large-scale parallel algorithms for optimization problems involving wave phenomena. Such problems arise in many practical applications: two examples are seismic inversion, where one tries to deduce the geology of rock formations that best fits the available seismic data, and wave localization, which can be used to improve the efficiency of wireless charging devices. To make the optimization tractable, parallel computers must be used to cope with the large amounts of data and the intensive computation inherent in these problems. In the last decade, parallel-in-time methods have made enormous progress: for parabolic problems, near-optimal scaling with respect to the number of processors has been achieved (scalability). For wave propagation, no such success has been achieved yet.
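The best-known parallel-in-time method is the parareal algorithm, which alternates a cheap serial coarse propagator with fine propagations that can run in parallel, one per time slice. The following minimal sketch applies it to a scalar ODE with explicit Euler propagators; it is a generic textbook illustration under these assumptions, not code from the project.

```python
def parareal(f, y0, t0, T, n_slices, fine_steps, n_iter):
    """Parareal iteration for y' = f(y): a serial coarse sweep is
    corrected by fine propagations (parallelisable across slices)."""
    dt = (T - t0) / n_slices

    def euler(y, h, steps):
        for _ in range(steps):
            y = y + h * f(y)
        return y

    coarse = lambda y: euler(y, dt, 1)                     # one coarse step
    fine = lambda y: euler(y, dt / fine_steps, fine_steps) # fine propagator

    # Initial serial coarse sweep
    U = [y0]
    for _ in range(n_slices):
        U.append(coarse(U[-1]))

    for _ in range(n_iter):
        F = [fine(U[n]) for n in range(n_slices)]  # independent -> parallel
        U_new = [y0]
        for n in range(n_slices):
            # Parareal correction: new coarse + (fine - old coarse)
            U_new.append(coarse(U_new[-1]) + F[n] - coarse(U[n]))
        U = U_new
    return U

# Test problem y' = -y on [0, 1], exact solution exp(-t)
U = parareal(lambda y: -y, 1.0, 0.0, 1.0, n_slices=10, fine_steps=100, n_iter=5)
```

For diffusive problems like this one the iteration converges in a few sweeps, which is the "near-optimal scaling" alluded to above; for wave propagation the same iteration stagnates, which is precisely the difficulty the project addresses.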

Starting with the Inria Challenge Project grant "OcéanIA", ANGE (J. Salomon and J. Sainte-Marie) is now working on AI, data and models for understanding oceans and climate change through a collaboration with Nayat Sánchez Pi and Luis Martí from Inria Chile. This project also involves the Inria teams TAO and BIOCORE. A proceedings paper about this project has been published 16.

J. Salomon is still collaborating closely with Felix Kwok (now Professor at Université Laval, Quebec) and Martin Gander (University of Geneva) on time parallelization for control and assimilation problems (see 13 and its description in Section 8).

J. Salomon also has a long-term collaboration with G. Ciaramella (presently moving to Politecnico

Y. Penel is still collaborating with the University of Seville, though he had to interrupt his spring visit in March.

Y. Penel hosted a Serbian doctoral student, Davor Kumozec, from 14/09 to 12/10.

Due to the COVID-19 crisis, J. Salomon was not able to invite F. Kwok and G. Ciaramella.

Mean field game (MFG) theory is a young and active field of mathematics that analyses the dynamics of a very large number of agents. Introduced about ten years ago, MFG models have been used in various fields: economics, finance, social sciences, engineering... MFG theory lies at the intersection of mean field theory, mathematical game theory, optimal control, stochastic analysis, the calculus of variations, partial differential equations and scientific computing. Drawing on an internationally recognized French team on the subject, the project seeks to make major contributions in four main directions: the mean field aspect (i.e., how to derive macroscopic models from microscopic ones); the analysis of new MFG systems; their numerical analysis; and the development of new applications. In this period of rapid expansion of MFG models, the project seeks to foster French leadership in the field and to attract new researchers from related fields.

The CHARMS ANR project focuses on mathematical methods and software tools dedicated to the simulation of physical models arising in geothermal engineering. The final objective is a highly parallel code, validated on realistic cases.

EGRIN stands for Gravity-driven flows and natural hazards. J. Sainte-Marie chairs the scientific committee of this CNRS research group and A. Mangeney is a member of the committee. Other members of the team involved in the project act as local correspondents. The scientific goals of this project are the modelling, analysis and simulation of complex fluids by means of reduced-complexity models in the framework of geophysical flows.

The objectives of the GDR EMR are the following.

- To promote the dissemination of existing knowledge and expertise within and across disciplines: the GDR EMR is a forum for the exchange of expertise and know-how within and across disciplines.

- To promote collaborations between partners of the GDR and with industry: the GDR is an entry and orientation point; it provides a forum for the exchange of information concerning industrial needs and the skills of the academic community, and enables actors to be brought together.

- To give visibility to the national scientific community, in particular through a mapping of the actors and themes available on the web platform

The goal of the FireCaster project is to prototype a fire decision support system at the national scale to estimate upcoming fire risk (H+24 to H+48) and in case of crisis, to predict fire front position and local pollution (H+1 to H+12).

The CENSE project aims at proposing a new methodology for the production of more realistic noise maps, based on an assimilation of simulated and measured data through a dense network of low-cost sensors.

E. Godlewski was on the organizing committee of "l'Année des mathématiques 2020" (the 2020 Year of Mathematics).

J. Salomon and Y. Penel organized, with AMIES, a Math-Industry Meeting (RMI) "RMI climat et environnement" https://

The reviewing activities from ANGE team are summarized in Table 1.

Invited talks (mainly online this year) of the members of the team are summarized in Table 2.

The administrative activities of ANGE are summarized in Table 3.

Teaching activities of ANGE are summarized in the following.

Supervision activities of ANGE are summarized in Table 4.