Coastal areas are increasingly threatened by sea level rise induced by global warming. At the same time, 60% of the world's population lives in a 100 km-wide coastal strip (80% within 30 km of the shore in French Brittany).
This is why coastlines raise many issues of various kinds: economic, ecological, social, political, etc. Coastal areas are natural interfaces between various media (e.g. wind/sea/sand/land). The physical processes acting on these media have very different time scales, hence the need to build complex systems coupling nonlinear partial differential equations and random processes to describe them.
To address these crucial issues, LEMON is an interdisciplinary team working on the design, analysis and application of deterministic and stochastic models for inland and marine littoral processes, with an emphasis on both standalone models and hybrid systems.

The location of Montpellier offers many opportunities:

The general scope of the LEMON project-team is to develop mathematical and computational methods for the modelling of hydraulic and hydrodynamic processes. The mathematical tools used are deterministic (PDEs, ODEs) and/or probabilistic (extreme value theory). Applications range from regional oceanography to coastal management, including risk assessment for natural hazards on the coastline (submersion and urban floods, tsunamis, pollution).

LEMON is a common research team between HSM (UM, CNRS, IRD), IMAG (UM, CNRS) and Inria, whose faculty members have never been associated with Inria teams in the past. All fellows share a strong background in mathematical modelling, together with a taste for applications to the littoral environment. As reflected in the team contributions, the research conducted by LEMON is interdisciplinary 3, thanks to the team members' expertise (deterministic and stochastic modelling, computational and experimental aspects) and to regular collaborations with scientists from other domains. We believe this is both an originality and a strength of LEMON.

The team has three main scientific objectives. The first two are the development of new physical models and innovative mathematical methods for urban floods on the one hand and natural flows on the other hand. The third objective is to develop theoretical tools that can be used in the models serving the first two objectives. As mentioned above, the targeted applications cover PDE models and associated extreme events using a hierarchy of models of increasing complexity.

In each section, people involved in the project are listed in alphabetical order.

In the context of climate change, the increase in urbanization, particularly in floodplains or near the seashore, could lead to an increase in vulnerability to flooding. Numerical models appear to be indispensable tools for predicting the impact of floods with different return periods and for evaluating mitigation and land-use planning policies 69.

Urban areas are characterized by significant small-scale geometric variability and complex flows involving various phenomena 48, 49. LEMON's activities focus on the development of numerical models and methodologies specifically designed for the urban context and for operational purposes.

Simulating urban floods requires at least two-dimensional shallow water approaches and considerable computational power. Capturing the relevant hydraulic detail often requires computational cell sizes smaller than one meter. For instance, meshing a complete urban area with a sufficient accuracy would require

A new generation of models overcoming this issue has emerged over the last 20 years: porosity-based shallow water models. They are obtained by averaging the two-dimensional shallow water equations over large areas containing both water and a solid phase 44. The size of a computational cell can be increased by a factor 10 to 50 compared to a 2D shallow water model, with CPU times reduced by 2 to 3 orders of magnitude 66. While the research on porosity-based shallow water models has accelerated over the past decade 61, 80, 84, 57, 55, 66, 75, 76, 91, 92, a number of research issues remain pending.
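For reference, a one-dimensional single-porosity form of these equations can be sketched as follows (a simplified illustration with assumed notation: $\phi$ the porosity, $h$ the water depth, $q = hu$ the unit discharge, $S_0$ and $S_f$ the bed and friction slopes):

```latex
\begin{aligned}
\partial_t(\phi h) + \partial_x(\phi q) &= 0, \\
\partial_t(\phi q) + \partial_x\!\left(\phi\,\frac{q^2}{h} + \phi\,\frac{g h^2}{2}\right)
 &= \frac{g h^2}{2}\,\partial_x \phi + \phi\, g h\,(S_0 - S_f),
\end{aligned}
```

where the $\partial_x \phi$ term accounts for the reaction of the unresolved obstacles onto the flow; taking $\phi = 1$ recovers the classical shallow water equations.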

The research objectives are (i) to improve the upscaling of the flux and source term models to be embedded in porosity shallow water models and (ii) to validate these models against laboratory and in situ measurements. Improving the upscaled flux and source term models for urban applications requires that the description of anisotropy in porosity models be improved to account for the preferential flows induced by building and street alignment. The porosity description embedded in the most widespread porosity approach, the so-called Integral Porosity model 80, 59, has been shown to capture the connectivity properties of the urban medium only incompletely. Firstly, the governing equations are strongly mesh-dependent because of consistency issues 59. Secondly, the flux and source term models fail to reproduce the alignment with the main street axes in a number of situations 58. Another path for improvement concerns the upscaling of obstacle-induced drag terms in the presence of complex geometries, for which recent results suggest that the effects of microtopography on the flow cannot be upscaled using "classical" equation-of-state approaches, as done in most hydraulic models 56.

During this 4-year period, we will work to develop and validate improved flux and source term closures in the presence of strongly anisotropic urban geometries and strongly variable topography. Validation will involve not only the comparison of porosity model outputs with refined flow simulation results, but also validation against experimental data sets and real-scale events. No experimental data set allowing for a sound validation of flux closures in porosity models can be found in the literature. Laboratory experiments will therefore be developed specifically in view of the validation of porosity models. Such experiments will be set up and carried out in collaboration with the Université Catholique de Louvain (UCL), which has an excellent track record in experimental hydraulics and in the development of flow monitoring and data acquisition equipment. These activities will take place in the framework of the PoroCity Associate International Laboratory (see next paragraph).

The assessment of flood risk in urban areas requires knowledge of the flow variables at a resolution of typically 1 meter or less. In practice, such High Resolution (HR) knowledge is not available because the existing modelling tools are too slow and require too much manpower for operational applications. To give but one example, simulating an urban flood using standard shallow water models over a 1 km

The team intends to explore two paths.

Classical practices for urban flood modelling consider buildings as full or partial obstructions to the flow 82 but neglect the street-building flow exchanges 87, although these exchanges can (i) create flood retention within blocks and a potential reduction of the peak discharge 52, (ii) produce secondary connections between streets through blocks and (iii) explain most of the damage at the building scale 86, 34. In the absence of models representing this phenomenon, classical operational approaches to flood risk management and characterization are forced to assume that the water level inside a building is the same as in its immediate external neighborhood.

In the context of the PhD thesis of Cécile Choley, new modelling approaches accounting for street-building exchanges during flood events are being developed. First milestones have been reached with the determination of discharge laws through openings, based on detailed 3D simulations, and the implementation of such laws in the 2D modelling software SW2D-LEMON (see Section 6). A sensitivity analysis of the model response to building parameters should make it possible to define the level of precision required in building data for operational purposes.
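To fix ideas, a generic textbook orifice-type discharge law through a building opening can be sketched as follows (the coefficient value and functional form are illustrative hydraulics, not the law calibrated in the thesis or implemented in SW2D-LEMON):

```python
import math

def opening_discharge(h_street, h_inside, width, cd=0.6, g=9.81):
    """Signed discharge (m^3/s) through a vertical opening of given width (m),
    driven by the head difference between street and indoor water levels (m).
    Generic Q = Cd * A * sqrt(2 g |dh|) form with an assumed coefficient Cd."""
    dh = h_street - h_inside
    area = width * max(h_street, h_inside)  # wetted area on the upstream side
    return math.copysign(cd * area * math.sqrt(2.0 * g * abs(dh)), dh)
```

A sensitivity analysis of the kind mentioned above would then vary the discharge coefficient, the opening width and the sill level to rank the influence of building parameters.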

This first line of research paves the way for a coupling between deterministic and stochastic modelling approaches, in which undetermined building parameters (resistance of a door, room organization within the building, ...) are represented by probability laws. Similar approaches might be applied to car-jam effects during urban flood events.

The natural processes that take place in the coastal zone are numerous and complex, and often involve different space and time scales. They therefore require adapted models, both in terms of mathematical description (based on differential equations) and in terms of statistical processing.

Simulating detailed free surface flows in wetlands requires considerable computational power and two-dimensional shallow water models are frequently needed. Typical issues arise when modelling wetlands and coastal lagoons, where large areas are often connected by an overwhelming number of narrow channels, obstructed by vegetation and a strongly variable bathymetry. Describing such channels with the level of detail required in a 2D model is impracticable.

A new generation of models overcoming this issue has emerged over the last 20 years: porosity-based shallow water models obtained by averaging the two-dimensional shallow water equations over large areas containing both water and a solid phase. In the specific fields of natural flows, Depth-Dependent Porosity (DDP) models have been designed to account for microtopography variation 44, 75, 76, 56.

DDP models pave the way for various operational applications, from local rainfall-runoff on agricultural parcels to flood propagation in large catchments. However, a loss of hyperbolicity can appear in the model for some particular parametrizations 56.

The research objectives are:

Water bodies such as lakes or coastal lagoons (possibly connected to the sea) located in high human activity areas are subject to various kinds of stress such as industrial pollution, high water demand or bacterial blooms caused by freshwater over-enrichment. For obvious environmental reasons, these water resources have to be protected, hence the need to better understand and possibly control such fragile ecosystems to eventually develop decision-making tools. From a modelling point of view, they share a common feature in that they all involve interacting biological and hydrological processes. According to 50, models may be classified into two main types: “minimal dynamic models” and “complex dynamic models”. These two model types do not have the same objectives. While the former are more heuristic and rather depict the likelihood of the considered processes, the latter are usually derived from fundamental laws of biochemistry or fluid dynamics. Of course, the latter require much more computational resources than the former. In addition, controlling such complex systems (usually governed by PDEs) is far more difficult than controlling the simpler ODE-driven control systems.
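As an illustration of the "minimal dynamic model" category, a hypothetical two-compartment nutrient-phytoplankton ODE system can be sketched as below (all parameter values are invented for the example and not calibrated to any lagoon):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical minimal nutrient (N) / phytoplankton (P) model with Monod-type
# uptake and partial recycling of dead biomass -- purely illustrative values.
def minimal_model(t, y, uptake=1.0, half_sat=0.5, mortality=0.1, recycling=0.5):
    N, P = y
    growth = uptake * N / (half_sat + N) * P   # nutrient-limited growth
    death = mortality * P
    return [-growth + recycling * death,       # dN/dt
            growth - death]                    # dP/dt

sol = solve_ivp(minimal_model, (0.0, 50.0), [1.0, 0.1], rtol=1e-8, atol=1e-10)
N_end, P_end = sol.y[:, -1]
```

Such a model is heuristic in the sense of 50: it depicts the plausibility of the processes, whereas a "complex dynamic model" would couple a full 3D hydrodynamic solver with biochemical laws.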

LEMON has already contributed both to the reduction of PDE models for the simulation of water confinement in coastal lagoons 51, 32 and to the improvement of ODE models in order to account for space-heterogeneity of bioremediation processes in water resources 30.

In collaboration with colleagues from the ANR-ANSWER project and colleagues from INRAE, our ambition is to improve existing models of lagoon/marine ecosystems by integrating both accurate and numerically affordable coupled hydrobiological systems. A major challenge is to find an optimal trade-off between the level of detail in the description of the ecosystem and the level of complexity in terms of number of parameters (in particular regarding the governing equations for inter-species reactions). The model(s) should be able to reproduce the inter-annual variability of the observed dynamics of the ecosystem in response to meteorological forcing. This will require the adaptation of hydrodynamics equations to such time scales (reduced/upscaled models such as porosity shallow water models (see Section 3.1.1) will have to be considered), together with coupling with the ecological models. At short time scales (i.e. the weekly time scale), accurate (but possibly CPU-consuming) 3D hydrodynamic models (describing thermal stratification, mixing, current velocity, sediment resuspension, wind waves...) are needed. In the longer term, it is intended to develop reduced models accounting for spatial heterogeneity.

Over the 4-year period, the team will focus on two main application projects:

The expertise of LEMON in this scientific domain lies more in the introduction and analysis of new boundary conditions for ocean modelling systems, which can be tested on academic, home-designed test cases. This has been at the core of Antoine Rousseau's contributions over the past years. The actual implementation within operational ocean models has to be carried out through external collaborations, some of which have already started with LEMON (see below).

In physical oceanography, all operational models - regardless of the scale they apply to - are derived from the complete equations of geophysical fluid dynamics. Depending on the considered process properties (nonlinearity, scale) and the available computational power, the original equations are adapted with some simplifying hypotheses. The reader can refer to 79, 70 for a hierarchical presentation of such models.

In the shoaling zone, the hydrostatic approximation that is used in most large-scale (open sea) models cannot be used without a massive loss of accuracy. In particular, shallow water models are inappropriate to describe the physical processes that occur in this zone (see Figure 1). This is why Boussinesq-type models are preferred 67. They embed dispersive terms that allow for shoaling and other bathymetry effects. Since the pioneering work of Green and Naghdi 54, numerous theoretical and numerical studies have been delivered by the "mathematical oceanography" community, more specifically in France (see the works of Lannes, Marche, Sainte-Marie, Bresch, etc.). The corresponding numerical models (BOSZ, WaveBox) must thus be integrated in any reasonable nearshore modelling platform.
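To illustrate the dispersive terms in question, a classical weakly nonlinear, weakly dispersive Boussinesq system over a flat bottom of depth $h_0$ can be written as follows (a simplified sketch; Green-Naghdi systems generalize it to fully nonlinear regimes and variable bathymetry), with $\eta$ the free-surface elevation and $u$ the depth-averaged velocity:

```latex
\begin{aligned}
\partial_t \eta + \partial_x\big[(h_0 + \eta)\,u\big] &= 0, \\
\partial_t u + u\,\partial_x u + g\,\partial_x \eta &= \frac{h_0^2}{3}\,\partial_x^2 \partial_t u .
\end{aligned}
```

The right-hand side is the dispersive correction that shallow water models lack; setting it to zero recovers the nonlinear shallow water equations.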

However, these models cannot simply replace all previous models everywhere in the ocean: dispersive models are useless away from the shore and it is known that wave breaking cannot be simulated using Boussinesq-type equations. Hence the need to couple these models with others. Some work has been done in this direction with a multi-level nesting using software packages such as ROMS, but to the best of our knowledge, all the "boxes" rely on the same governing equations with different grid resolutions. A real coupling between different models is a more difficult task since different models may have different mathematical properties, as shown in the work by Eric Blayo and Antoine Rousseau on shallow water modelling 33.

Starting from the knowledge acquired in the collaboration with Eric Blayo on model coupling using domain decomposition techniques, our ambition is to propose theoretical and numerical tools in order to incorporate nearshore ocean models into large complex systems including several space and time scales. This is one of the scientific objectives of the Inria challenge SURF, led by Arthur Vidard (Inria Grenoble). Two complementary research directions are considered:

In the context of mathematical modelling and numerical simulation for marine energy, we want to build a coupled numerical model able to simulate wave propagation in domains covering both off-shore regions, where spectral models are used, and nearshore regions, which are better described by nonlinear dispersive (Boussinesq-type) models.

While spectral models work with a statistical, phase-averaged description of the waves, solving the evolution of their energy spectrum, Boussinesq-type models are phase-resolving and solve nonlinear dispersive shallow water equations for physical variables (surface elevation and velocity) in the time domain. Furthermore, the time and space scales are very different: they are much larger in the case of spectral models, which justifies their use for modelling off-shore propagation over large time frames. Conversely, important small-scale phenomena in nearshore areas are better captured by Boussinesq models, in which the time step is limited by the CFL condition.

From a mathematical and modelling point of view, this task mainly consists in working on the boundary conditions of each model, managing the simultaneous use of spectral and time series data, while studying transparent boundary conditions for the models and developing domain decomposition approaches to improve the exchange of information.

In addition to the application-driven sections, the team also works on the following theoretical questions. They are clearly connected to the above-mentioned scientific issues but do not correspond to a specific application or process.

Max-stable random fields 83, 81, 64, 41, 73 are the natural limit models for spatial maximum data and have spawned a very rich literature. An overview of typical approaches to modelling maxima is given in 43. The physical interpretation of data simulated from such models remains debatable. An alternative to the max-stable framework is provided by models for threshold exceedances. So-called GPD processes, which generalize the univariate peaks-over-threshold formalism based on the Generalized Pareto Distribution (GPD), have been proposed 47, 89. Strong advantages of these thresholding techniques are their capability to exploit more information from the data and to explicitly model the original event data. However, the stability of asymptotic dependence in these limiting processes for maxima and threshold exceedances tends to be overly restrictive when the strength of asymptotic dependence decreases at high levels and may ultimately vanish in the case of asymptotic independence. Such behaviour appears to be characteristic of many real-world data sets, such as precipitation fields 42, 88. This has motivated the development of more flexible dependence models, such as max-mixtures of max-stable and asymptotically independent processes 94, 29 for maxima data, and Gaussian scale mixture processes 74, 63 for threshold exceedances. These models can accommodate asymptotic dependence, asymptotic independence and Gaussian dependence with a smooth transition.
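As a minimal, self-contained illustration of the univariate peaks-over-threshold building block underlying these spatial extensions (synthetic data and generic scipy usage, not the team's spatial models):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
data = rng.gumbel(loc=10.0, scale=2.0, size=5000)  # synthetic rainfall-like series

u = np.quantile(data, 0.95)      # high threshold
excess = data[data > u] - u      # excesses over the threshold

# Fit a Generalized Pareto Distribution to the excesses (location fixed at 0)
shape, _, scale = genpareto.fit(excess, floc=0.0)

# N-observation return level: value exceeded on average once every N observations
p_u = (data > u).mean()          # probability of exceeding the threshold
N = 10_000
return_level = u + genpareto.ppf(1.0 - 1.0 / (N * p_u), shape, loc=0.0, scale=scale)
```

The spatial GPD processes cited above generalize exactly this construction to whole random fields of exceedances.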

Extreme events also generally present temporal dependence 90. Developing flexible space-time models for extremes is crucial for characterizing the temporal persistence of extreme events spanning several time steps; such models are important for short-term prediction in applications such as the forecasting of wind power, and for extreme event scenario generators providing inputs to impact models, for instance in hydrology and agriculture. Currently, only a few models are available in the statistical literature (see for instance 39, 40, 62) and they remain difficult to interpret.

The objective is to extend state-of-the-art methodology with respect to three important aspects:

Naturally, the members of the Cerise and Fraise projects 5 are the main collaborators for developing and studying new stochastic models for extremes.

Optimally reproducing realistic spatio-temporal rainfall fields is of salient importance for the forcing of hydrodynamic models. This challenging task requires combining intense, usual and dry weather events. Far from being straightforward, this combination of extreme and non-extreme scenarios requires a realistic modelling of the transitions between normal and extreme periods. In a univariate framework, 71 proposed a statistical model that can serve as a generator and that takes into account low, moderate and intense precipitation. In the same vein, 93 developed a bivariate model. However, its extension to a spatial framework remains a challenge. Existing spatial stochastic precipitation generators are generally based on Gaussian spatial processes 31, 68, which are not adapted to generating extreme rainfall events. Recent advances in spatio-temporal extremes modelling based on generalized Pareto processes 47, 89 and semi-parametric simulation techniques 36 are very promising and could form the basis for relevant developments in our framework.

The purpose is to develop stochastic methods for the simulation of realistic spatio-temporal processes integrating extreme events. Two steps are identified. The first concerns the simulation of extreme events; the second concerns the combination of extreme and non-extreme events in order to build complete, realistic precipitation time series. As far as the first step is concerned, an initial task is to understand and model the space-time structure of hydrological extremes such as those observed in the French Mediterranean basin, which is known for its intense rainfall events (Cevenol episodes) that have recently received increased attention. We will propose modelling approaches based on exceedances, which allow the simulated fields to be interpreted as events. Parametric, semi-parametric and non-parametric approaches are currently under consideration. They would make it possible to remove a number of scientific obstacles, for example accounting for the temporal dimension and for various dependence structures (asymptotic dependence or asymptotic independence, possibly depending on the dimension and/or the distance considered). Methodological aspects are detailed in Section 3.3.1. The second step, which is not straightforward, consists in combining different spatio-temporal simulations in order to ultimately develop a stochastic precipitation generator capable of producing full precipitation fields, including dry and non-extreme wet periods.

The Cerise (2016-2018) and Fraise (2019-2021) projects (see 9.3), led by Gwladys Toulemonde, are funded by the MANU action (MAthematical and NUmerical methods) of the CNRS LEFE program 6. Among other goals, they aim to propose methods for simulating scenarios integrating spatio-temporal extreme fields, with possible asymptotic independence, for impact studies in environmental sciences. Among the members of these projects, Jean-Noel Bacro (IMAG, UM), Carlo Gaetan (DAIS, Italy) and Thomas Opitz (BioSP, MIA, INRAE) are involved in the first step identified in the research objectives of the present sub-section. Denis Allard (BioSP, MIA, INRAE) and Philippe Naveau (CNRS, LSCE) will be involved in the second one.

Numerical modelling requires data acquisition at several steps such as model validation, parameter assessment and practical implementation for operational use. The following two paths for research are devoted to data exploitation for hydraulic models: from data assimilation for model calibration to heterogeneous data fusion for a better knowledge of geometries and forcings.

a. Parametrization of shallow water models with porosity

Model benchmarking against laboratory experiments is an essential step and an integral part of the team's strategy. However, scale model experiments have several drawbacks: (i) they are very expensive and extremely time-consuming, (ii) they cannot always be replicated, and measurements have precision and reliability limitations, (iii) dimensional similarity (in terms of geometry and characteristic flow variables such as the Froude or Reynolds numbers) cannot always be preserved.

An ideal way to obtain data would be to carry out in situ measurements, but this would be too costly at the scale of the studied systems, not to mention the fact that the field may become inaccessible during flood periods.

Geographical and remote sensing data are becoming widely available with high spatial and temporal resolutions. Several recent studies have shown that flood extents can be extracted from optical or radar images 53, for example: to characterize the flood dynamics of great rivers 72, to monitor temporary ponds 85, but also to calibrate hydrodynamics models and assess roughness parameters (e.g. 95).

Upscaled models developed in LEMON (see 3.1.1 and 3.2.1) embed new parameters that reflect the statistical properties of the medium geometry and the subgrid topography. New methods are thus to be developed to characterize such properties from remote sensing and geographical data.

This research line consists in deriving methods and algorithms for the determination of upscaled model parameters from geodata.

For applications in urban areas, it is intended to extract information on the porosity parameters from national geographical survey databases, which are widely available in developed countries. Such databases usually incorporate separate layers for roads, buildings, parking lots, yards, etc. Most of the information is stored in vector form, which can be expected to make the treatment of urban anisotropic properties easier than with a raster format. In developing countries, data is increasingly available thanks to crowdsourcing (e.g. OpenStreetMap), but the required level of detail is sometimes not available in vector format, especially in suburban areas, where lawns, parks and other vegetated areas, which may also contribute to flood propagation and storage, are not always mapped. In this context, the necessary information can be extracted from aerial and/or satellite images, which are widely available and whose spatial resolution improves constantly, using supervised classification approaches.

For applications in large rivers, the main objective is to develop an efficient framework for optimally integrating remote sensing-derived flood information to compensate for the lack of observations related to riverbed bathymetry and river discharge. The effective integration of such remote sensing-derived flood information into hydraulic models remains a critical issue. In partnership with R. Hostache (LIST), we will investigate new ways of making use of Satellite Earth Observation data (i.e. flooded areas and water level estimates derived from Synthetic Aperture Radar (SAR) data collections) for retrieving uncertain model parameters and boundary conditions. The method will be developed and validated using synthetically generated data sets as well as real-event data retrieved from the European Space Agency's archives. Extensive testing will be carried out on a number of high-magnitude events recorded over the Severn (United Kingdom) and Zambezi (Mozambique) floodplain areas.

In wetland applications, connectivity between different ponds is highly dependent on the free surface elevation, which conditions the presence of a flow. Characterizing such connectivity requires that topographical variations be known with high accuracy. Despite the increased availability of direct topographic measurements from LiDAR on riverine systems, data collection remains costly when wide areas are involved. Data acquisition may also be difficult in poorly accessible areas. If the number of topographic points is limited, information on elevation contour lines can easily be extracted from the flood dynamics visible in simple SAR or optical images. A challenge is thus to use such data to estimate a continuous topography of the floodplain, combining topographic sampling points with located contour lines whose levels are unknown or uncertain.
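As a toy illustration of combining sampling points with a located contour line (synthetic tilted-plane topography and hypothetical data; scipy's generic interpolator, not the team's method):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Hypothetical floodplain with tilted-plane topography z = 2x + y (meters)
samples = rng.uniform(0.0, 1.0, size=(40, 2))    # sparse topographic points
z_samples = 2.0 * samples[:, 0] + samples[:, 1]

# Contour line extracted from a flood image: its (x, y) trace is known,
# its level is uncertain -- here assumed to have been estimated at z = 1.5 m.
t = np.linspace(0.0, 1.0, 25)
contour = np.column_stack([(1.5 - t) / 2.0, t])  # trace of the z = 1.5 m line
z_contour = np.full(len(contour), 1.5)

# Merge both information sources and interpolate a continuous topography
xy = np.vstack([samples, contour])
z = np.concatenate([z_samples, z_contour])
z_mid = float(griddata(xy, z, np.array([[0.5, 0.5]]), method="linear")[0])
```

In a real application the contour level would itself be a parameter to estimate (or a random variable), rather than a fixed assumed value as in this sketch.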

b. Data fusion and completion

Even if a given hydrodynamic model is deemed to perform satisfactorily, this is far from sufficient for its practical application. Accurate information is required concerning the overall geometry of the area under study, and model parametrization is a necessary step towards operational use. Moreover, the considered flow models are deterministic and must be conditioned by forcings, such as rainfall events. When large areas are considered, data acquisition may turn out to be prohibitive in terms of cost and time, not to mention the fact that information is sometimes not accessible directly in the field. To give but one example, how can the roughness of an underground sewer pipe be measured? A strategy should be established to benefit from all possible sources of information in order to gather data into a geographical database, along with confidence indexes.

The assumption is made that even hardly accessible information often exists. This stems from the increasing availability of remote-sensing data and the crowd-sourcing of geographical databases, including the inexhaustible source of information provided by the Internet. However, information remains quite fragmented and stored in various formats: images, vector shapes, texts, etc.

This path of research began with the Cart'Eaux project (2015-2018), which aimed to produce a regular and complete mapping of urban wastewater systems. Contrary to drinking water networks, the location of sewer pipes is not straightforward to establish, even in developed countries. Over the past century, it was common practice for public service providers to install, operate and repair their networks separately 78. Local authorities are now confronted with the task of combining data produced by different parties, with distinct formats, variable precision and granularity 37.

The overall objective of this research line is to develop methodologies for gathering various types of data with the aim of producing an accurate mapping of the studied systems for hydrodynamic models.

Concerning wastewater networks, the methodology consists in inferring the shape of the network from a partial dataset of manhole covers that can be detected from aerial images 77, 38. Since manhole cover positions are expected to be known with low accuracy (positional uncertainty, detection errors), a stochastic algorithm is set up to provide a set of probable network geometries 35. As more information is required for hydraulic modelling than the simple mapping of the network (slopes, diameters, materials, etc.), text mining techniques such as those used in 65 are particularly interesting for extracting characteristics from data posted on the Web or available through governmental or specific databases. Using an appropriate keyword list, thematic entities are identified and linked to the surrounding spatial and temporal entities in order to ease the burden of data collection. It is clear at this stage that obtaining numerical values for specific pipes will be challenging. Thus, when no information is found, decision rules will be used to assign acceptable numerical values to enable the final hydraulic modelling.
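The idea of generating an ensemble of probable network geometries from uncertain manhole positions can be sketched as below (a hedged toy example: jittered positions connected by a minimum spanning tree as a shortest-total-pipe-length heuristic, not the actual algorithm of 35):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)

# Hypothetical detected manhole cover positions (meters), with positional uncertainty
detected = np.array([[0.0, 0.0], [50.0, 5.0], [100.0, 0.0], [55.0, 60.0], [105.0, 65.0]])
sigma = 2.0  # assumed positional standard deviation (meters)

def sample_network(positions, sigma, rng):
    """Draw one probable network geometry: jitter the detected positions, then
    connect them with a minimum spanning tree over pairwise distances."""
    jittered = positions + rng.normal(0.0, sigma, positions.shape)
    mst = minimum_spanning_tree(squareform(pdist(jittered)))
    edges = np.argwhere(mst.toarray() > 0.0)  # pipe segments as pairs of node indices
    return jittered, edges

# An ensemble of probable geometries, to be fed to downstream hydraulic simulations
ensemble = [sample_network(detected, sigma, rng) for _ in range(100)]
```

Each member of the ensemble can then be run through the hydraulic model, yielding results with an associated uncertainty, as discussed below.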

Concerning rain inputs, instrument networks are often sparse and do not match the required spatio-temporal resolutions (on the order of a kilometre and an hour, or even 100 meters and a few minutes in urban environments). To overcome this problem, LEMON contributes to the establishment of an urban observatory. This observatory creates a dense network of about twenty rain gauges on the Triolet campus of the University of Montpellier and surrounding areas, at extremely fine spatial and temporal resolutions never achieved in the region until now. Other data sources can be used to complement the information provided by the observation network. Combining these heterogeneous, multi-source data with different spatial and temporal resolutions is a difficult task, rain being one of the most complex climatic processes due to (i) its binary nature (presence/absence), (ii) the importance of the aggregation of strong values over space and time (floods), and (iii) the strong variations that can appear even at very small spatial and temporal scales. This data aggregation step is nevertheless essential because the quality and resolution of the data will determine the quality of the synthetic rainfall that can be simulated.

In any case, the confidence associated with each piece of data, be it directly measured or obtained indirectly, should be assessed and taken into account in the modelling process. This can be done by generating a set of probable inputs (geometry, boundary conditions, forcing, etc.), yielding simulation results along with the associated uncertainty.

Combining heterogeneous data for a better knowledge of the studied systems raises the question of data fusion. What is the reality when contradictory information is collected from different sources? When dealing with spatial information, offsets between different geographical data layers are quite frequent; pattern comparison approaches should be developed to judge whether two pieces of information represented by two nearby elements are in reality identical, complementary, or contradictory.

Porosity-based shallow water models are governed by hyperbolic systems of conservation laws. The most widespread method used to solve such systems is the finite volume approach. The fluxes are computed by solving Riemann problems at the cell interfaces. This requires that the wave propagation properties stemming from the governing equations be known with sufficient accuracy. Most porosity models, however, are governed by non-standard hyperbolic systems.

Firstly, the most recently developed Dual Integral Porosity (DIP) models include a momentum source term involving the divergence of the momentum fluxes 60. This source term is not active in all situations but takes effect only when positive waves are involved 57, 58. The consequence is a discontinuous flux tensor and discontinuous wave propagation properties. The implications for the existence and uniqueness of solutions to initial value problems (especially the Riemann problem) are not known, nor are the consequences for the accuracy of the numerical methods used to solve this new type of equations.

Secondly, most applications of these models involve anisotropic porosity fields 66, 80. Such anisotropy can be modelled using

Thirdly, the Riemann-based finite volume solution of the governing equations requires that the Riemann problem be solved in the presence of a porosity discontinuity. While recent work 46 has addressed this issue for the single porosity equations, similar work remains to be done for integral- and multiple-porosity-based models.
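As background, the classical HLL interface flux on which such Riemann-based finite volume solvers build can be sketched for the plain 1D shallow water equations (a minimal illustrative version without porosity terms, not the DIP solver itself):

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def hll_flux(hL, uL, hR, uR):
    """HLL approximate Riemann flux for the 1D shallow water equations.
    Conserved state: (h, hu); physical flux: (hu, h u^2 + g h^2 / 2)."""
    cL, cR = math.sqrt(G * hL), math.sqrt(G * hR)  # gravity wave celerities
    sL = min(uL - cL, uR - cR)  # left wave speed estimate (Davis)
    sR = max(uL + cL, uR + cR)  # right wave speed estimate
    fL = (hL * uL, hL * uL**2 + 0.5 * G * hL**2)
    fR = (hR * uR, hR * uR**2 + 0.5 * G * hR**2)
    if sL >= 0:   # supercritical flow to the right
        return fL
    if sR <= 0:   # supercritical flow to the left
        return fR
    qL, qR = (hL, hL * uL), (hR, hR * uR)
    return tuple((sR * fl - sL * fr + sL * sR * (qr - ql)) / (sR - sL)
                 for fl, fr, ql, qr in zip(fL, fR, qL, qR))
```

For a still water interface the flux reduces to a zero mass flux and the hydrostatic momentum flux g h²/2; the research questions above concern how such solvers must be modified when the flux tensor is discontinuous or a porosity jump sits at the interface.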

The four year research objectives are the following:

Owing to the limited staff of the LEMON team, external collaborations will be sought with researchers in applied mathematics. Examples of researchers working in the field are

The protection of coastal areas around the world has become an important issue of concern, including within the scientific community. The coastline is defined as the physical separation between the sea or ocean on the one hand and the inland on the other, but these two worlds are in fact intertwined, which contributes to the difficulty of their modelling, both from a physical and statistical point of view.

Wave propagation models in the nearshore zone have evolved significantly over the last 15 years, with contributions that increasingly take into account effects related to variations of bathymetry, hence the non-hydrostatic nature of the flow. These models, very specific to the coastal zone, must be able to be coupled (together and with external models) so as to allow wave propagation numerical models to be integrated into numerical forecasting platforms, both in oceanography and in flood risk management.

Due to climate change and rising sea levels, more and more cities are facing the risk of flooding. Whether they are in coastal areas or near rivers, these areas, which are inherently highly artificial and therefore poorly resistant to rising water levels, require different types of numerical models for flood risk: accurate (and potentially costly) models for land use planning, but also fast models, which can be run in real time, for crisis management.

Modelling and risk assessment are at the heart of environmental science. Whether the events considered are of natural or anthropogenic origin, their economic, ecological or human impacts are too important to be neglected. By definition, the more extreme an event is, the lower its frequency of occurrence and therefore the less data available to characterize it. Hence the importance of using statistical tools dedicated to the modelling of extreme events, in order to provide risk management tools that are as safe and effective as possible.
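As a concrete example of such a tool, the T-year return level of a Generalized Extreme Value (GEV) distribution fitted to annual maxima can be computed with the standard textbook formula (sketched here for illustration; parameter names are the usual location, scale and shape):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """Return level z_T exceeded on average once every T years by the
    annual maximum, under a GEV(mu, sigma, xi) distribution:
    z_T = mu + sigma/xi * (y^(-xi) - 1), with y = -log(1 - 1/T)."""
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:                 # Gumbel limit (xi -> 0)
        return mu - sigma * math.log(y)
    return mu + sigma / xi * (y**(-xi) - 1.0)
```

The heavier the tail (larger xi), the faster the return level grows with T, which is precisely why long return periods are so sensitive to the scarce extreme observations.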

As for all Inria teams, the many calculations we perform (on our personal computers or on dedicated clusters) do have an environmental cost. This cost is linked both to the resources needed to manufacture the machines we use, and to the energy consumed to run them.

LEMON members are aware of the climate emergency and participate in actions on this subject. For example, Pascal Finaud-Guyot is involved in the "sustainable development and social responsibility" working group at Polytech Montpellier and in the "energy footprint reduction" working group at HSM with Carole Delenne.

Our research activities have a twofold impact in terms of environmental responsibility:

TsunamiLab is an interactive tsunami simulation and visualization platform that teaches and raises awareness about tsunamis through interactive experiences. It allows science communicators, teachers, students and science enthusiasts to create virtual tsunamis or recreate historical tsunamis, and study their features in various digital and augmented reality formats.

TsunamiLab-Pool: Using cameras and projectors, the "pool" format allows children and adults to interact with their own hands, gathered around the circular screen. This allows the instructor to teach and engage several children simultaneously, in a way that is entertaining for all.

Web Platform: The platform's website allows anyone to simulate historical tsunamis, observe how they propagated in the ocean, and test what would have happened if they had been of greater or lesser magnitude.

Hologram: Through a prism, a holographic image makes it possible to observe the impact in different parts of the world at the same time.

Large Touch Screen: Support for large touch screens allows teachers to observe and explain phenomena in an engaging way in front of a group of students.

We use the Richelieu district of the city of Nîmes, in the south of France, to illustrate our work, taking the 1988 event as a reference. We couple hydraulic and economic models to simulate water depths following i) a classical approach of building treatment and ii) an alternative approach explicitly taking into account street-building flow exchanges. The simulated water depths are then fed to the economic model, allowing us to determine, by comparing scenarios, the possible bias and its magnitude. Preliminary results show significant differences in water levels inside buildings compared to outside. In terms of damage at the district level, not taking street-building flows into account leads to an overestimation of material damage. These results invite us to carry out complementary analyses at higher resolutions, considering the dynamics of the flows inside the buildings and their repercussions in terms of material damage but also of danger.

Simulated free surface transients in periodic urban layouts have been reported to be self-similar in the space-time domain when averaged over the scale of the building period. Such self-similarity is incompatible with the head loss formulae used in most porosity-based shallow water models; verifying it experimentally is thus of salient importance. In 6, new dam-break flow laboratory experiments are reported, in which two different configurations of idealized periodic building layouts are explored. A space-time analysis of the experimental water level fields validates the self-similar character of the flow. Simulating the experiment with a two-dimensional shallow water model also yields self-similar period-averaged flow solutions. Then, the Single Porosity (SP), Integral Porosity (IP) and Dual Integral Porosity (DIP) models are applied. Although all three models behave similarly when the storage and connectivity porosities are close to each other, the DIP model upscales the refined 2D solution best.

During K. Bakong's internship, supported by IRT Saint-Exupéry, we considered downscaling algorithms for shallow water flows based on artificial intelligence techniques. Neural networks and boosted trees are used to simulate high resolution flow variables from low resolution inputs. Various numerical configurations are addressed, with or without principal component analysis to reduce the computational cost of the training and forecasting steps.

This work has been published in 2.
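The PCA-based reduction step can be sketched as follows (a minimal linear stand-in: an ordinary least squares map plays the role of the neural network or boosted trees, and all names are illustrative assumptions):

```python
import numpy as np

def fit_pca_downscaler(X_lo, Y_hi, n_components=3):
    """Sketch of PCA-accelerated downscaling: project the high resolution
    flow fields onto their leading principal components, then learn a map
    from low resolution inputs to the reduced coefficients.
    X_lo: (n_samples, n_lo) inputs; Y_hi: (n_samples, n_hi) targets."""
    mean = Y_hi.mean(axis=0)
    # PCA of the high resolution fields via SVD of the centred data
    _, _, Vt = np.linalg.svd(Y_hi - mean, full_matrices=False)
    comps = Vt[:n_components]                 # principal directions
    coeffs = (Y_hi - mean) @ comps.T          # reduced-order targets
    # Least squares map from inputs to PCA coefficients (a neural network
    # or boosted trees would replace this step in practice)
    W, *_ = np.linalg.lstsq(X_lo, coeffs, rcond=None)
    def predict(x_lo):
        return x_lo @ W @ comps + mean        # back to full resolution
    return predict
```

Training and forecasting then operate on `n_components` coefficients instead of the full high resolution grid, which is where the computational savings come from.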

In 1, we propose and evaluate a modelling framework based on the 2D shallow water model with depth-dependent porosity, in which floodplain and riverbed topography are represented through porosity functions. To enable a careful and meaningful evaluation of the model, we set up a classical 2D model and use it as a benchmark. We also exploit ground truth data and flood inundation maps derived from remote sensing to evaluate the proposed framework, using the 2007 and 2012 flood events of the river Severn as test cases. Our empirical results demonstrate the high performance and low computational cost of the proposed model for fast flood simulations at large scale.

In 7 we focus on solute transport behaviour in a Model Heterogeneous Porous Medium (MHPM) under different flow rates. We report tracer experiments under stationary hydraulic conditions, with 7 different stationary flow rates spanning two orders of magnitude. Several replicates are carried out on several MHPMs, allowing for a sound statistical assessment of experimental imprecision. The experimental BreakThrough Curves (BTCs) exhibit a dual transport mode in agreement with previously reported field scale experiments. This dual transport mode is shown to be flow-rate independent under a suitable change of variable, with the BTCs superposing within the limits of experimental uncertainty. The experiment is modelled using a classical Multi-Region Advection-Dispersion (MRAD) model with only two mobile regions. We present a flow-rate-independent reformulation of the MRAD model that allows both water and solute continuity to be preserved during the calibration process. Assuming a linear dependence of the dispersion and exchange coefficients on the flow rate is shown to yield satisfactory model behaviour. This confirms the linearity of the dispersion coefficient with respect to the flow rate, often suggested in the literature, over a wide range of flow conditions.

A typical karst aquifer configuration is the multiple conduit structure. However, how the aperture distributions and the flow rate influence the transport process in such structures remains to be investigated. To better understand this process, 11 lab-scale dual-conduit structures are manufactured by varying the apertures of the two conduits (h1 and h2 denote the aperture widths of the shorter and the longer conduit, respectively). Solute transport experiments at three different flow rates are conducted on these structures. As the flow rate increases, the dual-conduit structures are more likely to present dual-peaked BTCs. The 11 structures make an exhaustive representation of the possible aperture combinations of the dual-conduit structures, and the transport experiments cover three flow rates varying by two orders of magnitude, so the experimental results constitute detailed material that should improve the understanding of transport processes in such structures. In 8, two numerical models, the Weighted Sum Advection-Dispersion Equation (WSADE) and the Dual Region Mobile Immobile Model (DRMIM), are applied to fit the experimental BTCs in order to gain insight into the actual solute transport processes by exploring the calibrated model parameters. Considering the possible effect of solute detention, we initially applied the DRMIM model, which replicated the experimental BTCs better than the WSADE. This study suggests that the karst community should consider the DRMIM as a candidate transport model for characterizing the dual-peaked BTCs obtained in karst aquifers.
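A WSADE-type breakthrough curve can be sketched as a weighted sum of one-dimensional advection-dispersion pulse solutions, one per conduit (an illustrative minimal form assuming unit-mass instantaneous injection in a semi-infinite column; parameter names are assumptions):

```python
import math

def wsade_btc(t, x, fractions, velocities, dispersions):
    """Weighted Sum Advection-Dispersion Equation (WSADE) sketch: the
    breakthrough curve at distance x is a weighted sum of 1D ADE pulse
    solutions (inverse Gaussian first-passage form), one per conduit."""
    c = 0.0
    for w, v, d in zip(fractions, velocities, dispersions):
        c += w * x / (2.0 * math.sqrt(math.pi * d * t**3)) \
               * math.exp(-(x - v * t)**2 / (4.0 * d * t))
    return c
```

With two conduits of sufficiently different velocities, the two terms separate in time and the curve becomes dual-peaked, which is the behaviour observed experimentally.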

The modelling of dependence between maxima is an important subject in several applications in risk analysis. To this aim, the extreme value copula function, characterised via the madogram, can be used as a margin-free description of the dependence structure. From a practical point of view, the family of extreme value distributions is very rich and arises naturally as the limiting distribution of properly normalised component-wise maxima. We investigate the nonparametric estimation of the madogram when data are missing completely at random. We provide a functional central limit theorem for the correctly normalised multivariate madogram, towards a tight Gaussian process whose covariance function depends on the probabilities of missingness. An explicit formula for the asymptotic variance is also given. Our results are illustrated in a finite sample setting with a simulation study. This work is published in 4 and presented in 9.
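The complete-data version of the estimator can be sketched as follows (a minimal bivariate version using empirical ranks; the published estimator additionally accounts for the missingness probabilities):

```python
import numpy as np

def empirical_madogram(x, y):
    """Nonparametric F-madogram estimate nu = 0.5 * E|F(X) - G(Y)|,
    using empirical ranks as margin-free pseudo-observations. The pairwise
    extremal coefficient is then theta = (1 + 2 nu) / (1 - 2 nu),
    with theta = 1 for complete dependence and theta = 2 for independence."""
    n = len(x)
    # rank-based empirical CDF transforms (pseudo-observations in (0, 1))
    u = (np.argsort(np.argsort(x)) + 1) / (n + 1)
    v = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    nu = 0.5 * np.mean(np.abs(u - v))
    theta = (1 + 2 * nu) / (1 - 2 * nu)
    return nu, theta
```

Because the estimator only uses ranks, it is invariant to the marginal distributions, which is what "margin-free" means above.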

The concept of sparse regular variation introduced in 24 allows one to infer the tail dependence of a random vector X. This approach relies on the Euclidean projection onto the simplex, which exhibits the sparsity structure of the tail of the random vector better than the standard methods. We develop a rigorous procedure aiming at capturing clusters of extremal coordinates of this vector. It also includes the identification of the threshold above which the values taken by X are considered extreme. We provide an efficient and scalable algorithm called MUSCLE, which is applied to numerical examples and real-world data, such as wind speed data.
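The Euclidean projection onto the simplex at the heart of this approach can be sketched with the classical sorting-based algorithm (illustrative only, not the MUSCLE implementation itself):

```python
import numpy as np

def project_simplex(v, z=1.0):
    """Euclidean projection of v onto the simplex {w : w >= 0, sum w = z}.
    The projection sets small coordinates exactly to zero, which is what
    exposes the sparsity structure of the tail."""
    u = np.sort(v)[::-1]                       # coordinates in decreasing order
    css = np.cumsum(u)
    # largest index rho such that u_rho stays positive after the shift
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - z))[0][-1]
    theta = (css[rho] - z) / (rho + 1)         # common shift
    return np.maximum(v - theta, 0.0)
```

Unlike a simple normalisation by the sum, this projection produces exact zeros, so the support of the projected vector directly identifies the cluster of extremal coordinates.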

Count data are omnipresent in many applied fields, often with overdispersion due to an excess of zeroes or extreme values. With mixtures of Poisson distributions representing an elegant and appealing modelling strategy, we focus in 25 on the challenging problem of identifying a suitable mixing distribution and study how extreme value theory can be used. We propose an original strategy to select the most appropriate candidate among three categories: Fréchet, Gumbel and pseudo-Gumbel. Such an approach is presented with the aid of a decision tree and evaluated with numerical simulations.

A flexible multivariate model for threshold exceedances is defined based on componentwise ratios between any two independent random vectors with exponential and Gamma marginal distributions. This construction allows flexibility in terms of extremal bivariate dependence: asymptotic dependence and independence are both possible, as well as hybrid situations. Two useful parametric model classes are presented. One of the two, based on Gamma convolution models, is illustrated through a simulation study. Good performance is shown for likelihood-based estimation of summaries of bivariate extremal dependence in several scenarios. This work has been presented in 15 and is submitted in 19.

In a recently submitted paper 22, we derive transmission operators for coupling the linear Green-Naghdi equations (LGNE) with the linear shallow water equations (LSWE) - the heterogeneous case - or for coupling LGNE with LGNE - the homogeneous case. We derive them from a domain decomposition method (Neumann-Dirichlet) for the linear Euler equations by applying the same vertical averaging process and truncation of the asymptotic expansion of the velocity field used in the derivation of the equations. We find that the new asymptotic transmission conditions also correspond to Neumann and Dirichlet operators. In the homogeneous case the method has the same convergence condition as the parent domain decomposition method, but leads to a solution that differs from the monodomain solution by a term of order one. In the heterogeneous case the Neumann-Dirichlet operators translate into a simple interpolation across the interface, with an extra term of order two in space. We show numerically that in this case the method introduces oscillations whose amplitude grows as the mesh is refined, thus leading to an unstable scheme.

This line of research was initiated several years ago in collaboration with J. S. Bailly (LISAH). We recently came up with a methodology developed on synthetic data and published in Geoscience and Remote Sensing Letters 5. For several decades it has been possible to delineate waterbodies and their dynamics from optical or radar images, which are now available at high spatial and temporal resolutions. We present an interpolation approach that takes advantage of this waterbody delineation, which, in endorheic areas, consists of isovalue contour lines, to improve the topography estimation classically obtained from measurement points only. The approach, based on a minimisation problem, uses Thin Plate Spline interpolation functions, whose coefficients are determined along with the unknown water level of each curve. Results obtained on a generated topography show that this approach, applied with three contour-line curves and a single measurement point, yields a lower root mean square error than the classical approach with nine points. The real-case application to the Rascaillan pond was made possible by the collaboration with Olivier Boutron from Tour du Valat and Renaud Hostache from LIST. It shows that the approach can easily be implemented using widely available data such as Sentinel images, yielding a large decrease of the root mean square error between the interpolated topography and the reference Lidar acquisition, even using only two satellite images.
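The thin plate spline interpolation core can be sketched as follows (a minimal version on elevation points only; the published method additionally introduces the unknown water level of each contour line into the minimisation problem):

```python
import numpy as np

def tps_fit_predict(xy, z, xy_new):
    """Minimal 2D thin plate spline interpolator, kernel phi(r) = r^2 log r.
    Solves the standard augmented linear system [K P; P^T 0][w; a] = [z; 0]
    so that the spline interpolates z at the points xy exactly and
    reproduces affine surfaces."""
    def phi(r):
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r > 0, r**2 * np.log(r), 0.0)
    n = len(xy)
    K = phi(np.linalg.norm(xy[:, None] - xy[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), xy])          # affine part: 1, x, y
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    sol = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    w, a = sol[:n], sol[n:]
    Knew = phi(np.linalg.norm(xy_new[:, None] - xy[None, :], axis=-1))
    return Knew @ w + a[0] + xy_new @ a[1:]
```

In the published approach, the right-hand side values along each delineated contour are a single unknown water level, determined jointly with the spline coefficients in the least squares problem.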

In 3 we propose a generic data model for the fusion of sewerage network data. Our meta-model supports imperfection modelling at the data-source level as well as at the network object position and attribute levels, thus allowing formal fusion operations to be conducted efficiently and reliably. To validate our meta-model, we implemented it using the data analysis and reengineering platform Moose, and we conducted a test on the town of Prades-le-Lez (France). We took into account three data sources providing information on the node positions of the sewerage network: i) the official network map as a semi-structured source; ii) a high resolution aerial image database and iii) a Google Street View database as unstructured sources. As a result, we were able to reliably perform data monitoring and visualization requests on real heterogeneous multi-source data related to a specific sewerage network.

In 23, a large CFL algorithm is presented for the explicit, finite volume solution of hyperbolic systems of conservation laws. The Riemann problems used in the flux computation are determined using averaging kernels that extend over several computational cells. The usual Courant-Friedrichs-Lewy stability constraint is replaced with a constraint involving the kernel support size. This makes the method unconditionally stable with respect to the size of the computational cells, allowing the computational mesh to be refined locally to an arbitrary degree without altering solution stability. The practical implementation of the method is detailed for the shallow water equations with topographical source term. Computational examples report applications of the method to the linear advection, Burgers and shallow water equations. In the case of sharp bottom discontinuities, the need for improved, well-balanced discretizations of the geometric source term is acknowledged.

The research collaboration convention, signed with Berger-Levrault company in the framework of Yassine Belghaddar thesis (CIFRE ANRT France/Maroc), has ended in December 2022.

The research collaboration convention, signed with Luxembourg Institute of Science and Technology (LIST) in the framework of Vita Ayoub thesis (CASCADE Project), has ended in November 2022.

Pascal Finaud-Guyot and Antoine Rousseau are members of ANR MUFFINS - MUltiscale Flood Forecasting with INnovating Solutions - led by Pierre-André Garambois (INRAe), including the following partners: IMT, Univ Eiffel, Cerema, IMFT, CCR, Météo/SPCME, SCHAPI.

The objective of the MUFFINS project is to develop new, accurate and computationally efficient flood forecasting approaches, enabling information transfer between models (meteorology-hydrology-hydraulics-damage) and scales (from local runoff generation over areas smaller than 1 km² to flood propagation over catchments of thousands of km²), and taking advantage of innovative data (in situ, remote observation, opportunistic) to reduce forecast uncertainties.

Five UM-affiliated members of LEMON are academics, for a total teaching load of approximately 1000 hrs/year.

Moreover, these members undertook significant administrative duties (approx. 1000 hrs) in 2021: