The general scope of the AIRSEA project-team is to develop *mathematical and computational methods for the modeling of oceanic and atmospheric flows*.
The mathematical tools used involve both *deterministic and statistical approaches*. The main research topics cover: (a) modeling and coupling; (b) model reduction for sensitivity analysis, coupling and multiscale optimization; (c) sensitivity analysis, parameter estimation and risk assessment; and (d) algorithms for high performance computing. The range of applications extends from climate modeling to the prediction of extreme events.

Recent events have raised questions regarding the social and economic implications of anthropic alterations of the Earth system, i.e. climate change and the associated risks of increasing extreme events. Ocean and atmosphere, coupled with other components (continent and ice) are the building blocks of the Earth system. A better understanding of the ocean atmosphere system is a key ingredient for improving prediction of such events. Numerical models are essential tools to understand processes, and simulate and forecast events at various space and time scales. Geophysical flows generally have a number of characteristics that make it difficult to model them. This justifies the development of specifically adapted mathematical methods:

Geophysical flows are strongly non-linear. Therefore, they exhibit interactions between different scales, and unresolved small scales (smaller than mesh size) of the flows have to be **parameterized** in the equations.

Geophysical fluids are non-closed systems: they are open-ended in their scope for including and dynamically coupling different physical processes (e.g., atmosphere, ocean, continental water). **Coupling** algorithms are thus of primary importance to account for potentially significant feedbacks.

Numerical models contain parameters which cannot be estimated accurately either because they are difficult to measure or because they represent some poorly known subgrid phenomena. There is thus a need for **dealing with uncertainties**. This is further complicated by the turbulent nature of geophysical fluids.

The computational cost of geophysical flow simulations is huge, thus requiring the use of **reduced models, multiscale methods** and the design of algorithms ready for **high performance computing** platforms.

Our scientific objectives are divided into four major points. The first objective focuses on developing advanced mathematical methods for both the ocean and atmosphere, and the coupling of these two components. The second objective is to investigate the derivation and use of model reduction to face problems associated with the numerical cost of our applications. The third objective is directed toward the management of uncertainty in numerical simulations. The last objective deals with efficient numerical algorithms for new computing platforms. As mentioned above, the targeted applications cover oceanic and atmospheric modeling and related extreme events using a hierarchy of models of increasing complexity.

Current numerical oceanic and atmospheric models suffer from a number of well-identified problems. These problems are mainly related to lack of horizontal and vertical resolution, thus requiring the parameterization of unresolved (subgrid scale) processes and control of discretization errors in order to fulfill criteria related to the particular underlying physics of rotating and strongly stratified flows. Oceanic and atmospheric coupled models are increasingly used in a wide range of applications from global to regional scales. Assessment of the reliability of those coupled models is an emerging topic as the spread among the solutions of existing models (e.g., for climate change predictions) has not been reduced with the new generation models when compared to the older ones.

**Advanced methods for modeling 3D rotating and stratified flows**
The continuous increase in computational power and the resulting finer grid resolutions have triggered renewed interest in numerical methods and their relation to physical processes. Going beyond present knowledge requires a better understanding of numerical dispersion/dissipation ranges and their connection to model fine scales. Removing the leading-order truncation error of numerical schemes is thus an active topic of research, and each mathematical tool has to be adapted to the characteristics of three-dimensional stratified and rotating flows. Studying the link between discretization errors and subgrid-scale parameterizations is also arguably one of the main challenges.

Complexity of the geometry, boundary layers, strong stratification and lack of resolution are the main sources of discretization errors in the numerical simulation of geophysical flows. This emphasizes the importance of the definition of the computational grids (and coordinate systems) in both the horizontal and vertical directions, and the necessity of truly multi-resolution approaches. At the same time, the role of small-scale dynamics on the large-scale circulation has to be taken into account. Such parameterizations may be of deterministic as well as stochastic nature, and both approaches are taken by the AIRSEA team. The design of numerical schemes consistent with the parameterizations is also arguably one of the main challenges for the coming years. This work is complementary to, and linked with, the work on parameter estimation described in .

**Ocean Atmosphere interactions and formulation of coupled models**
State-of-the-art climate models (CMs) are complex systems under continuous development. A fundamental aspect of climate modeling is the representation of air-sea interactions. This covers a large range of issues: parameterizations of atmospheric and oceanic boundary layers, estimation of air-sea fluxes, time-space numerical schemes, non-conforming grids, coupling algorithms, etc. Many developments related to these different aspects have been made over the last 10-15 years, but they were in general conducted independently of each other.

The aim of our work is to revisit and enrich several aspects of the representation of air-sea interactions in CMs, paying special attention to their overall consistency with appropriate mathematical tools. We intend to work consistently on the physics and numerics. Using the theoretical framework of global-in-time Schwarz methods, our aim is to analyze the mathematical formulation of the parameterizations in a coupling perspective. From this study, we expect improved predictability in coupled models (this aspect will be studied using techniques described in ). Complementary work on space-time nonconformities and acceleration of convergence of Schwarz-like iterative methods (see ) are also conducted.
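The global-in-time Schwarz idea can be sketched on a toy problem: two linearly coupled "media" are each integrated over the whole time window, exchanging the other component from the previous iterate until the iteration converges to the monolithic solution. The two-equation system, coefficients and backward-Euler discretization below are illustrative choices, not the team's actual coupled models.

```python
import numpy as np

def schwarz_waveform_relaxation(a=1.0, b=2.0, c=0.5, T=1.0, n=200, n_iter=20):
    """Global-in-time Schwarz (waveform relaxation) for the coupled system
        u' = -a*u + c*v,   v' = -b*v + c*u,   u(0) = v(0) = 1.
    Each 'model' is integrated over the whole time window by backward Euler,
    using the other component taken from the previous iterate."""
    dt = T / n
    u, v = np.ones(n + 1), np.ones(n + 1)        # first guess of the iterates
    for _ in range(n_iter):
        u_new, v_new = np.empty(n + 1), np.empty(n + 1)
        u_new[0] = v_new[0] = 1.0
        for k in range(n):                       # partner values are frozen
            u_new[k + 1] = (u_new[k] + dt * c * v[k + 1]) / (1.0 + dt * a)
            v_new[k + 1] = (v_new[k] + dt * c * u[k + 1]) / (1.0 + dt * b)
        u, v = u_new, v_new
    return u, v

u, v = schwarz_waveform_relaxation()
```

At convergence the iterates satisfy the monolithic backward-Euler scheme for the coupled system, which is the sense in which iterating the coupling removes splitting errors.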

The high computational cost of the applications is a common and major concern to have in mind when deriving new methodological approaches. This cost increases dramatically with the use of sensitivity analysis or parameter estimation methods, and more generally with methods that require a potentially large number of model integrations.

A dimension reduction, using either stochastic or deterministic methods, is a way to significantly reduce the number of degrees of freedom, and therefore the computation time, of a numerical model.

**Model reduction**
Reduction methods can be deterministic (proper orthogonal decomposition, other reduced bases) or stochastic (polynomial chaos, Gaussian processes, kriging), and both fields of research are very active. Choosing one method over another strongly depends on the targeted application, which can be as varied as real-time computation, sensitivity analysis (see e.g., section ) or optimisation for parameter estimation (see below).

Our goals are multiple, but they share a common need for certified error bounds on the output. Our team has a 4-year history of working on certified reduction methods and has a unique positioning at the interface between deterministic and stochastic approaches. Thus, it seems interesting to conduct a thorough comparison of the two alternatives in the context of sensitivity analysis. Efforts will also be directed toward the development of efficient greedy algorithms for the reduction, and the derivation of goal-oriented sharp error bounds for non linear models and/or non linear outputs of interest. This will be complementary to our work on the deterministic reduction of parametrized viscous Burgers and Shallow Water equations where the objective is to obtain sharp error bounds to provide confidence intervals for the estimation of sensitivity indices.
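As a concrete instance of the deterministic side, a proper orthogonal decomposition can be sketched in a few lines: the SVD of a snapshot matrix yields an orthonormal reduced basis with a computable energy-based truncation criterion. The travelling-Gaussian snapshots below are an illustrative toy dataset, not the parametrized Burgers or shallow-water setting mentioned above.

```python
import numpy as np

def pod_basis(snapshots, tol=1e-6):
    """Proper orthogonal decomposition of a snapshot matrix (space x time):
    returns the leading left singular vectors capturing a (1 - tol)
    fraction of the energy, together with their singular values."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol)) + 1
    return U[:, :r], s[:r]

# Toy snapshot set: a travelling Gaussian sampled at 20 instants
x = np.linspace(0.0, 1.0, 200)
snaps = np.stack([np.exp(-100.0 * (x - 0.3 - 0.02 * k) ** 2) for k in range(20)], axis=1)
modes, sv = pod_basis(snaps)

# Relative reconstruction error of the orthogonal projection onto the basis
recon = modes @ (modes.T @ snaps)
err = np.linalg.norm(snaps - recon) / np.linalg.norm(snaps)
```

The discarded singular values bound the projection error, which is the starting point for the certified error bounds discussed above.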

**Reduced models for coupling applications**
Global and regional high-resolution oceanic models are either coupled to an atmospheric model
or forced at the air-sea interface by empirically computed fluxes, preventing proper physical
feedback between the two media. Thanks to high-resolution observational studies, the existence of air-sea
interactions at oceanic mesoscales (i.e., at

Multiphysics coupling often requires iterative methods to obtain a mathematically correct numerical solution. To mitigate the cost of the iterations, we will investigate the possibility of using reduced-order models for the iterative process. We will consider different ways of deriving a reduced model: coarsening of the resolution, degradation of the physics and/or numerical schemes, or simplification of the governing equations. At a mathematical level, we will strive to study the well-posedness and the convergence properties when reduced models are used. Indeed, running an atmospheric model at the same resolution as the ocean model is generally too expensive to be manageable, even for moderate resolution applications. To account for important fine-scale interactions in the computation of the air-sea boundary condition, the objective is to derive a simplified boundary layer model that is able to represent important 3D turbulent features in the marine atmospheric boundary layer.

**Reduced models for multiscale optimization**
The field of multigrid methods for optimisation has seen tremendous development over the past few decades. However, apart from some crude (non-converging) approximations or applications to simplified and low-dimensional models, it has not been applied to oceanic and atmospheric problems. This is mainly due to the high complexity of such models and to the difficulty of handling several grids at the same time. Moreover, due to complex boundaries and physical phenomena, the grid interactions and transfer operators are not trivial to define.

Multigrid solvers (or multigrid preconditioners) are efficient methods for the solution of variational data assimilation problems. We would like to take advantage of these methods to tackle the optimization problem in high-dimensional spaces. A high-dimensional control space arises when dealing with the estimation of parameter fields, or with control of the full 4D (space-time) trajectory. The latter is important since it enables model errors to be taken into account. In that case, multigrid methods can be used to solve the large scales of the problem at a lower cost, potentially coupled with a scale decomposition of the variables themselves.
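The "large scales at a lower cost" idea can be sketched in the simplest setting, where the variational problem is a quadratic cost (equivalently a linear system): a two-grid cycle combines cheap fine-grid smoothing with an exact coarse-grid solve. The 1D Poisson operator, damped-Jacobi smoother and linear interpolation below are generic textbook choices, not the team's assimilation system.

```python
import numpy as np

def interpolation(n_coarse):
    """Linear interpolation from n_coarse interior coarse points to the
    fine grid with 2*n_coarse + 1 interior points (1D, zero boundaries)."""
    n_fine = 2 * n_coarse + 1
    P = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        i = 2 * j + 1                 # fine index of coarse point j
        P[i, j] = 1.0
        P[i - 1, j] += 0.5
        P[i + 1, j] += 0.5
    return P

def two_grid(A, b, n_cycles=20, nu=3):
    """Two-grid scheme for min_x 0.5*x^T A x - b^T x (i.e. A x = b):
    damped Jacobi smoothing treats the fine scales, while an exact
    coarse-grid solve treats the large scales at a reduced cost."""
    n = len(b)
    P = interpolation((n - 1) // 2)
    R = 0.5 * P.T                     # full-weighting restriction
    Ac = R @ A @ P                    # Galerkin coarse-grid operator
    x, D = np.zeros(n), np.diag(A)
    for _ in range(n_cycles):
        for _ in range(nu):           # pre-smoothing
            x += 0.8 * (b - A @ x) / D
        x += P @ np.linalg.solve(Ac, R @ (b - A @ x))   # coarse correction
        for _ in range(nu):           # post-smoothing
            x += 0.8 * (b - A @ x) / D
    return x

# 1D Poisson problem -u'' = 1 on (0, 1), 31 interior points
n = 31
h2 = 1.0 / (n + 1) ** 2
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h2
b = np.ones(n)
x = two_grid(A, b)
```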

There are many sources of uncertainties in numerical models. They are due to imperfect external forcing, poorly known parameters, missing physics and discretization errors. Studying these uncertainties and their impact on the simulations is a challenge, mostly because of the high dimensionality and non-linear nature of the systems. To deal with these uncertainties we work on three axes of research, which are linked: sensitivity analysis, parameter estimation and risk assessment. They are based on either stochastic or deterministic methods.

**Sensitivity analysis**
Sensitivity analysis (SA), which links uncertainty in the model inputs to uncertainty in the model outputs, is a powerful tool for model design and validation. First, it can be a pre-stage of parameter estimation (see ), allowing for the selection of the most significant parameters. Second, SA permits the understanding and quantification of (possibly non-linear) interactions induced by the different processes defining, e.g., realistic ocean-atmosphere models. Finally, SA allows for the validation of models, by checking that the estimated sensitivities are consistent with what theory predicts.
For ocean, atmosphere and coupled systems, only first-order deterministic SA has been performed, neglecting the initialization process (data assimilation). AIRSEA members and collaborators proposed the use of second-order information to provide consistent sensitivity measures, but so far this has only been applied to simple academic systems. Metamodels are now commonly used, due to the cost of each evaluation of complex numerical models: mostly Gaussian processes, whose probabilistic framework allows for the development of specific adaptive designs, and polynomial chaos, not only in the context of intrusive Galerkin approaches but also in a black-box approach. Until recently, global SA was based primarily on a set of engineering practices. New mathematical and methodological developments have led to the numerical computation of Sobol' indices, with confidence intervals accounting for both metamodel and estimation errors. Approaches have also been extended to the case of dependent inputs, functional inputs and/or outputs, and stochastic numerical codes. Other types of indices and generalizations of Sobol' indices have also been introduced.
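In the black-box setting, a minimal Monte Carlo "pick-freeze" estimator (here in a Saltelli-type form) illustrates how first-order Sobol' indices are computed. The toy linear model, sample size and uniform inputs are illustrative assumptions.

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Monte Carlo pick-freeze (Saltelli-type) estimation of first-order
    Sobol' indices for a black-box model f with d independent U[0,1] inputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]           # 'pick' input i from B, 'freeze' the rest
        S[i] = np.mean(yB * (f(ABi) - yA)) / var
    return S

# Toy linear model y = x1 + 2*x2, whose exact indices are S1 = 0.2, S2 = 0.8
S = first_order_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```

Each index estimates the fraction of output variance explained by one input alone; the confidence intervals mentioned above would come from resampling this estimator.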

Concerning the stochastic approach to SA we plan to work with parameters that show spatio-temporal dependencies and to continue toward more realistic applications where the input space is of huge dimension with highly correlated components. Sensitivity analysis for dependent inputs also introduces new challenges. In our applicative context, it would seem prudent to carefully learn the spatio-temporal dependences before running a global SA. In the deterministic framework we focus on second order approaches where the sought sensitivities are related to the optimality system rather than to the model; i.e., we consider the whole forecasting system (model plus initialization through data assimilation).

All these methods allow for computing sensitivities and more importantly a posteriori error statistics.

**Parameter estimation**
Advanced parameter estimation methods are barely used in ocean, atmosphere and coupled systems, mostly due to the difficulty of deriving adequate response functions, a lack of familiarity with these methods in the ocean-atmosphere community, and the huge associated computing costs. In the presence of strong uncertainties on the model as well as on parameter values, simulation and inference are closely associated. Filtering for data assimilation and Approximate Bayesian Computation (ABC) are two examples of such an association.

The stochastic approach can be compared with the deterministic approach, which determines the sensitivity of the flow to parameters and optimizes their values relying on data assimilation. This approach has already been shown to be capable of selecting a reduced space of the most influential parameters in the local parameter space and of adapting their values to correct errors committed by the numerical approximation. It relies on automatic differentiation of the source code with respect to the model parameters, and on optimization of the resulting raw code.
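The deterministic, adjoint-based route can be illustrated on a toy twin experiment: a scalar decay model whose parameter is recovered by descending the gradient provided by a hand-coded adjoint (a stand-in for the automatically differentiated code). All model choices below (decay equation, step sizes, learning rate) are illustrative.

```python
import numpy as np

def forward(a, u0, dt, n):
    """Discrete decay model: u_{k+1} = (1 - dt*a) * u_k."""
    u = np.empty(n + 1)
    u[0] = u0
    for k in range(n):
        u[k + 1] = (1.0 - dt * a) * u[k]
    return u

def cost_and_gradient(a, obs, u0=1.0, dt=0.05):
    """Misfit J(a) = 0.5 * sum_k (u_k - obs_k)^2 and its gradient dJ/da,
    computed by a hand-coded adjoint (backward) sweep."""
    n = len(obs) - 1
    u = forward(a, u0, dt, n)
    r = u - obs
    J = 0.5 * np.sum(r[1:] ** 2)
    g, dJda = 0.0, 0.0
    for k in range(n, 0, -1):
        g += r[k]                      # adjoint receives the misfit at step k
        dJda += g * (-dt) * u[k - 1]   # sensitivity of step k to the parameter
        g *= 1.0 - dt * a              # propagate the adjoint one step back
    return J, dJda

# Twin experiment: synthesize observations with a_true, then recover it
a_true, n_steps = 1.0, 40
obs = forward(a_true, 1.0, 0.05, n_steps)
a = 0.5                                # deliberately wrong first guess
for _ in range(200):
    _, grad = cost_and_gradient(a, obs)
    a -= 0.1 * grad                    # plain gradient descent
```

The adjoint sweep delivers the exact gradient of the discrete cost at the price of one backward pass, regardless of the number of parameters, which is the key property exploited by variational estimation.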

AIRSEA assembles all the required expertise to tackle these difficulties. As mentioned previously, the choice of parameterization schemes and their tuning has a significant impact on the results of model simulations. Our research will focus on parameter estimation for parameterized Partial Differential Equations (PDEs) and also for parameterized Stochastic Differential Equations (SDEs). Deterministic approaches are based on optimal control methods and are local in the parameter space (i.e., the result depends on the starting point of the estimation), but thanks to adjoint methods they can cope with a large number of unknowns that can also vary in space and time. Multiscale optimization techniques as described in will be one of the tools used. This in turn can be used either to propose a better (and smaller) parameter set or as a criterion for discriminating between parameterization schemes. Statistical methods are global in the parameter space but may suffer from the curse of dimensionality. However, the notion of a parameter can also be extended to functional parameters: we may consider as a parameter a functional entity such as a boundary condition in time, or a probability density function in a stationary regime. For these purposes, non-parametric estimation will also be considered as an alternative.

**Risk assessment**
Risk assessment in the multivariate setting suffers from a lack of consensus on the choice of indicators. Moreover, once the indicators are designed, it still remains to develop estimation procedures that are efficient even for high risk levels. Recent developments for the assessment of financial risk must be considered with caution, as methods suited to general financial decisions may differ from those appropriate for environmental risk assessment. Modeling and quantifying uncertainties related to extreme events is of central interest in environmental sciences. In relation to our scientific targets, risk assessment is very important in several areas: hydrological extreme events, cyclone intensity, storm surges, etc. Environmental risks most of the time involve several aspects which are often correlated. Moreover, even in the ideal case where the focus is on a single risk source, we have to face the temporal and spatial nature of environmental extreme events.
The study of extremes within a spatio-temporal framework remains an emerging field where the development of adapted statistical methods could lead to major progress in terms of geophysical understanding and risk assessment thus coupling data and model information for risk assessment.

Based on the above considerations we aim to answer the following scientific questions: how to measure risk in a multivariate/spatial framework? How to estimate risk in a non stationary context? How to reduce dimension (see ) for a better estimation of spatial risk?

Extreme events are rare, which means there is little data available from which to infer risk measures. Risk assessment based on observations therefore relies on multivariate extreme value theory. Interacting particle systems for the analysis of rare events are commonly used in the computer experiments community. An open question is the pertinence of such tools for the evaluation of environmental risk.
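As a univariate building block of such inference (a deliberately simplified stand-in for the multivariate theory discussed above), a peaks-over-threshold sketch fits a generalized Pareto distribution to threshold exceedances and extrapolates a high quantile; the synthetic data and threshold choice below are illustrative.

```python
import numpy as np
from scipy.stats import genpareto

def pot_quantile(sample, p, threshold_q=0.95):
    """Peaks-over-threshold estimate of the level exceeded with probability p:
    fit a generalized Pareto distribution (GPD) to the exceedances above a
    high threshold, then extrapolate beyond the range of the data."""
    u = np.quantile(sample, threshold_q)
    exceedances = sample[sample > u] - u
    xi, _, sigma = genpareto.fit(exceedances, floc=0.0)
    p_u = exceedances.size / sample.size          # empirical P(X > u)
    return u + genpareto.ppf(1.0 - p / p_u, xi, scale=sigma)

# Synthetic record: exponential data, whose exact 0.999 quantile is -log(1e-3)
rng = np.random.default_rng(0)
data = rng.exponential(size=200_000)
q = pot_quantile(data, p=1e-3)
```

The point of the extrapolation is that the GPD tail lets one estimate levels rarer than anything observed, which is precisely where empirical quantiles fail.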

Most numerical models are unable to accurately reproduce extreme events. There is therefore a real need to develop efficient assimilation methods for the coupling of numerical models and extreme data.

Methods for sensitivity analysis, parameter estimation and risk assessment are extremely costly due to the necessary number of model evaluations. This number of simulations, which depends on the complexity of the application, the number of input variables and the desired quality of the approximations, requires considerable computational resources. To this end, the AIRSEA team is an intensive user of HPC computing platforms, particularly grid computing platforms. The associated grid deployment has to take into account the scheduling of a huge number of computational requests and the data-management links between these requests, all of this as automatically as possible. In addition, there is an increasing need for efficient numerical algorithms specifically designed for new (or future) computing architectures, and this is part of our scientific objectives. Given the computational cost of our applications, the evolution of high performance computing platforms has to be taken into account for several reasons. While our applications are able to exploit space parallelism to its full extent (oceanic and atmospheric models are traditionally based on a spatial domain decomposition method), the spatial discretization step size limits the efficiency of traditional parallel methods. Thus the inherent parallelism is modest, particularly in the case of relatively coarse resolutions with very long integration times (e.g., climate modeling). Paths toward new programming paradigms are thus needed. As a step in that direction, we plan to focus our research on parallel in time methods.

**New numerical algorithms for high performance computing**
Parallel in time methods can be classified into three main groups. In the first group, we find methods using parallelism across the method, such as parallel integrators for ordinary differential equations. The second group considers parallelism across the problem. Falling into this category are methods such as waveform relaxation
where the space-time system is decomposed into a set of subsystems which can then be solved independently using some form of relaxation techniques or multigrid reduction in time.
The third group of methods focuses on parallelism across the steps. One of the best known algorithms in this family is parareal.
Other methods combining the strengths of those listed above (e.g., PFASST) are currently under investigation in the community.
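A minimal parareal sketch on a scalar toy ODE shows the structure shared by these methods: a cheap coarse propagator sweeps sequentially through the time windows, while the expensive fine propagator runs independently (in parallel) on each window. The propagators and test problem below are illustrative choices.

```python
import numpy as np

def parareal(coarse, fine, u0, n_windows, n_iter):
    """Minimal parareal iteration:
        U^{k+1}_{n+1} = G(U^{k+1}_n) + F(U^k_n) - G(U^k_n),
    with G a cheap coarse propagator (sequential) and F the accurate fine
    propagator (embarrassingly parallel across the time windows)."""
    U = np.empty(n_windows + 1)
    U[0] = u0
    for n in range(n_windows):                     # initial coarse sweep
        U[n + 1] = coarse(U[n])
    for _ in range(n_iter):
        F = [fine(U[n]) for n in range(n_windows)]  # the parallel part
        U_new = np.empty_like(U)
        U_new[0] = u0
        for n in range(n_windows):                 # sequential correction sweep
            U_new[n + 1] = coarse(U_new[n]) + F[n] - coarse(U[n])
        U = U_new
    return U

# Toy problem u' = -u on [0, 1] split into 10 windows
lam, dT, m = -1.0, 0.1, 20
coarse = lambda u: u / (1.0 - lam * dT)            # one backward Euler step
def fine(u):                                       # m backward Euler substeps
    for _ in range(m):
        u = u / (1.0 - lam * dT / m)
    return u

U = parareal(coarse, fine, 1.0, n_windows=10, n_iter=5)
```

After k iterations the first k windows coincide exactly with the serial fine solution, and for dissipative problems like this one the remaining windows converge rapidly as well.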

Parallel in time methods are iterative methods that may require a large number of iterations before convergence. Our first focus will be the convergence analysis of parallel in time (parareal / Schwarz) methods for the equation systems of oceanic and atmospheric models. Our second objective will be the construction of fast (approximate) integrators for these systems. This part is naturally linked to the model reduction methods of section (). Fast approximate integrators are required both in the Schwarz algorithm (where a first guess of the boundary conditions is required) and in the parareal algorithm (where the fast integrator is used to connect the different time windows). Our main application of these methods will be climate (i.e., very long time) simulations. Our second application of parallel in time methods will be in the context of optimization methods. In fact, one of the major drawbacks of the optimal control techniques used in is a lack of intrinsic parallelism in comparison with ensemble methods. Here, parallel in time methods also offer a path to better efficiency. The mathematical key point is how to efficiently couple two iterative methods (i.e., parallel in time and optimization methods).

The evolution of natural systems, in the short, mid, or long term, has extremely important consequences for both the global Earth system and humanity. Forecasting this evolution is thus a major challenge from the scientific, economic, and human viewpoints.

Humanity has to face the problem of **global warming**, brought on by the
emission of greenhouse gases from human activities. This warming will probably cause huge changes at global and regional
scales, in terms of climate, vegetation and biodiversity, with major consequences for local populations.
Research has therefore been conducted over the past 15 to 20 years in an effort to
model the Earth's climate and forecast its evolution in the 21st century in response to anthropic
action.

With regard to short-term forecasts, the best and oldest example is of course **weather forecasting**.
Meteorological services have been providing daily short-term forecasts for several decades which are of
crucial importance for numerous human activities.

Numerous other problems can also be mentioned, such as **seasonal weather forecasting** (to anticipate powerful phenomena such as El Niño several months in advance), **operational oceanography** (short-term forecasts of the evolution of the ocean system to provide services for the fishing industry, ship routing, defense, or the fight against marine pollution) or the prediction of **floods**.

As mentioned previously, mathematical and numerical tools are omnipresent and play a fundamental role in these areas of research. In this context, the vocation of AIRSEA is not to carry out numerical prediction, but to address mathematical issues raised by the development of prediction systems for these application fields, in close collaboration with geophysicists.

In collaboration with M. Asch and M. Bocquet, M. Nodet published a book about Data Assimilation .

Jose R. Leon was awarded an International Inria Chair.

E. Arnaud was granted a CRCT (Congé pour recherches ou conversions thématiques) by the CNU.

L. Debreu was awarded the IMarEST Denny Medal for the best paper in the Journal of Operational Oceanography for the year 2014.

Adaptive Grid Refinement In Fortran

Functional Description

AGRIF is a Fortran 90 package for the integration of full adaptive mesh refinement (AMR) features within a multidimensional finite difference model written in Fortran. Its main objective is to simplify the integration of AMR capabilities within an existing model with minimal changes. The package supports the management of an arbitrary number of grids, horizontal and/or vertical refinements, dynamic regridding, and the parallelization of grid interactions on distributed memory computers. AGRIF requires the model to be discretized on a structured grid, as is typically done in ocean or atmosphere modelling.

Participants: Laurent Debreu, Marc Honnorat and Cyril Mazauric

Contact: Laurent Debreu

Bibliothèque d’Assimilation Lagrangienne Adaptée aux Images Séquencées en Environnement

Keywords: Multi-scale analysis - Data assimilation - Optimal control

Functional Description

BALAISE (Bibliothèque d’Assimilation Lagrangienne Adaptée aux Images Séquencées en Environnement) is a test bed for image data assimilation. It includes a shallow water model, a multi-scale decomposition library and an assimilation suite.

Contact: Arthur Vidard

Designs of Computer Experiments

Functional Description

This package is useful for conducting design and analysis of computer experiments.

Contact: Céline Hartweg

URL: https://

Construction and Evaluation of Metamodels

Functional Description

This package is useful for conducting design and analysis of computer experiments. Estimation, validation and prediction of models of different types: linear models, additive models, MARS, PolyMARS and Kriging.

Contact: Céline Hartweg

URL: https://

Variational data assimilation for NEMO

Keywords: Oceanography - Data assimilation - Adjoint method - Optimal control

Functional Description

NEMOVAR is a state-of-the-art multi-incremental variational data assimilation system with both 3D and 4D capabilities, designed to work with NEMO on the native ORCA grids. The background error covariance matrix is modelled using balance operators for the multivariate component and a diffusion operator for the univariate component. It can also be formulated as a linear combination of covariance models to take into account the multiple correlation length scales associated with ocean variability on different scales. NEMOVAR has recently been enhanced with ensemble data assimilation and multi-grid assimilation capabilities. It is used operationally at both ECMWF and the Met Office (UK).

Partners: CERFACS - ECMWF - Met Office

Contact: Arthur Vidard

Functional Description

This package is useful for conducting sensitivity analysis of complex computer codes.

Contact: Laurent Gilquin

URL: https://

The increase of model resolution naturally leads to the representation of a wider energy spectrum. As a result, in recent years, the understanding of oceanic submesoscale dynamics has significantly improved. However, dissipation in submesoscale models remains dominated by numerical constraints rather than physical ones. Effective resolution is limited by the numerical dissipation range, which is a function of the model's numerical filters (assuming that dispersive numerical modes are efficiently removed). In , we present a Baroclinic Jet test case set in a zonally reentrant channel that provides a controllable test of a model's capacity to resolve submesoscale dynamics. We compare simulations from two models, ROMS and NEMO, at different mesh sizes (from 20 to 2 km). Through a spectral decomposition of kinetic energy and its budget terms, we identify the characteristics of numerical dissipation and effective resolution. Numerical dissipation appears in different parts of a model, especially in the spatial advection-diffusion schemes for momentum equations (KE dissipation) and tracer equations (APE dissipation) and in the time stepping algorithms. Effective resolution, defined by scale-selective dissipation, is inadequate to qualify traditional ocean models with low-order spatial and temporal filters, even at high grid resolution. High-order methods are better suited to the concept and probably unavoidable. Fourth-order filters are suited only to grid resolutions of less than a few kilometers, and momentum advection schemes of even higher order may be justified. The upgrade of time stepping algorithms (from filtered leapfrog), a cumbersome task in a model, appears critical from our results, not just as a matter of model solution quality but also of computational efficiency (extended stability range of predictor-corrector schemes).
Effective resolution is also degraded by the need for non-scale-selective barotropic mode filters and requires carefully addressing the issue of mode splitting errors. Possibly the most surprising result is that submesoscale energy production is largely affected by spurious diapycnal mixing (APE dissipation). This result justifies renewed efforts in reducing tracer mixing errors and raises again the question of how much vertical diffusion is at work in the real ocean.
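The spectral kinetic energy decomposition underlying such diagnostics can be sketched generically for a doubly periodic velocity field; this is a minimal illustrative version, not the actual ROMS/NEMO diagnostic.

```python
import numpy as np

def ke_spectrum(u, v):
    """Isotropic kinetic energy spectrum of a doubly periodic 2D velocity
    field: bin the spectral density 0.5*(|u_hat|^2 + |v_hat|^2) into
    integer-wavenumber shells."""
    n = u.shape[0]
    u_hat = np.fft.fft2(u) / n**2
    v_hat = np.fft.fft2(v) / n**2
    ke = 0.5 * (np.abs(u_hat) ** 2 + np.abs(v_hat) ** 2)
    k = np.fft.fftfreq(n) * n                 # signed integer wavenumbers
    kmag = np.hypot(*np.meshgrid(k, k, indexing="ij"))
    shells = np.rint(kmag).astype(int)
    return np.bincount(shells.ravel(), weights=ke.ravel())[: n // 2]

# Sanity check: a single Fourier mode puts all its energy in one shell
n = 64
x = np.arange(n) / n
X, _ = np.meshgrid(x, x, indexing="ij")
E = ke_spectrum(np.cos(2.0 * np.pi * 3.0 * X), np.zeros((n, n)))
```

Applying the same binning to each term of the KE budget is what separates physical from numerical dissipation ranges in the analysis described above.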

The coupling of models of different kinds is gaining more and more attention, due in particular to a need for more global modeling systems encompassing different disciplines (e.g. multi-physics) and different approaches (e.g. multi-scale, nesting). In order to develop such complex systems, it is generally more pragmatic to assemble different modeling units inside a user friendly modelling software platform rather than to develop new complex global models.

In the context of hydrodynamics, global modeling systems have to couple models of different dimensions (1D, 2D or 3D) representing different physics (Navier-Stokes, hydrostatic Navier-Stokes, shallow water…). We have been developing coupling approaches for several years, based on so-called Schwarz algorithms. Our recent contributions address the development of absorbing boundary conditions for the Navier-Stokes equations , and of interface conditions for coupling hydrostatic and nonhydrostatic Navier-Stokes flows . In the context of our partnership with the ARTELIA Group (PhD thesis of Medhi Pierre Daou), Schwarz coupling algorithms have been implemented for industrial hydrodynamics codes (Mascaret, Telemac and OpenFoam), using the PALM coupling software. A first series of experiments was carried out in an academic test case, and a second one in the much more realistic context of the Rusumo hydropower plant, coupling Telemac-3D (Navier-Stokes equations) with OpenFoam (a two-phase solver) - see Figure . M.-P. Daou defended his PhD on September 27, 2016 .

Coupling methods routinely used in regional and global climate models do not provide the exact solution to the ocean-atmosphere problem, but an approximation of it . For the last few years we have been actively working on the analysis of Schwarz waveform relaxation in order to apply this type of iterative coupling method to air-sea coupling , , . In the context of tropical cyclone simulation, sensitivity tests to the coupling method have been carried out using ensemble simulations (through perturbations of the coupling frequency and initial conditions). We showed that the use of Schwarz iterative coupling methods leads to a significantly reduced spread in the ensemble results (in terms of cyclone trajectory and intensity), suggesting that a source of error is removed with respect to the coupling methods currently used in existing coupled models .

Motivated by this encouraging result, our activities over the last few years can be divided into four general topics:

*Stability and consistency analysis of existing coupling methods*: in , we showed that the methods usually used in the context of ocean-atmosphere coupling are prone to splitting errors, because they correspond to only one iteration of an iterative process that is not run to convergence. Moreover, these methods require an additional condition for the coupling to be stable, even if unconditionally stable time-stepping algorithms are used. This last point was further studied last year in , and turned out to be a major source of instability in atmosphere-snow coupling.

*Study of physics-dynamics coupling*: the PhD thesis of Charles Pelletier (funded by Inria) focuses on including the formulation of physical parameterizations in the theoretical analysis of the coupling, in particular the parameterization schemes used to compute air-sea fluxes. To do so, a metamodel representative of the behavior of the full parameterization, but with a continuous form that is easier to manipulate, has been derived thanks to a sensitivity analysis based on Sobol' indices. This metamodel has the advantage of being better suited to the mathematical analysis of the coupling while remaining physically satisfactory. A publication is currently in preparation for the Quarterly Journal of the Royal Meteorological Society. In parallel, we have contributed to a general review gathering the main international specialists on the topic .

*Design of a coupled single-column model*: in order to focus on specific problems of ocean-atmosphere coupling, work on simplified equation sets has been started. The aim is to implement a one-dimensional (in the vertical direction) coupled model with physical parameterizations representative of those used in realistic models. Thanks to this simplified coupled model, the objective is to develop a benchmark suite for the evaluation of coupled models. Last year, the single-column oceanic and atmospheric components were developed in the framework of the SIMBAD project and should be coupled in early 2017 (collaboration with Mercator-océan).

*Analysis of air-sea interactions in realistic high-resolution simulations*: part of our activity has been carried out in collaboration with atmospheric scientists and physical oceanographers to study the impact of some modeling assumptions (e.g. ) in large-scale realistic ocean-atmosphere coupled simulations , .

These four topics are addressed through strong collaborations between the applied mathematics and the climate community.

Moreover, a PPR (*Projet à partenariat renforcé*) called SIMBAD (SIMplified Boundary Atmospheric layer moDel for ocean modeling purposes) is funded by Mercator-Ocean for three years (from March 2015 to March 2018). The aim of this project, in collaboration with Meteo-France, Ifremer, LMD, and LOCEAN, is to derive a metamodel to force high-resolution oceanic operational models for which the use of a full atmospheric model is not possible due to a prohibitive computational cost. Another industrial contract named ALBATROS is also funded (from June 2016 to June 2019) to couple SIMBAD with the NEMO global ocean model and the WW3 wave model.

An ANR project, COCOA (COmprehensive Coupling approach for the Ocean and the Atmosphere, P.I.: E. Blayo), was funded in 2016 and will officially start in January 2017.

In the context of operational meteorology and oceanography, forecast skill relies heavily on the proper combination of model predictions and available observations via data assimilation techniques. Historically, numerical weather prediction has been made separately for the ocean and the atmosphere, in an uncoupled way. In recent years, however, fully coupled ocean-atmosphere models have been increasingly used in operational centers to improve the reliability of seasonal forecasts and tropical cyclone predictions. For coupled problems, the use of separate data assimilation schemes in each medium is not satisfactory, since the result of such an assimilation process is generally inconsistent across the interface, leading to unacceptable artefacts. Hence, there is a strong need for adapting existing data assimilation techniques to the coupled framework. As part of our ERA-CLIM2 contribution, R. Pellerej started a PhD on this topic in late 2014. So far, three general data assimilation algorithms, based on variational techniques, have been developed and applied to a simple coupled problem. The dynamical equations of the considered problem are coupled using an iterative Schwarz domain decomposition method. The aim is to properly take the coupling into account in the assimilation process, in order to obtain a coupled solution close to the observations while satisfying the physical conditions across the air-sea interface. Preliminary results show significant improvement over the usual approach on this simple system .

The aforementioned system has been recoded within the OOPS framework (Object Oriented Prediction System) in order to ease the transfer to more complex/realistic models.

Based on the maximum entropy production principle, the influence of subgrid scales on the flow is represented as a harmonic dissipation accompanied by backscattering of the dissipated energy. This parameterization is tested on a shallow water model in a square box. Two possible solutions of the closure problem are compared, based on the analysis of the energy dissipation-backscattering balance. Results of this model on a coarse-resolution grid are compared with a reference simulation at four times higher resolution. It is shown that the mean flow is correctly recovered, as well as variability properties such as eddy kinetic energy fields and their spectrum .

In order to lower the computational cost of the variational data assimilation process, we investigate the use of multigrid methods to solve the associated optimal control system. On a linear advection equation, we study the impact of the regularization term on the optimal control and the impact of discretization errors on the efficiency of the coarse-grid correction step. We show that even if the optimal control problem leads to the solution of an elliptic system, numerical errors introduced by the discretization can compromise the success of the multigrid method. Viewing the multigrid iteration as a preconditioner for a Krylov optimization method leads to a more robust algorithm. A scale-dependent weighting of the multigrid preconditioner and the usual preconditioner based on the background error covariance matrix is proposed and brings significant improvements. This work is summarized in .
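To make the coarse-grid correction idea concrete, here is a minimal two-grid cycle (damped-Jacobi smoothing, full-weighting restriction, linear prolongation, direct coarse solve) applied to a 1D Poisson problem. This is a generic sketch of the multigrid building block, under our own toy setting, not the optimal control system studied above.

```python
import numpy as np

def two_grid_poisson(f, n=64, cycles=20):
    """Two-grid V-cycles for -u'' = f on [0,1] with u(0) = u(1) = 0.
    Pre/post-smoothing: damped Jacobi; the coarse problem is solved directly."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    fv = f(x)
    u = np.zeros(n + 1)

    def smooth(u, rhs, sweeps=3, omega=2.0 / 3.0):
        for _ in range(sweeps):
            u[1:-1] += omega * 0.5 * (u[:-2] + u[2:]
                                      + h * h * rhs[1:-1] - 2.0 * u[1:-1])
        return u

    nc = n // 2   # coarse grid with mesh size 2h, direct operator
    Ac = (2.0 * np.eye(nc - 1) - np.eye(nc - 1, k=1)
          - np.eye(nc - 1, k=-1)) / (2.0 * h) ** 2

    for _ in range(cycles):
        u = smooth(u, fv)                                     # pre-smoothing
        r = np.zeros(n + 1)
        r[1:-1] = fv[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
        rc = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])   # full weighting
        ec = np.zeros(nc + 1)
        ec[1:-1] = np.linalg.solve(Ac, rc)                    # coarse correction
        e = np.zeros(n + 1)
        e[::2] = ec                                           # prolongation
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        u += e
        u = smooth(u, fv)                                     # post-smoothing
    return x, u
```

On f(x) = π² sin(πx), a handful of cycles reduces the algebraic error well below the discretization error; the same cycle, applied as a preconditioner inside a Krylov iteration, is the more robust variant mentioned above.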

Another approach developed in the team for sensitivity analysis is model reduction. More precisely, the aim is to reduce the number of unknown variables (to be computed by the model) by using a well-chosen basis. Instead of discretizing the model over a huge grid (with millions of points), the state vector of the model is projected onto the subspace spanned by this basis (of far lower dimension). The choice of the basis is of course crucial and determines the success or failure of the reduced model. Various model reduction methods offer various choices of basis functions. A well-known method is called “proper orthogonal decomposition" or “principal component analysis". More recent and sophisticated methods also exist and may be studied, depending on the needs raised by the theoretical study. Model reduction is a natural way to overcome the difficulties due to the huge computational times entailed by discretizations on fine grids. In , the authors present a reduced basis offline/online procedure for the viscous Burgers initial boundary value problem, enabling efficient approximate computation of the solutions of this equation for parametrized viscosity and initial and boundary value data. This procedure comes with a fast-evaluated rigorous error bound certifying the approximation procedure. The numerical experiments in the paper show significant computational savings, as well as the efficiency of the error bound.
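The basis-choice step of proper orthogonal decomposition can be sketched in a few lines on synthetic snapshot data (the snapshot construction below is entirely made up for the example): the SVD of the snapshot matrix yields an orthonormal basis ranked by captured energy, and here three modes suffice to represent a thousand-dimensional state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Snapshot matrix: 200 states of dimension 1000 lying (up to small noise) in
# a low-dimensional subspace, mimicking solutions of a parametrized model.
x = np.linspace(0.0, 1.0, 1000)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(3)])  # true basis
coeffs = rng.normal(size=(200, 3))
snapshots = coeffs @ modes + 1e-6 * rng.normal(size=(200, 1000))

# POD: the left singular vectors of the (transposed) snapshot matrix give an
# orthonormal basis ranked by captured variance (energy).
U, s, _ = np.linalg.svd(snapshots.T, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1   # truncation rank

basis = U[:, :r]                    # reduced basis of dimension r
reduced = snapshots @ basis         # project: 1000 -> r unknowns per state
reconstructed = reduced @ basis.T   # lift back to the full space
```

The truncation rank recovers the true subspace dimension (r = 3 here), and the reconstruction error is at the level of the added noise.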

When a metamodel is used (for example a reduced basis metamodel, but also kriging, regression, etc.), the approximation error it introduces has to be quantified and controlled.

When considering a parameter-dependent PDE, it may happen that the quantity of interest is not the PDE's solution itself but a linear functional of it. In , we proposed a probabilistic error bound for the reduced output of interest (goal-oriented error bound). By probabilistic we mean that this bound may be violated with small probability. The bound is efficiently and explicitly computable, and we show on different examples that it is sharper than existing ones.

A collaboration has been started with Christophe Prieur (Gipsa-Lab) on the very challenging issue of the sensitivity of a controlled system to its control parameters . In , we propose a generalization of the probabilistic goal-oriented error estimation in to parameter-dependent nonlinear problems. We aim to apply such results in the aforementioned context of the sensitivity of a controlled system.

Forecasting geophysical systems requires complex models, which sometimes need to be coupled, and which make use of data assimilation. The objective of this project is, for a given output of such a system, to identify the most influential parameters and to evaluate the effect of uncertainty in the input parameters on the model output. Existing stochastic tools are not well suited to high-dimensional problems (in particular time-dependent ones), while deterministic tools are fully applicable but only provide limited information. The challenge is thus to gather expertise in numerical approximation and control of Partial Differential Equations on the one hand, and in stochastic methods for sensitivity analysis on the other hand, in order to design innovative stochastic solutions for studying high-dimensional models and to propose new hybrid approaches combining stochastic and deterministic methods.

Sensitivity analysis is defined through some scalar response function giving an evaluation of the state of a system with respect to parameters. By definition, the sensitivity is the gradient of this response function. In the case of variational data assimilation, sensitivity analysis has to be carried out on the optimality system, because this is the only system in which all the information is located. An important application is, for instance, the sensitivity of the prediction with respect to the observations. It is then necessary to derive the optimality system and to introduce a second-order adjoint. We have applied this to a simulated pollution transport problem and to an oceanic model , . Further applications to water pollution using a complex hydrological model are under development.
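The adjoint mechanism behind such gradients can be illustrated on a linear toy model (the operator and response below are invented for the example): a single adjoint solve delivers the gradient of the response with respect to every input at once, which is what makes adjoint-based sensitivity tractable for large systems; the second-order adjoint extends the same idea to sensitivities of the optimality system itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = 3.0 * np.eye(n) + 0.1 * rng.normal(size=(n, n))   # toy model operator
c = rng.normal(size=n)                                 # response weights

def response(p):
    """Scalar response J(p) = c^T u, where the state u solves A u = p."""
    u = np.linalg.solve(A, p)
    return c @ u

# Adjoint sensitivity: dJ/dp = lambda, where A^T lambda = c.
# One adjoint solve yields the gradient w.r.t. ALL parameters at once.
lam = np.linalg.solve(A.T, c)

# Cross-check against one-sided finite differences (one model run per
# parameter, which is exactly what the adjoint avoids).
p0 = rng.normal(size=n)
eps = 1e-6
fd = np.array([(response(p0 + eps * e) - response(p0)) / eps
               for e in np.eye(n)])
```

For this linear response the finite-difference gradient matches the adjoint one to rounding error; for a time-dependent model, the adjoint solve becomes a backward-in-time integration.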

In variance-based sensitivity analysis, a classical tool is the method of Sobol', which allows Sobol' indices to be computed using Monte Carlo integration. One of the main drawbacks of this approach is that the estimation of Sobol' indices requires the use of several samples. For example, in a model with d inputs, the classical estimation of all first-order indices requires d+1 samples.

In a recent work we introduce a new approach to estimate all first-order Sobol' indices by using only two samples based on replicated Latin hypercubes, and all second-order Sobol' indices by using only two samples based on replicated randomized orthogonal arrays. This method is referred to as the replication method. We establish theoretical properties of this method for the first-order Sobol' indices and discuss the generalization to higher-order indices. As an illustration, we apply this new approach to a marine ecosystem model of the Ligurian Sea (northwestern Mediterranean) in order to study the relative importance of its parameters. The calibration process of this kind of simulator is well known to be quite intricate, and a rigorous and robust sensitivity analysis (i.e. one valid without strong regularity assumptions), such as the method of Sobol' provides, can be of great help. The computations are performed using CIGRI, the middleware used on the grid of the Grenoble University High Performance Computing (HPC) center. We are also applying these estimators to calibrate integrated land use and transport models. As some groups of inputs of these models are correlated, Laurent Gilquin extended the approach based on replicated designs to the estimation of grouped Sobol' indices .
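The first-order part of the replication method can be sketched as follows (a simplified illustration of the idea on a toy test function of our own, not the code used for the Ligurian Sea model): two Latin hypercube designs share the same column values, and pairing the runs that share the value of input i turns a single empirical covariance into an estimate of S_i, whatever the dimension.

```python
import numpy as np

def replicated_sobol(model, d, n=10000, seed=0):
    """Estimate all first-order Sobol' indices from two replicated Latin
    hypercube designs: 2n model runs in total, independently of d."""
    rng = np.random.default_rng(seed)
    # First Latin hypercube design on [0,1]^d (one point per stratum).
    strata = (np.arange(n)[:, None] + rng.random((n, d))) / n
    X1 = np.empty((n, d))
    for i in range(d):
        X1[:, i] = rng.permutation(strata[:, i])
    # Replicated design: same column values, new independent row orders.
    perms = [rng.permutation(n) for _ in range(d)]
    X2 = np.column_stack([X1[perms[i], i] for i in range(d)])
    y1, y2 = model(X1), model(X2)
    m = 0.5 * (y1.mean() + y2.mean())
    v = 0.5 * (y1.var() + y2.var())
    # Pair the runs sharing the same value of input i; their product
    # estimates E[E[Y|X_i]^2], hence the first-order index.
    return np.array([(np.mean(y1[perms[i]] * y2) - m * m) / v
                     for i in range(d)])
```

On the additive test function y = x1 + 2 x2 + 3 x3 with independent uniform inputs, the exact indices are 1/14, 4/14 and 9/14, and the estimator recovers them from 2n runs.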

One may then wonder about the asymptotic properties of these new estimators, as well as of more classical ones. In , the authors deal with asymptotic properties of the estimators. In , the authors also establish a multivariate central limit theorem and non-asymptotic properties.

The use of replicated designs to estimate first-order Sobol' indices has the major advantage of drastically reducing the estimation cost, as the number of runs n becomes independent of the input space dimension. The generalization to closed second-order Sobol' indices relies on the replication of randomized orthogonal arrays. However, if the input space is not properly explored, that is if n is too small, the Sobol' index estimates may not be accurate enough.

To address this challenge, we proposed approaches that render the replication method recursive, enabling the required number of evaluations to be controlled. With these approaches, more accurate Sobol' estimates are obtained while recycling previous sets of model evaluations, and the estimation procedure is stopped once the estimates are considered to have converged. One of these approaches corresponds to a recursive version of the replication method and is based on the iterative construction of stratified designs, Latin hypercubes and orthogonal arrays . A second approach combines quasi-Monte Carlo sampling with the construction of a new stopping criterion , .

The replication method has also been extended to handle constraints arising in an application to the land use and transport model Tranus, such as the presence of dependencies among the model inputs, as well as multivariate outputs .

An important challenge for stochastic sensitivity analysis is to develop methodologies that work for dependent inputs. For the moment, no conclusive results exist in that direction. Our aim is to define an analogue of the Hoeffding decomposition in the case where the input parameters are correlated. Clémentine Prieur supervised Gaëlle Chastaing's PhD thesis on this topic (defended in September 2013) . We obtained first results , deriving a general functional ANOVA for dependent inputs, which allows new variance-based sensitivity indices for correlated inputs to be defined. We then adapted various algorithms for the estimation of these new indices. These algorithms assume that, among the potential interactions, only a few are significant. Two papers have recently been accepted , . We also considered (see the paragraph ) the estimation of grouped Sobol' indices, with a procedure based on replicated designs. These indices provide information at the level of groups, and not at a finer level, but their interpretation remains rigorous.

Céline Helbert and Clémentine Prieur supervised the PhD thesis of Simon Nanty (funded by CEA Cadarache and defended in October 2015). The subject of the thesis was the analysis of uncertainties for numerical codes with temporal and spatio-temporal input variables, with application to safety and impact calculation studies. This study involved dependent functional inputs, and a first step was the modeling of these inputs . The whole methodology proposed during the PhD is presented in .

More recently, the Shapley value, from econometrics, was proposed as an alternative for quantifying the importance of random input variables to a function. Owen derived the Shapley value importance for independent inputs and showed that it is bracketed between two different Sobol' indices. Song et al. recently advocated the use of the Shapley value for the case of dependent inputs. In a very recent work , in collaboration with Art Owen (Stanford University), we show that the Shapley value removes the conceptual problems of functional ANOVA for dependent inputs. We illustrate this with some simple examples where the Shapley value leads to intuitively reasonable, nearly closed-form values.

A variational data assimilation technique is applied to the identification of the optimal boundary conditions for a simplified configuration of the NEMO model. A rectangular box model placed at mid-latitudes, and subject to the classical single or double gyre wind forcing, is studied. The model grid can be rotated by a desired angle around the center of the rectangle, in order to simulate a boundary approximated by staircase-like coastlines. The solution of the model on the grid aligned with the box borders is used as the reference solution and as artificial observational data. It is shown in that the optimal boundary has a rather complicated geometry, which is neither a staircase nor a straight line. The boundary conditions found by the data assimilation procedure bring the solution toward the reference solution, correcting the influence of the rotated grid.

The adjoint models, necessary for variational data assimilation, have been produced by the TAPENADE software, developed by the SCIPORT team. This software is shown to be able to produce adjoint code that can be used in data assimilation after a memory usage optimization.

This research is the subject of a collaboration with Venezuela and is partly funded by an ECOS Nord project.

We are focusing our attention on models derived from the linear Fokker-Planck equation. From a probabilistic viewpoint, these models have received particular attention in recent years, since they are a basic example of hypocoercivity. In fact, even though completely degenerate, these models are hypoelliptic and still satisfy some coercivity properties, in a broad sense of the word. Such models often appear in mechanics, finance and even biology. For such models, we believe it appropriate to build non-parametric statistical estimation tools. Initial results have been obtained for the estimation of the invariant density, under conditions guaranteeing its existence and uniqueness, and when only partial observational data are available. A paper on the non-parametric estimation of the drift has recently been accepted (see Samson et al., 2012, for results on parametric models). As far as the estimation of the diffusion term is concerned, a paper has been accepted , in collaboration with J.R. Leon (Caracas, Venezuela) and P. Cattiaux (Toulouse). Recursive estimators have also been proposed by the same authors in , also recently accepted. In a recent collaboration with Adeline Samson, from the statistics department of the Lab, we considered adaptive estimation, that is, a data-driven procedure for the choice of the bandwidth parameters. A paper has been submitted.
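A toy version of the invariant-density estimation problem can be written down directly (an illustration of the kernel approach on a simple elliptic diffusion with invented parameter values, not the hypoelliptic models studied above): simulate an ergodic Ornstein-Uhlenbeck process and compare a kernel estimator built from the discrete observations with the known Gaussian invariant density.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama simulation of dX = -theta X dt + sigma dW; the invariant
# density of this Ornstein-Uhlenbeck process is N(0, sigma^2 / (2 theta)).
theta, sigma, dt, nsteps = 1.0, 1.0, 1e-2, 400_000
x = np.empty(nsteps)
x[0] = 0.0
noise = sigma * np.sqrt(dt) * rng.normal(size=nsteps - 1)
for k in range(nsteps - 1):
    x[k + 1] = x[k] - theta * x[k] * dt + noise[k]

def kde(points, sample, bandwidth):
    """Gaussian kernel estimator of the density at the given points."""
    u = (points[:, None] - sample[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).mean(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))

pts = np.linspace(-1.5, 1.5, 7)
est = kde(pts, x[::10], bandwidth=0.1)   # subsampled to weaken correlation
var = sigma**2 / (2.0 * theta)
true = np.exp(-pts**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
```

The bandwidth is fixed by hand here; the adaptive estimation mentioned above amounts to choosing it from the data.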

In , we focused on damping Hamiltonian systems under the so-called fluctuation-dissipation condition.

Note that Professor Jose R. Leon (Caracas, Venezuela) is now funded by an international Inria Chair, allowing further collaboration on parameter estimation.

We recently wrote a paper on the use of the Euler scheme for inference purposes, considering reflected diffusions. This work could be extended to the hypoelliptic framework.

Studying risks in a spatio-temporal context is a very broad field of research, one that lies at the heart of current concerns at a number of levels (hydrological risk, nuclear risk, financial risk, etc.). Stochastic tools for risk analysis must be able to determine both the intensity and the probability of occurrence of damaging events such as extreme floods, earthquakes or avalanches. It is important to develop effective methodologies to prevent natural hazards, including for example the construction of dams.

Different risk measures have been proposed in the one-dimensional framework . The most classical ones are the return level (equivalent to the Value at Risk in finance) and the mean excess function (equivalent to the Conditional Tail Expectation, CTE). However, most of the time there are multiple risk factors, whose dependence structure has to be taken into account when designing suitable risk estimators. Relatively recent regulation (such as Basel II for banks or Solvency II for insurance) has been a strong driver for the development of realistic spatio-temporal dependence models, as well as of multivariate risk measures that effectively account for these dependencies.

We refer to for a review of recent extensions of the notion of return level to the multivariate framework. In the context of environmental risk, proposed a generalization of the concept of return period to dimension greater than or equal to two. Michele et al. proposed in a recent study to take into account the duration, and not only the intensity, of an event when designing what they call the dynamic return period. However, few studies address the issues of statistical inference in the multivariate context. In , , we proposed non-parametric estimators of a multivariate extension of the CTE. As might be expected, the properties of these estimators deteriorate when considering extreme risk levels. In collaboration with Elena Di Bernardino (CNAM, Paris), Clémentine Prieur is working on the extrapolation of the above results to extreme risk levels.

Elena Di Bernardino, Véronique Maume-Deschamps (Univ. Lyon 1) and Clémentine Prieur also derived an estimator of the bivariate tail . The study of tail behavior is of great importance for risk assessment.

With Anne-Catherine Favre (LTHE, Grenoble), Clémentine Prieur supervises the PhD thesis of Patricia Tencaliec. We are working on risk assessment for flood data in the Durance drainage basin (France). The PhD thesis started in October 2013 and will be defended next February. A first paper, on data reconstruction, has been accepted : it was a necessary step, as the initial series contained many missing data. A second paper is in preparation, on the modeling of precipitation amounts with semi-parametric sparse mixtures.

At the present time, the Earth is observed from space by more than thirty satellites. These platforms provide two kinds of observational information:

Eulerian information, such as radiance measurements: the radiative properties of the Earth and its fluid envelopes. These data can be plugged into numerical models by solving inverse problems.

Lagrangian information: the movement of fronts and vortices gives information on the dynamics of the fluid. Presently, this information is scarcely used in meteorology, by following small cumulus clouds and using them as Lagrangian tracers; the selection of these clouds must however be done by hand, and the altitude of the selected clouds must be known. This is done by using the cloud-top temperature.

MOISE was the leader of the ANR ADDISA project dedicated to the assimilation of images, and is a member of its follow-up GeoFluids (along with EPI FLUMINANCE and CLIME, and LMD, IFREMER and Météo-France), which ended in 2013.

During the ADDISA project we developed Direct Image Sequences Assimilation (DISA) and proposed a new scheme for the regularization of optical flow problems , which was recently extended . Thanks to the nonlinear brightness assumption, we proposed an algorithm to estimate the motion between two images, based on the minimization of a nonlinear cost function. We proved its efficiency and robustness on simulated and experimental geophysical flows. As part of the ANR project GeoFluids, we are investigating new ways to define the distance between a pair of images. One idea is to compare the gradients of the images rather than the actual pixel values, which leads to promising results. Another idea, currently under investigation, consists in comparing the main structures within each image; this can be done using, for example, a wavelet representation of the images. Both approaches have been compared, in particular regarding their relative merits in dealing with observation errors. This work has been extended to the progressive assimilation of the different scales contained in the observations .

In recent developments we have also used level-set methods to describe the evolution of the images. The advantage of this approach is that, thanks to the level-set function, the images can be considered as a state variable of the problem. We have derived an optimality system including the level sets of the images. This approach is being applied to the tracking of oceanic oil spills .

We investigate the use of optimal transport based distances for data assimilation, and in particular for assimilating dense data such as images. The PhD thesis of N. Feyeux studied the impact of using the Wasserstein distance in place of the classical Euclidean distance (pixel-to-pixel comparison). In a simplified one-dimensional framework, we showed that the Wasserstein distance is indeed promising. Figure illustrates the advantage of using the Wasserstein distance over the Euclidean one.
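The one-dimensional advantage can be reproduced in a few lines (a self-contained illustration in the spirit of the study, with made-up densities): for two identical bumps, the 1-Wasserstein distance, computed from the cumulative distribution functions, keeps growing with the displacement, whereas the Euclidean distance saturates as soon as the supports no longer overlap.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

def bump(center, width=0.02):
    """Gaussian bump on [0,1], normalized to unit mass."""
    rho = np.exp(-0.5 * ((x - center) / width) ** 2)
    return rho / (rho.sum() * dx)

def wasserstein1(rho1, rho2):
    """1D 1-Wasserstein distance: the integral of |CDF1 - CDF2|."""
    return np.sum(np.abs(np.cumsum(rho1) - np.cumsum(rho2))) * dx * dx

def euclidean(rho1, rho2):
    """Classical L2 (pixel-to-pixel) distance between the two densities."""
    return np.sqrt(np.sum((rho1 - rho2) ** 2) * dx)

shifts = [0.1, 0.2, 0.4]
wd = [wasserstein1(bump(0.3), bump(0.3 + s)) for s in shifts]
ed = [euclidean(bump(0.3), bump(0.3 + s)) for s in shifts]
# wd grows like the shift itself, while ed is essentially constant once the
# bumps are disjoint, i.e. it carries no information on the displacement.
```

This is precisely why a Wasserstein-type misfit keeps a useful gradient for position errors, where a Euclidean misfit is flat.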

We are interested in the tracking of mesoscale convective systems. A particular region of interest is West Africa, for which data and hydrological expertise are provided by T. Vischel and T. Lebel (LTHE, Grenoble).

A first approach involves adapting the multiple hypothesis tracking (MHT) model, originally designed by NCAR (National Center for Atmospheric Research) for tracking storms, to the West African data. With A. Makris (on a post-doctoral position), we proposed a Bayesian approach , which considers that the state at time t is composed, on the one hand, of the events (birth, death, splitting, merging) and, on the other hand, of the targets' attributes (positions, velocities, sizes, ...). The model decomposes the state into two sub-states: the events and the targets' positions/attributes. The events are updated first, conditioned on the previous target sub-state; then, given the new events, the target sub-state is updated. A simulation study allowed us to verify that this approach improves upon the frequentist approach of Storlie et al. (2009). It has been tested on simulations and investigated in the specific context of real data from West Africa . Using PHD (probability hypothesis density) filters adapted to our problem, generalizing recent developments in particle filtering for spatio-temporal branching processes (e.g. ), could be an interesting alternative to explore. The idea of a dynamic, stochastic tracking model should then provide the basis for generating rainfall scenarios over a relatively vast area of West Africa, in order to identify the main sources of variability of the monsoon phenomenon.

Given the complexity of modern urban areas, designing sustainable policies calls for more than sheer expert knowledge. This is especially true of transport or land use policies, because of the strong interplay between the land use and transportation systems. Land use and transport integrated (LUTI) modelling offers invaluable analysis tools for planners working on transportation and urban projects. Yet, very few local authorities in charge of planning make use of these strategic models. The explanation lies first in the difficulty of calibrating these models, and second in the lack of confidence in their results, which itself stems from the absence of any well-defined validation procedure. Our expertise in such matters will probably be valuable for improving the reliability of these models. To that end, we participated in setting up the ANR project CITiES, led by the STEEP EPI. This project started in early 2013, and two PhD theses on sensitivity analysis and calibration were launched in late 2013. Laurent Gilquin defended his PhD in October 2016 and Thomas Capelle will defend his in February 2017.

On top of the developments on calibration procedures and sensitivity analysis for LUTI models, a study was conducted to understand to what extent modelling is, or could be, more integrated into urban planning .

A 3-year contract with ARTELIA Group: funding for the PhD thesis of M.P. Daou (CIFRE)

A 3-year contract named ALBATROS with Mercator-Ocean on the topic of high-resolution ocean-waves-atmosphere interactions.

A 1-year contract with NOVELTIS on the topic "Development of demonstrators with AGRIF": see

A 1-year contract with IFREMER on the topic "Evolution of the Fortran mesh refinement library AGRIF: improved handling of coastlines and open boundaries in a parallel MPI/OpenMP context": see

The Chair OQUAIDO, for "Optimisation et QUAntification d'Incertitudes pour les Données Onéreuses" (Optimization and Uncertainty Quantification for Expensive Data), is a chair in applied mathematics held at Mines Saint-Étienne (France). It aims at gathering academic and technological partners to work on problems involving costly-to-evaluate numerical simulators, for uncertainty quantification, optimization and inverse problems. This Chair, created in January 2016, is the continuation of the DICE and ReDICE projects, which respectively covered the periods 2006-2009 and 2011-2015.

N. Feyeux's PhD is sponsored by the ARC3 Environment action of the Rhône-Alpes region.

Clémentine Prieur obtained an 8 k€ two-year grant for a local project on risk from the Labex Persyval. Philippe Naveau (LSCE, Paris) will visit the team for one month in this context.

A 3.5 year ANR contract: ANR CITiES (numerical models project selected in 2012). https://

A 4-year ANR contract: ANR TOMMI (Transport Optimal et Modèles Multiphysiques de l'Image), see paragraphs , .

A 5-year ANR contract (2011-2016): ANR COMODO (Communauté de Modélisation Océanographique) on the topic "Numerical Methods in Ocean Modelling" (coordinator: L. Debreu), see .

A 4-year ANR contract: ANR HEAT (Highly Efficient ATmospheric modelling) http://

A. Vidard leads a group of projects gathering multiple partners in France and the UK on the topic "Variational Data Assimilation for the NEMO/OPA9 Ocean Model", see .

C. Prieur chaired the GdR MASCOT NUM from 2010 to 2015; M. Nodet, E. Blayo, C. Helbert, E. Arnaud, L. Viry, S. Nanty and L. Gilquin are also involved in this group. She is still strongly involved in the group (as co-chair).
http://

C. Prieur leads the LEFE/MANU project MULTIRISK (2014-2016) on multivariate risk analysis, which gathers experts mainly from Lyon 1 University, CNAM, LSCE and Grenoble University.

Type: COOPERATION

Instrument: Specific Targeted Research Project

Program: Collaborative project FP7-SPACE-2013-1

Project acronym: ERA-CLIM2

Project title: European Reanalysis of the Global Climate System

Duration: 01/2014 - 12/2016

Coordinator: Dick Dee (ECMWF, Europe)

Other partners: Met Office (UK), EUMETSAT (Europe), Univ. Bern (CH), Univ. Vienna (AT), FFCUL (PT), RIHMI-WDC (RU), Mercator-Océan (FR), Météo-France (FR), DWD (DE), CERFACS (FR), CMCC (IT), FMI (FI), Univ. Pacifico (CL), Univ. Reading (UK), Univ. Versailles St Quentin en Yvelines (FR)

Inria contact: Arthur Vidard

Partner: European Centre for Medium Range Weather Forecast. Reading (UK)

A world-leading numerical weather prediction center, which includes an ocean analysis section in order to provide ocean initial conditions for the coupled ocean-atmosphere forecasts. They play a significant role in the NEMOVAR project, in which we are also a partner.

Partner: Met Office, the national British numerical weather and oceanographic service. Exeter (UK).

We have a strong collaboration with their ocean initialization team through our NEMO, NEMO-ASSIM and NEMOVAR activities. They are also our partner in the NEMOVAR consortium.

Partner: University of Reading, Department of Meteorology, Department of Mathematics

Subject: Data assimilation for geophysical systems.

F. Lemarié and L. Debreu collaborate with Hans Burchard and Knut Klingbeil from the Leibniz-Institut für Ostseeforschung in Warnemünde.

C. Prieur collaborates with Jose R. Leon (UCV, Central University of Caracas), who is funded by the international Inria chair program.

C. Prieur is collaborating with A.-C. Favre (LTHE, Grenoble) in the framework of a two-year (2015-2016) Canadian grant from the CFQCU (Conseil franco-québécois de coopération universitaire).

**SIDRE**

Title: Statistical inference for dependent stochastic processes and application in renewable energy

International Partners (Institution - Laboratory - Researcher):

Universidad de Valparaiso (Chile) - Karine Bertin

Universidad Central de Venezuela (Venezuela) - Jose León

Duration: 2016 - 2017

Start year: 2016

C. Prieur is one of the two French coordinators of the MATH AmSud project SIDRE. The project aims to develop, apply and study the properties of statistical tools in several settings (non-parametric models, segmentation models, time series and random field models), and to study some classes of long-range dependent processes, with a view to applications in renewable energy and other domains. In particular, non-parametric statistical procedures in Markov switching non-linear autoregressive models, finite mixtures, non-parametric functional tests and non-parametric estimators in stochastic damping Hamiltonian systems will be considered. Statistical tools for segmenting dependent multiple series, censoring processes in time series models and a new model interpolation scheme will also be studied.

F.-X. Le Dimet was invited for two weeks in October 2016 at Florida State University, where he delivered a seminar.

F.-X. Le Dimet was invited for three weeks at the Harbin Institute of Technology in June 2016 to work with Ma Jianwei and Long Li. He delivered two seminars and two courses on data assimilation.

F.-X. Le Dimet was invited for a week at Universidad Complutense in Madrid in November 2016 to lecture (8 hours) on variational data assimilation. A collaboration on oceanic oil pollution has been started; the project will build on the developments on assimilation of images and data assimilation for pollution carried out at the Institute of Mechanics in Hanoi.

E. Blayo, A. Vidard and E. Cosme organized the 6th French national symposium on data assimilation (Grenoble, November 30 - December 2, 2016).

L. Debreu organized the international workshop "DRAKKAR" on global ocean modelling with the NEMO system (January 2016).

F. Lemarié was the convener of a session "Recent Developments in Numerical Earth System Modeling" during the 2016 European Geosciences Union General Assembly in Vienna (http://

C. Prieur was a member of the organizing committee of the Journées MAS 2016, Grenoble http://

L. Debreu was a member of the program committee of the International Conference on Computational Science (ICCS 2016), Paris, December 2016.

C. Prieur was a member of the program committee of the International Conference on Sensitivity Analysis of Model Output (SAMO 2016), La Réunion http://

E. Blayo: reviewer for Mathematics and Computers in Simulation, Ocean Modelling.

L. Debreu: reviewer for Ocean Modelling, Ocean Dynamics, Geoscientific Model Development.

F. Lemarié: reviewer for Ocean Modelling, Dynamics of Atmospheres and Oceans, Geoscientific Model Development, SIAM Journal on Scientific Computing.

E. Blayo is the chair of the CNRS-INSU research program LEFE-MANU on mathematical and numerical methods for ocean and atmosphere http://

L. Debreu is the coordinator of the national group COMODO (Numerical Models in Oceanography).

C. Prieur co-chairs GdR MASCOT NUM, in which M. Nodet, E. Blayo, C. Helbert, E. Arnaud, L. Viry, S. Nanty and L. Gilquin are also involved. http://

F. Lemarié is a member of the CROCO (https://

E. Blayo is a deputy director of the Jean Kuntzmann Lab.

L. Debreu is a member of the scientific evaluation committee of the French Research Institute for Development (IRD).

E. Arnaud is a member of the executive committee of IXXI (complex systems institute) http://

C. Prieur is an elected member of the National Council of Universities (CNU).

C. Prieur is a member of the Scientific Council of the Mathematical Society of France (SMF).

C. Prieur is a member of the Statistical Mathematics Group committee of the French Statistical Society (SFdS).

Licence: E. Blayo, Mathématiques pour l'ingénieur, 52h, L1, University of Grenoble.

Licence: E. Arnaud, Mathématiques pour l'ingénieur, 57h, L1, University Grenoble Alpes, France.

Licence: E. Arnaud, Diplôme d'accès à la licence, 15h, L1, University Grenoble Alpes, France.

Licence: M. Nodet, Outils mathématiques pour l'ingénieur, 100h, L1, Univ. Grenoble Alpes, France.

Master: E. Arnaud, Tutorat d'apprentis MIAGE, 28h, M2, University Grenoble Alpes, France.

Master: E. Arnaud, Projet de programmation en traitement d'images, 16h, M1, University Grenoble Alpes, France.

Master: E. Arnaud, Computer Vision, 9h, M2, University Grenoble Alpes, France.

Master: M. Nodet, Partial differential equations, 20h, M1, Univ. Grenoble Alpes, France.

Master: M. Nodet, Inverse methods and data assimilation, 30h, M2, Univ. Grenoble Alpes, France.

Master: E. Blayo, Méthode des éléments finis, 47h, M1, University of Grenoble.

Master: E. Blayo, Partial Differential Equations and numerical methods, 43h, M1, ENSIMAG and University of Grenoble.

Doctorat: E. Blayo, M. Nodet, A. Vidard, Introduction to data assimilation, 20h, University of Grenoble.

Doctorat: L. Debreu, national doctoral training course "Modélisation numérique de l'océan et de l'atmosphère", 21-25 November 2016, Paris, France. With T. Dubos (LMD/École Polytechnique, Paris), G. Roullet (Brest University), F. Hourdin (LMD/CNRS, Paris).

**E-learning**

SPOC: E. Arnaud, M. Nodet, E. Blayo, A. Vidard, 10 weeks, Moodle platform http://

Pedagogical resources: all documents for problem-based learning, including videos http://

PhD: Nelson Feyeux, Application du transport optimal pour l'assimilation de données images, December 2016, A. Vidard, M. Nodet.

PhD: Mehdi-Pierre Daou, Développement d'une méthodologie de couplage multimodèles avec changements de dimension - Validation sur un cas-test réaliste, Université Grenoble Alpes, September 27, 2016, E. Blayo & A. Rousseau, see .

PhD: Laurent Gilquin, Échantillonnage Monte Carlo et quasi-Monte Carlo pour l'estimation des indices de Sobol'. Application à un modèle transport-urbanisme, Université Grenoble Alpes, October 17, 2016, C. Prieur and E. Arnaud, see .

PhD in progress: Thomas Capelle, Calibration of LUTI models, October 2013, P. Sturm (EPI STEEP), A. Vidard.

PhD in progress: Rémi Pellerej, Assimilation de données pour les modèles couplés, October 2014, A. Vidard, F. Lemarié.

PhD in progress: Charles Pelletier, Étude mathématique et numérique de la formulation du couplage océan-atmosphère dans les modèles de climat, December 2014, E. Blayo, F. Lemarié and P. Braconnot.

PhD in progress: Patricia Tencaliec, Approches stochastiques pour la gestion des risques environnementaux extrêmes, October 2013, Clémentine Prieur, Anne-Catherine Favre (LTHE).

PhD in progress: Reda El Amri, Analyse d'incertitudes et de robustesse pour les modèles à entrées et sorties fonctionnelles, April 2016, Clémentine Prieur, Céline Helbert (Centrale Lyon), funded by IFPEN, in the OQUAIDO chair program.

Internship: Damien Garino, Suivi de formes non rigides dans les images, M2, University Grenoble Alpes, 6 months, E. Arnaud and A. Vidard.

E. Blayo:

1 July 2016 - HDR thesis of Yann Michel, University of Toulouse (referee),

5 July 2016 - PhD thesis of François Mercier, University of Versailles-Saint Quentin (examiner),

9 November 2016 - PhD thesis of Vladimir Groza, University of Nice (referee),

5 December 2016 - PhD thesis of Cyrille Mosbeux, University of Grenoble (president).

L. Debreu – PhD thesis of Charles Colavolpe, University of Toulouse (referee),

L. Debreu – PhD thesis of Amandine Declerck, University of Toulon (referee),

A. Vidard – PhD thesis of Rachida El Ouaranis, Institut national polytechnique de Toulouse / Université Hassan 2 de Casablanca (referee),

F.-X. Le Dimet – HDR thesis of Philippe Moireau, Paris-Saclay,

F. Lemarié: 30 March 2016 - PhD thesis of Véra Oerder, Université Pierre et Marie Curie, Paris 6 (examiner),

E. Arnaud: juries of M2 theses and M2 MIAGE apprentices,

Clémentine Prieur took part in the CR2 recruitment committees for Inria Grenoble Alpes (2014, 2015, 2016).

E. Blayo gave several outreach talks, in particular for middle school and high school students, and for more general audiences.

Ch. Kazantsev and E. Blayo are participating in the creation of "La Grange des maths" in Varces (south of Grenoble). See http://

Since 2010, Ch. Kazantsev is the Director of the IREM of Grenoble http://

M. Nodet and E. Arnaud co-organise a year-round weekly math club in two secondary schools, where pupils investigate open mathematical problems.

M. Nodet is a member of "les Émulateurs", a group of Grenoble university professors who meet once a month to discuss innovative pedagogy and its applications in higher education.

M. Nodet takes part in the "Math en Jeans" maths club involving two secondary schools around Grenoble.

M. Nodet takes part in training secondary school maths teachers in mathematical modelling through the regional "Maison pour la Science", and is also in charge of an IREM group on building interdisciplinary projects for secondary school classes.

Podcast: Interstices interview with Clémentine Prieur https://