The MOISE project-team, LJK-IMAG laboratory (UMR 5224), is a joint project between CNRS, INRIA, Institut Polytechnique de Grenoble (Grenoble INP), Joseph Fourier University (UJF) and Pierre-Mendès-France University (UPMF). The team leader is Éric Blayo.

This project-team is located in the LJK laboratory.

MOISE is a research project-team in applied mathematics and scientific computing, focusing on the development of
**mathematical and numerical methods for direct and inverse modelling in environmental applications** (mainly geophysical fluids). The scientific backdrop of this project-team is the
**design of complex forecasting systems**, our overall applicative aim being to contribute to the improvement of such systems, especially those related to natural hazards: climate change,
regional forecasting systems for the ocean and atmosphere, decision tools for floods, snow avalanches, mud or lava flows...

A number of specific features are shared by these different applications: interaction of different scales, multi-component aspects, necessity of combining heterogeneous sources of information (models, measurements, images), uniqueness of each event. The development of efficient methods therefore requires taking these features into account, a goal which covers several aspects, namely:

Mathematical and numerical modelling

Data assimilation (deterministic and stochastic approaches)

Quantification of forecast uncertainties

Pluridisciplinarity is a key aspect of the project-team. The part of our work more related to applications is therefore being conducted in close collaboration with specialists from the different fields involved (geophysicists, etc.).

Geophysical flows generally have a number of particularities that make it difficult to model them and that justify the development of specifically adapted mathematical and numerical methods:

Geophysical flows are non-linear. There is often a strong interaction between the different scales of the flows, and small-scale effects (smaller than mesh size) have to be modeled in the equations.

Every geophysical episode is unique: a field experiment cannot be reproduced. Therefore the validation of a model has to be carried out in several different situations, and the role of the data in this process is crucial.

Geophysical fluids are non-closed systems, i.e. there are always interactions between the different components of the environment (atmosphere, ocean, continental water, etc.). Boundary terms are thus of prime importance.

Geophysical flows are often modeled with the goal of providing forecasts. This has several consequences, like the usefulness of providing corresponding error bars or the importance of designing efficient numerical algorithms to perform computations in a limited time.

Given these particularities, the overall objectives of the MOISE project-team described earlier will be addressed mainly by using the mathematical tools presented in the following.

**Models** allow a global view of the dynamics, consistent in time and space on a wide spectrum of scales. They are based on fluid mechanics equations and are complex since they deal with the
irregular shape of domains, and include a number of specific parameterizations (for example, to account for small-scale turbulence, boundary layers, or rheological effects). Another fundamental
aspect of geophysical flows is the importance of non-linearities, i.e. the strong interactions between spatial and temporal scales, and the associated cascade of energy, which of course makes
their modelling more complicated.

Since the behavior of a geophysical fluid generally depends on its interactions with others (e.g. interactions between ocean, continental water, atmosphere and ice for climate modelling),
building a forecasting system often requires
**coupling different models**. Several kinds of problems can be encountered, since the models to be coupled may differ in numerous respects: time and space resolution, physics, dimensions.
Depending on the problem, different types of methods can be used, which are mainly based on open and absorbing boundary conditions, multi-grid theory, domain decomposition methods, and optimal
control methods.

Despite their permanent improvement, models are always characterized by an imperfect physics and some poorly known parameters (e.g. initial and boundary conditions). This is why it is
important to also have
**observations** of natural systems. However, observations provide only a partial (and sometimes very indirect) view of reality, localized in time and space.

Since models and observations taken separately do not allow for a deterministic reconstruction of real geophysical flows, it is necessary to use these heterogeneous but complementary sources
of information simultaneously, by using
**data assimilation methods**. These tools for
**inverse modelling** are based on the mathematical theories of optimal control and stochastic filtering. Their aim is to identify system parameters which are poorly known in order to
correct, in an optimal manner, the model trajectory, bringing it closer to the available observations.

**Variational methods** are based on the minimization of a function measuring the discrepancy between a model solution and observations, using optimal control techniques for this purpose. The model inputs are then used as control variables. The Euler-Lagrange optimality condition is satisfied by the solution of the “Optimality System” (OS), which contains the adjoint model obtained by differentiation and transposition of the direct model. It is important to point out that this OS contains all the available information: model, data and statistics. The OS can therefore be considered as a generalized model. The adjoint model is a very powerful tool which can also be used for other applications, such as sensitivity studies.
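As an illustration, the function minimized in this framework can be written in the generic textbook form of four-dimensional variational assimilation (the symbols and weighting matrices below are the standard ones, not necessarily the team's exact formulation):

```latex
J(x_0) = \tfrac{1}{2}\,(x_0 - x_b)^{\mathrm T} B^{-1} (x_0 - x_b)
       + \tfrac{1}{2}\sum_{i=0}^{N} \bigl(H_i(x_i) - y_i\bigr)^{\mathrm T} R_i^{-1} \bigl(H_i(x_i) - y_i\bigr),
\qquad x_{i+1} = M_i(x_i),
```

where $x_0$ is the control variable (e.g. the initial condition), $x_b$ a background estimate, $M_i$ the model, $H_i$ the observation operators, $y_i$ the observations, and $B$, $R_i$ the background and observation error covariance matrices. The gradient $\nabla J$ is obtained efficiently by one backward integration of the adjoint model.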

**Stochastic filtering** is the basic tool in the sequential approach to the problem of data assimilation into numerical models, especially in meteorology and oceanography. The (unknown)
initial state of the system can be conveniently modeled by a random vector, and the error of the dynamical model can be taken into account by introducing a random noise term. The goal of
filtering is to obtain a good approximation of the conditional expectation of the system state (and of its error covariance matrix) given the observed data. These data appear as the
realizations of a random process related to the system state and contaminated by an observation noise.
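In the linear-Gaussian case, the conditional expectation described above is given exactly by the Kalman filter. A minimal sketch of one forecast/analysis cycle follows (the matrices are generic placeholders, not any particular geophysical model):

```python
import numpy as np

def kalman_step(x, P, M, Q, H, R, y):
    """One forecast/analysis cycle of the linear Kalman filter.

    x, P : current state estimate and its error covariance
    M, Q : linear model operator and model-error covariance
    H, R : linear observation operator and observation-error covariance
    y    : observation vector for this cycle
    """
    # Forecast step: propagate the estimate and its uncertainty
    xf = M @ x
    Pf = M @ P @ M.T + Q
    # Analysis step: the Kalman gain weights the innovation y - H xf
    S = H @ Pf @ H.T + R                 # innovation covariance
    K = Pf @ H.T @ np.linalg.inv(S)
    xa = xf + K @ (y - H @ xf)           # approximates E[state | data]
    Pa = (np.eye(len(x)) - K @ H) @ Pf   # analysis error covariance
    return xa, Pa
```

In realistic ocean or atmosphere models, the covariance matrices are far too large to store or invert, which motivates the reduced-order filters mentioned later in this report.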

The development of data assimilation methods in the context of geophysical fluids, however, is difficult for several reasons:

the models are often strongly non-linear, whereas the theories result in optimal solutions only in the context of linear systems;

the model error statistics are generally poorly known;

the size of the model state variable is often quite large, which requires dealing with huge covariance matrices and working with very large control spaces;

data assimilation methods generally increase the computational costs of the models by one or two orders of magnitude.

Such methods are now used operationally (after 15 years of research) in the main meteorological and oceanographic centers, but tremendous development is still needed to improve the quality of the identification, to reduce their cost, and to make them available for other types of applications.

A challenge of particular interest consists in developing methods for assimilating image data. Indeed, images and sequences of images represent a large amount of data which are currently underused in numerical forecast systems. For example, precursors of extreme meteorological events like thunderstorms, for which an early forecast is required, are visible on satellite images.

However, despite their huge informative potential, images are only used in a qualitative way by forecasters, mainly because of the lack of an appropriate methodological framework. In order to extend data assimilation techniques to image data we need to be able to:

identify and extract from image dynamics the relevant information (for instance, structures) about the evolution of the model state variables;

link image dynamics with the underlying physical evolution processes;

define functional spaces for images which have good topological properties;

build observation operators that map the space of model state variables onto the aforementioned image space.

The use of image dynamics in numerical forecast systems is not restricted to meteorological or oceanographic applications: other scientific disciplines, like hydrology (spatial observation of the main river bed during a flood), glaciology (radar exploration of polar ice, ice cover), medicine, etc., are interested in the development of such techniques.

Due to the strong non-linearity of geophysical systems and to their chaotic behavior, the dependence of their solutions on external parameters is very complex. Understanding the relationship between model parameters and model solutions is a prerequisite to design better models as well as better parameter identification. Moreover, given the present strong development of forecast systems in geophysics, the ability to provide an estimate of the uncertainty of the forecast is of course a major issue. However, the systems under consideration are very complex, and providing such an estimation is very challenging. Several mathematical approaches are possible to address these issues, using either variational or stochastic tools.

**Variational approach.** In the variational framework, the sensitivity is the gradient of a response function with respect to the parameters or the inputs of the model. Adjoint techniques can therefore be used for this purpose. If sensitivity is sought in the context of a forecasting system assimilating observations, the optimality system must be derived, which leads to the study of second-order properties: the spectrum and eigenvectors of the Hessian provide important information on system behavior.

**Global stochastic approach.** Using the variational approach to sensitivity leads to efficient computation of the derivatives of complex codes. However, this approach remains local, because derivatives are generally computed at specific points. The stochastic approach to uncertainty analysis aims at studying global criteria, based on a joint probability distribution modelling of the problem variables. The resulting sensitivity indices describe the global variability of the phenomena. For example, the first-order Sobol sensitivity index of an input is the ratio between the variance of the conditional expectation of the output given that input and the total output variance. The computation of such quantities leads to statistical problems. For example, the sensitivity indices have to be estimated efficiently from a few runs, using semi- or non-parametric estimation techniques. The stochastic modelling of the input/output relationship is another solution.
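As a toy illustration of such global indices, the first-order Sobol index can be estimated by the classical pick-freeze Monte Carlo scheme; the model and input distributions below are illustrative assumptions:

```python
import numpy as np

def sobol_first_order(model, d, i, n=100_000, rng=None):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index S_i.

    model : maps an (n, d) array of input samples to n scalar outputs
    d, i  : input dimension and index of the input under study
    Inputs are assumed independent and uniform on [0, 1] in this sketch.
    """
    rng = np.random.default_rng(rng)
    A = rng.random((n, d))
    B = rng.random((n, d))
    AB = B.copy()
    AB[:, i] = A[:, i]                 # "freeze" input i, resample the rest
    yA, yAB = model(A), model(AB)
    # S_i = Var(E[Y | X_i]) / Var(Y); Cov(yA, yAB) estimates the numerator
    return np.cov(yA, yAB)[0, 1] / np.var(yA, ddof=1)

# Toy model Y = 4*X1 + X2: analytically S_1 = 16/17 ≈ 0.94
toy = lambda X: 4.0 * X[:, 0] + X[:, 1]
```

For expensive geophysical codes, each model evaluation is costly, so such brute-force sampling is replaced by estimators built from a few runs or by a stochastic surrogate of the input/output relationship, as mentioned above.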

The evolution of natural systems, in the short, mid, or long term, has extremely important consequences for both the global earth system and humanity. Forecasting this evolution is thus a major challenge from the scientific, economic, and human viewpoints.

Humanity has to face the problem of
**global warming**, brought on by the emission of greenhouse gases from human activities. This warming will probably cause huge changes at global and regional scales, in terms of climate,
vegetation and biodiversity, with major consequences for local populations. Research has therefore been conducted over the past 15 to 20 years in an effort to model the earth's climate and
forecast its evolution in the 21st century in response to anthropogenic action.

With regard to short-term forecasts, the best and oldest example is of course
**weather forecasting**. Meteorological services have been providing daily short-term forecasts for several decades which are of crucial importance for numerous human activities.

Numerous other problems can also be mentioned, like
**seasonal weather forecasting** (to enable powerful phenomena like an El Niño event or a drought period to be anticipated a few months in advance),
**operational oceanography** (short-term forecasts of the evolution of the ocean system to provide services for the fishing industry, ship routing, defense, or the fight against marine pollution),
**air pollution** prediction systems, the prediction of
**floods**, or the simulation of
**mud flows** and
**snow avalanches** for impact studies and regional planning.

As mentioned previously, mathematical and numerical tools are omnipresent and play a fundamental role in these areas of research. In this context, the vocation of MOISE is not to carry out numerical prediction, but to address mathematical issues raised by the development of prediction systems for these application fields, in close collaboration with geophysicists.

Understanding and forecasting the ocean circulation is currently the subject of an intensive research effort by the international scientific community. This effort was primarily motivated by the crucial role of the ocean in determining the earth's climate, particularly from the perspective of global change. In addition, important recent research programs are aimed at developing operational oceanography, i.e. near real-time forecasting of ocean circulation, with applications for ship routing, fisheries, weather forecasting, etc. Another related field is coastal oceanography, dealing for example with pollution, littoral planning, or ecosystem management. Local and regional agencies are currently very interested in numerical modelling systems for coastal areas.

Both ocean-alone models and coupled ocean-atmosphere models are being developed to address these issues. In this context, the MOISE project-team conducts efforts mainly on the following topics:

*Multi-resolution approaches and coupling methods*: Many applications in coastal and operational oceanography require high resolution local models. These models can either be forced at
their boundaries by some known data, or be dynamically coupled with a large-scale coarser resolution model. Such model interactions require specific mathematical studies on open boundary
conditions, refinement methods (like mesh refinement or stochastic downscaling), and coupling algorithms. The latter also have to be studied in the context of ocean-atmosphere coupled
systems.

*Advanced numerical schemes*: Most ocean models use simple finite difference schemes on structured grids. We are seeking better schemes, offering both accuracy and good conservation
properties, and dealing with irregular boundaries and bottom topography.

*Data assimilation methods for ocean modelling systems*: The main difficulties encountered when assimilating data in ocean or atmosphere models are the huge dimension of the model state vector (typically 10^6–10^7), the strongly nonlinear character of the dynamics, and our poor knowledge of model error statistics. In this context, we are developing reduced-order sequential and variational data assimilation methods addressing the aforementioned difficulties. We are also working on the assimilation of Lagrangian data and of sequences of images, and on the design of data assimilation methods for multi-resolution models and for coupled systems.

Most of these studies are led in strong interaction with geophysicists, in particular from the Laboratoire des Ecoulements Géophysiques et Industriels (LEGI, Grenoble).

The study of past climate is a means of understanding climatic mechanisms. Drillings in polar ice sheets provide a huge amount of information on paleoclimates: correlation between greenhouse gases and climate, fast climatic variability during the last ice age, etc. However, in order to improve the quantitative use of the data from this archive, numerous questions remain to be answered because of phenomena occurring during and after the deposition of snow. An important research aim is therefore to optimally model ice sheets in the vicinity of drilling sites in order to improve their interpretation: age scale for the ice and for the gas bubbles, mechanical thinning, initial surface temperature and accumulation when snow is deposited, spatial origin of ice from the drilling.

In other respects, ice streams represent an important feature of ice flows, since they account for most of the ice leaving the ice sheet (in Antarctica, it is estimated that ice streams evacuate more than 70% of the ice mass through less than 10% of the coastline). Furthermore, recent observations showed that some important ice streams are presently accelerating. Thus, we seek to improve models of ice sheets by developing data assimilation approaches in order to calibrate them using available observations.

Another objective is the evaluation of the state of the polar ice caps in the past, and their interactions with the other components of the earth climate, in order to forecast their evolution in the forthcoming centuries. The joint use of models and data, through data assimilation techniques, to improve system description is relatively new for the glaciological community. Therefore inverse methods have to be developed or adapted for this particular purpose.

By gaining and losing mass, glaciers and ice sheets play a key role in sea level evolution. This is obvious when looking at the past: for example, the collapse of the large northern-hemisphere ice sheets after the Last Glacial Maximum contributed a 120 m rise in sea level. It is particularly worrying when the future is considered. Indeed, recent observations clearly indicate that important changes in the velocity structure of both the Antarctic and Greenland ice sheets are occurring, suggesting that large and irreversible changes may have been initiated. This has been clearly emphasized in the last report published by the Intergovernmental Panel on Climate Change (IPCC), which further insisted on the poor current knowledge of the key processes at the root of the observed accelerations and concluded that reliable projections of sea-level rise are currently unavailable. The general aim of this project is to develop data assimilation methods for ice flow modelling, in order to provide accurate and reliable estimates of the future contribution of ice sheets to sea-level rise (SLR).

Development of ice flow adjoint models is by itself a scientific challenge. This new step forward is clearly motivated by the amount of data now available at both the local and large scales.

Water resources and floods are critical issues. They are the result of complex interactions within the water cycle between meteorology, hydrology and hydraulics. Mathematical and numerical modelling is becoming accepted as a standard engineering practice for prevention and prediction.

Concerning river hydraulics, forward models based on 1-D and 2-D shallow water equations and the corresponding industrial software packages (e.g. Telemac2D, Carima1D) are satisfactory for many situations. Nevertheless, for real applications, initial and boundary conditions (basically, water level and discharge) are only very partially measured, hence difficult to prescribe. Empirical parameters (e.g. land roughness) are calibrated manually, with difficulty. Also, coupling between a 1-D network model and local 2-D configurations is a priori not feasible using the standard computational software.

Concerning soil infiltration and rainfall-runoff phenomena, on the one hand forward models still have to be improved (e.g. 3-D Richards' equations), and on the other hand, empirical parameters are numerous and very difficult to prescribe.

Realistic and reliable numerical prediction requires an integrated approach with all components (different models coupled together and the corresponding measured data), at an affordable computational cost. Sensitivity analysis and data assimilation methods, which have shown their potential in other geosciences like meteorology and oceanography, are now at the forefront in hydrology. This prediction chain is far from being operational in hydrology and river hydraulics.

The problems addressed in MOISE are related to the coupling/superposition of models, more efficient forward solvers, sensitivity analysis and data assimilation for catchment scale hydrology and/or river hydraulics.

The computation of the wind at small scale and the estimation of its uncertainties is of particular importance for applications such as wind energy resource estimation. To this aim, we develop a new method based on the combination of an existing numerical weather prediction model providing a coarse prediction, and a Lagrangian stochastic model adapted from a pdf method introduced by S.B. Pope for turbulent flows: the Stochastic Downscaling Method (SDM).

AGRIF is licensed under a GNU (GPL) license and can be downloaded from its web site.

MSFF, a multi-scale open-source CFD code based on AGRIF and developed in the context of the COMMA project, has been released.

Recently, A. Rousseau and C. Lucas have performed theoretical and numerical studies on the derivation of viscous shallow water equations. They proved that it is sometimes necessary to take into account the cosine part of the Coriolis force (which is usually neglected, leading to the so-called Traditional Approximation).

After a first paper published in 2008, they presented their new results at an international conference on mathematical oceanography. They now pursue this work with M. Petcu (Poitiers University) in order to investigate the influence of the traditional approximation in more complicated models, such as the viscous primitive equations of the ocean.
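For reference, the rotating shallow water equations under the traditional approximation read, in a standard generic form (this statement deliberately omits the cosine Coriolis terms whose effect is precisely the object of the study above):

```latex
\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} + f\,\mathbf{k}\times\mathbf{u}
  = -g\,\nabla h + \nu\,\Delta \mathbf{u},
\qquad
\partial_t h + \nabla\cdot(h\,\mathbf{u}) = 0,
```

where $\mathbf{u}$ is the horizontal velocity, $h$ the water height, and $f = 2\Omega\sin\theta$ the traditional Coriolis parameter; the complete Coriolis force also involves $2\Omega\cos\theta$ terms coupling horizontal and vertical motion.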

The implementation of high-resolution local models can be performed in several ways. A usual way consists in designing a local model and using some external data to force it at its open boundaries. These data can be either climatological or issued from previous simulations of a large-scale, coarser-resolution model. The main difficulty in that case is to specify relevant open boundary conditions (OBCs).

In collaboration with V. Martin (LAMFA Amiens), we have started a work on the analysis of the impact of the imperfection of the external data on the design of efficient OBCs.

In order to avoid a modal decomposition in the vertical direction for the inviscid primitive equations, a complete x-z finite volume discretization has been proposed in collaboration with R. Temam, with a special treatment of the boundaries in order to match the required boundary conditions. This work is in progress.

Other physical situations require coupling two models with not only different resolutions, but also different physics. Such a coupling can be studied within the framework of
global-in-time Schwarz methods. However, the efficiency of these iterative algorithms is strongly dependent on interface conditions. As a first step towards coupling a regional scale
primitive equations ocean model with a local Navier-Stokes model, we started a study on the derivation of interface conditions for the 2-D x-z Navier-Stokes equations (D. Cherel's PhD thesis). It has been shown that several usual conditions lead to divergent algorithms, and that a convergent algorithm is obtained when using transmission conditions derived by a variational calculation.

Many applications in regional oceanography and meteorology require high resolution regional models with accurate air-sea fluxes. Separate integrations of oceanic and atmospheric model components in forced mode (i.e. without any feedback from one component to the other) may be satisfactory for numerous applications. However, two-way coupling is required for analyzing energetic and complex phenomena (e.g. tropical cyclones, climate studies, ... ). In this case, connecting the two model solutions at the air-sea interface is a difficult task, which is often addressed in a simplified way from a mathematical point of view. In this context, domain decomposition methods provide flexible and efficient tools for coupling models with non-conforming time and space discretizations.

F. Lemarié, in his PhD thesis (2008), addressed the application of such methods to this ocean-atmosphere coupling problem. In the continuity of this work, we have improved some results on the convergence of the Schwarz coupling method for two non-stationary 1-D diffusion equations with different coefficients.
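The flavor of these global-in-time (Schwarz waveform relaxation) iterations can be conveyed by a minimal 1-D heat equation sketch. Unlike the study above, it uses a single diffusivity and overlapping subdomains with Dirichlet transmission conditions, a simplified setting in which convergence is classical:

```python
import numpy as np

def heat_step(u, r, left, right):
    """One explicit Euler step of u_t = nu*u_xx with Dirichlet end values."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    un[0], un[-1] = left, right
    return un

def schwarz_waveform(n_iter=12, nx=41, nt=200, nu=1.0, T=0.05):
    """Overlapping Schwarz waveform relaxation for the 1-D heat equation.

    The domain [0, 1] is split into [0, 0.6] and [0.4, 1]; each subdomain
    is integrated over the WHOLE time window, then the Dirichlet traces at
    the interfaces are exchanged (Jacobi fashion) and the window is redone.
    Returns the sup error versus a monolithic solve, per iteration.
    """
    x = np.linspace(0.0, 1.0, nx)
    dx, dt = x[1] - x[0], T / nt
    r = nu * dt / dx**2                    # explicit stability: r <= 0.5
    i1, i2 = int(0.4 * (nx - 1)), int(0.6 * (nx - 1))
    u0 = np.sin(np.pi * x)

    ref = [u0.copy()]                      # monolithic reference solution
    for _ in range(nt):
        ref.append(heat_step(ref[-1], r, 0.0, 0.0))
    ref = np.array(ref)

    g1 = np.zeros(nt + 1)                  # trace at x[i2], seen by domain 1
    g2 = np.zeros(nt + 1)                  # trace at x[i1], seen by domain 2
    errs = []
    for _ in range(n_iter):
        u1 = [u0[: i2 + 1].copy()]         # domain 1 sweep over the window
        for n in range(nt):
            u1.append(heat_step(u1[-1], r, 0.0, g1[n + 1]))
        u1 = np.array(u1)
        u2 = [u0[i1:].copy()]              # domain 2 sweep over the window
        for n in range(nt):
            u2.append(heat_step(u2[-1], r, g2[n + 1], 0.0))
        u2 = np.array(u2)
        g1 = u2[:, i2 - i1].copy()         # exchange the interface traces
        g2 = u1[:, i1].copy()
        errs.append(max(np.abs(u1 - ref[:, : i2 + 1]).max(),
                        np.abs(u2 - ref[:, i1:]).max()))
    return errs
```

In the non-overlapping, multi-physics setting studied by the team, the Dirichlet exchange above is replaced by carefully designed transmission conditions, which is precisely what governs the convergence speed of the algorithm.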

These works are partially supported by the ANR (COMMA project).

Reducing the traditional errors in terrain-following vertical coordinate ocean models (or sigma models) has been a focus of interest for the last two decades. The objective is to use this class of model in regional domains which include not only the continental shelf, but the slope and deep ocean as well. Two general types of error have been identified: 1) the pressure-gradient error and 2) spurious diapycnal diffusion associated with steepness of the vertical coordinate. In a recent paper, we have studied the problem of diapycnal mixing. The solution to this problem requires a specifically designed advection scheme. We propose and validate a new scheme, where diffusion is split from advection and is represented by a rotated biharmonic diffusion scheme with flow-dependent hyperdiffusivity satisfying the Peclet constraint.

Work in this area will continue and is also supported by a contract with IFREMER.

The back and forth nudging (BFN) algorithm has recently been introduced for its simplicity: in comparison with variational schemes, it requires no linearization, adjoint equation, or minimization process, yet it provides a new estimate of the initial condition at each iteration.

We have studied its convergence properties as well as its efficiency in numerical experiments with a 2-D shallow water model. Comparisons with 4D-VAR have been performed. Finally, we also studied a hybrid method, considering a few iterations of the BFN algorithm as a preprocessing tool for the 4D-VAR algorithm. We have shown that the BFN algorithm is extremely powerful in the very first iterations, and also that the hybrid method can both notably improve the quality of the initial condition identified by the 4D-VAR scheme and reduce the number of iterations needed to achieve convergence.

We considered, from a theoretical point of view, the case of one-dimensional transport equations, either viscous or inviscid, linear or not (Burgers' equation). We showed that for non-viscous equations (both linear transport and Burgers), the convergence of the algorithm holds under observability conditions. Convergence can also be proven for viscous linear transport equations under some strong hypotheses, but not for the viscous Burgers' equation. Moreover, the convergence rate is always exponential in time. We also note that the forward and backward system of equations is well posed when no nudging term is considered. Several comparisons with the 4D-VAR and quasi-inverse algorithms have also been performed on this equation. The application of the BFN algorithm to the OPA-NEMO ocean model is currently under investigation. The first experiments are very encouraging.
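A minimal sketch of the BFN idea on a 1-D linear transport equation follows; the setting is idealized (the field is observed everywhere, and the first-order upwind discretization is an illustrative choice, not the one used in the studies above):

```python
import numpy as np

def bfn_transport(n_iter=5, nx=100, nt=200, c=1.0, K=10.0):
    """Back-and-forth nudging (BFN) for u_t + c u_x = 0 on a periodic domain.

    Each iteration integrates forward in time with a relaxation (nudging)
    term K*(obs - u), then backward in reversed time, yielding a new
    estimate of the initial condition.
    """
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = 1.0 / nx
    dt = 0.5 * dx / c                      # CFL number 0.5
    u_true0 = np.sin(2 * np.pi * x)

    def step(u, vel, forcing):
        # One first-order upwind step of u_t + vel*u_x = forcing (periodic)
        if vel >= 0:
            dudx = (u - np.roll(u, 1)) / dx
        else:
            dudx = (np.roll(u, -1) - u) / dx
        return u + dt * (-vel * dudx + forcing)

    # Synthetic observations: the discrete "true" trajectory
    obs = [u_true0.copy()]
    for _ in range(nt):
        obs.append(step(obs[-1], c, 0.0))

    u0 = np.zeros(nx)                      # first guess: no signal at all
    errs = []
    for _ in range(n_iter):
        u = u0.copy()
        for n in range(nt):                # forward pass, nudged to obs
            u = step(u, c, K * (obs[n] - u))
        for n in range(nt, 0, -1):         # backward pass: reversed time
            u = step(u, -c, K * (obs[n] - u))   # s = T - t flips the sign of c
        u0 = u
        # The residual error is limited by the scheme's numerical diffusion
        errs.append(np.abs(u0 - u_true0).max())
    return errs
```

The recovered initial condition is accurate only up to the artificial diffusion of the upwind scheme, which echoes the theoretical finding above that viscosity is what obstructs convergence of the BFN algorithm.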

Finally, within the standard nudging framework, we considered the definition of an innovation term that takes into account the measurements and respects the symmetries of the physical model. We proved the convergence of the estimation error to zero on a linear approximation of the system (a 2D shallow-water model). It boils down to estimating the fluid velocity in a water-tank system using only SSH measurements. The observer is very robust to noise and easy to tune. The general nonlinear case has been illustrated by numerical experiments, and the results have been compared with the standard nudging techniques.

One of the main limitations of current operational variational data assimilation techniques is that they assume the model to be perfect, mainly because of computing cost issues. Much research has been carried out to reduce the cost of controlling model errors, either by controlling the correction term only in certain privileged directions or by controlling only the systematic and time-correlated part of the error.

Both of the above methods consider the model error as a forcing term in the model equations. Trémolet (2006) describes another approach, in which the full state vector (a 4-D field: 3-D space + time) is controlled. Because of the computing cost, one obviously cannot control the model state at each time step. Therefore, the assimilation window is split into sub-windows, and only the initial conditions of each sub-window are controlled, the junctions between sub-windows being penalized. One interesting property is that, in this case, the gradient computations for the different sub-windows are independent and can therefore be done in parallel.
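Schematically, with the window split into $K$ sub-windows $W_k$ and controls $x_0^{(1)},\dots,x_0^{(K)}$, the penalized cost function takes the generic form (the notation here is ours, not Trémolet's exact formulation, and the background term on $x_0^{(1)}$ is omitted):

```latex
J = \sum_{k=1}^{K} \sum_{i \in W_k}
      \bigl(H_i(x_i^{(k)}) - y_i\bigr)^{\mathrm T} R_i^{-1} \bigl(H_i(x_i^{(k)}) - y_i\bigr)
  + \sum_{k=2}^{K}
      \bigl(x_0^{(k)} - \bar{x}^{(k-1)}\bigr)^{\mathrm T} Q^{-1} \bigl(x_0^{(k)} - \bar{x}^{(k-1)}\bigr),
```

where $\bar{x}^{(k-1)}$ is the model state at the end of sub-window $k-1$ propagated from $x_0^{(k-1)}$, and $Q$ penalizes the jumps at the junctions; each observation term involves a single sub-window, hence the parallelism of the gradient computations.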

We are implementing this method in a realistic oceanic framework using OPAVAR/NEMOVAR as part of the VODA ANR project.

The objectives are to study the mathematical formulation of variational data assimilation for locally nested models and to conduct numerical experiments for validation.

The state equations of the optimality system have been written for the general case of two embedded grids, for which several kinds of control (initial conditions, boundary conditions) have been proposed. Both one-way and two-way interactions have been studied. This last year, we worked on the integration of nonlinear grid interactions in the algorithm. Additionally, the problem of the specification of background error covariance matrices has been studied.

In the ANR MSDAG project and Emilie Neveu's PhD, we continue to work on this subject. Our main interest is in the use of multiscale optimization methods for data assimilation. The idea is to apply a multigrid algorithm to the solution of the optimization problem. One focus is on the design of resolution-dependent observation operators. The other key point is the adaptation of the original FAS (Full Approximation Scheme) multigrid algorithm to local mesh refinement. The applications will be done in the context of the assimilation of images.

In this section, we focus on data assimilation techniques devoted to the identification of external model parameters and parametrizations.

First, attention was paid to controlling the bottom topography by variational data assimilation, in the framework of time-dependent motion governed by a non-linear barotropic ocean model. Assimilation of artificially generated data makes it possible to measure the influence of various error sources and to classify the impact of noise present in observational data and model parameters. The choice of the assimilation window is discussed. Assimilating noisy data with longer windows provides higher accuracy of the identified topography. The topography identified once by data assimilation can be successfully used for other model runs that start from other initial conditions and are situated in other parts of the model's attractor.

Second, the numerical scheme of the shallow-water model at the boundary was controlled by assimilating data from a high-resolution model. It was shown that controlling the approximation of boundary derivatives and interpolations can increase the model's accuracy in boundary regions and improve the solution in general. On the other hand, the optimal boundary schemes obtained in this way may not approximate derivatives in the usual sense. A particular study was performed on the example of the one-dimensional wave equation to understand this phenomenon. In this simple case, we see that the control changes the interval's length (and, consequently, all approximations of derivatives) in order to compensate for the numerical error in the wave velocity.

To illustrate the advantages of the optimal parametrization of the model's operators in the boundary region, we performed several classical experiments with a linearized shallow-water model in a square box. We run the reference model on a fine-resolution grid and assimilate the data of this run into a coarse-resolution model, controlling its boundary.

The first experiment concerns the spurious oscillations that appear in the numerical solution when the Munk boundary layer is not resolved by the model's grid. The Munk layer's width in this experiment was 60 km. One can see in the figure (left) that strong oscillations are present in the solution of the model on a 133 km grid, while they are absent both in the finer-grid solutions (45 and 15 km) and in the solution with the optimal boundary.

The second experiment is devoted to the study of the inertia-gravity and Rossby waves simulated by the same model. We measure the difference between the reference solution, obtained on the high-resolution grid (h = 15 km), and the solutions on coarser grids. One can see in the figure (right) that at the coarse resolution (133 km) the difference reaches values as high as 1000. At the medium resolution (45 km), the difference is smaller, but with an increasing tendency. When the parametrization of the model's boundary is optimal, the solution remains close to the reference one, with no increasing tendency even at low resolution.

A new version of the French ocean model OPA (Océan PArallélisé), the ocean component of the NEMO (Nucleus for European Modelling of the Ocean) framework, was released in 2005. For the previous version of the OPA model (8.2), a variational data assimilation system, OPAVAR, was developed mainly by A. Weaver at CERFACS. However, the OPA 9 model has been completely rewritten in Fortran 90, and its code structure is significantly different from that of previous versions, making it quite difficult to update OPAVAR .

Since a large community is interested in variational data assimilation with OPA 9, we built a working group (coordinated by A. Vidard) in order to bring together various OPAVAR user groups with diverse scientific interests (ranging from singular-vector and sensitivity studies to specific issues in variational assimilation), and to obtain technical and scientific support from INRIA Sophia (automatic adjoint derivation, TROPICS project-team) and ECMWF (parallelization). This project aimed at avoiding duplication of effort and at developing a common NEMOVAR platform. It has led to the creation of the VODA (Variational Ocean Data Assimilation for multi-scale applications) ANR project.

The project is now well advanced: both 3D and 4D variational methods are implemented, and a fully parallel version of NEMOTAM (Tangent and Adjoint Model for NEMO) is now available and will soon be included in the NEMO standard version.

As part of VODA, we are also working toward a NEMO-ASSIM framework, in collaboration with LEGI, by including other kinds of data assimilation methods (e.g. the SEEK filter) in the NEMOVAR framework.

Apart from the VODA ANR project, the NEMOVAR working group receives additional financial support from the LEFE-Assimilation and Mercator national programs.

This work is motivated by the Argo program, which aims at deploying a network of 3000 profiling floats over the world ocean. These profilers drift at a typical depth of 1500 m and perform a vertical profile of temperature and salinity measurements every ten days. Their position is known every ten days, which yields a set of Lagrangian data. We have developed a variational method to assimilate such data; a paper presenting the underlying theory of this method has been published in . Twin experiments were performed within the OPAVAR model, in an idealized configuration.
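The core idea of assimilating float positions can be reduced to a toy problem. The sketch below is illustrative only (not the OPAVAR/NEMOVAR implementation; the constant-drift model and all numbers are hypothetical): a float drifts with an unknown velocity, its position is observed every ten days, and a least-squares fit over the whole window identifies that velocity.

```python
import numpy as np

# Toy Lagrangian assimilation: positions obs_k ~ t_k * v, recover v.
rng = np.random.default_rng(7)
v_true = np.array([0.3, -0.1])            # unknown drift velocity
times = np.arange(1, 11) * 10.0           # one position fix every ten days
obs = times[:, None] * v_true + 0.05 * rng.standard_normal((10, 2))

# Positions depend linearly on v, so minimizing the position misfit
# sum_k |t_k v - obs_k|^2 is a small linear least-squares problem.
v_est = np.linalg.lstsq(times[:, None], obs, rcond=None)[0][0]
print(np.linalg.norm(v_est - v_true))     # small identification error
```

In the realistic case the float trajectory depends non-linearly on the full velocity field, so the minimization is performed iteratively with an adjoint model, but the observation operator (positions along a trajectory) is the same.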

The next step is to implement this method into the state-of-the-art realistic ocean model NEMO and its 4D-Var extension NEMOVAR, as presented in .

This project is currently under development and is granted by ANR-VODA.

At the present time, the observation of the Earth from space is carried out by more than thirty satellites. These platforms provide two kinds of observations:

Eulerian information, such as radiance measurements: the radiative properties of the Earth and its fluid envelopes. These data can be plugged into numerical models by solving inverse problems.

Lagrangian information: the movement of fronts and vortices gives information on the dynamics of the fluid. At present, this information is scarcely used in meteorology: small cumulus clouds are followed and used as Lagrangian tracers, but these clouds must be selected by hand and their altitude must be known; the latter is estimated from the temperature at the top of the cloud.

MOISE is the leader of the ADDISA project, selected and funded by the Agence Nationale de la Recherche and dedicated to the assimilation of images. The members of the ADDISA group are the INRIA project-team CLIME, the Laboratoire des Ecoulements Géophysiques et Industriels (CNRS, Grenoble), the Institut de Mathématiques de Toulouse and Météo-France. The principle is to link images and numerical models in order to best retrieve the initial condition. Two basic techniques are tested:

deduce pseudo-observations from the images, such as the velocity of the flow, then assimilate these data using a regular variational data assimilation scheme;

consider “objects" in the images (fronts, vortices), compare them with the same objects created by the model, and inject them into an assimilation scheme that takes them into account.

The method is already used by Météo-France to detect precursors of severe storms.
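The first technique, deducing velocity pseudo-observations from an image pair, can be illustrated with the crudest possible estimator. The sketch below is hypothetical (real systems use far more sophisticated optical-flow or wavelet-based estimators): a block-matching search recovers the integer displacement of a tracer pattern between two synthetic frames.

```python
import numpy as np

def shift_image(img, dx, dy):
    """Integer-pixel periodic shift, used here to build a synthetic second frame."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def block_match(f1, f2, max_disp=3):
    """Return the (dx, dy) minimizing the squared difference between frames."""
    best, best_d = np.inf, (0, 0)
    for dx in range(-max_disp, max_disp + 1):
        for dy in range(-max_disp, max_disp + 1):
            err = np.sum((shift_image(f1, dx, dy) - f2) ** 2)
            if err < best:
                best, best_d = err, (dx, dy)
    return best_d

rng = np.random.default_rng(1)
frame1 = rng.random((32, 32))
frame2 = shift_image(frame1, 2, -1)   # tracer pattern advected by the flow
print(block_match(frame1, frame2))    # recovered displacement: (2, -1)
```

Divided by the time between frames, such displacements become velocity pseudo-observations that a standard variational scheme can assimilate.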

ADDISA is a project coordinated by F.-X. Le Dimet and supported for three years (2007-2009) by the ANR:
http://

F.-X. Le Dimet, A. Vidard, O. Titaud and I. Souopgui implemented a shallow-water model coupled with an advection-diffusion model, which simulates the drift of a vortex subjected to the Coriolis force. This simulation corresponds to experiments performed by LEGI with the Coriolis platform (see figures and ), where the motion of the vortex is tracked by means of a passive tracer (cf. ). D. Auroux works on the extraction of velocity fields from sequences of images, providing pseudo-observations of the fluid velocity .

In cooperation with the Institute of Oceanography (G. Korotaev, Ukrainian Academy of Sciences) and CLIME (I. Herlin, E. Huot, Rocquencourt).

From sea surface imagery, the surface current velocity at the mesoscale is extracted using optimal control methods. It is assumed that the image contrast can be described by a transport-diffusion equation. The method makes it possible to retrieve an initial field of passive tracer together with the surface current velocity from the sequence of images. Examples of processing of AVHRR observations and validations of the results have been carried out.

The West African monsoon is the major atmospheric phenomenon driving the rainfall regime in Western Africa . It is therefore the main phenomenon governing water resources over the African continent from the equatorial zone to the sub-Saharan one. Obviously, it has a major impact on agricultural activities and thus on the population itself. The causes of the inter-annual spatio-temporal variability of monsoon rainfall have not yet been univocally determined. Spatio-temporal changes in the sea surface temperature (SST) within the Gulf of Guinea and in the Saharan and sub-Saharan albedo are identified by a considerable body of evidence as major factors explaining it.

The aim of this study is to simulate the rainfall with a regional atmospheric model (RAM) and to analyze its sensitivity to the variability of these input parameters. Comparing the RAM precipitation with several precipitation data sets, we observe that the RAM simulates the West African monsoon reasonably well.

As mentioned in the previous paragraph, our main goal is to perform a sensitivity analysis for the West African monsoon. Each simulation of the regional atmospheric model (RAM) is time-consuming, so we first have to devise a simplified model. We deal here with spatio-temporal dynamics, for which we have to develop efficient functional statistical tools. In our context, indeed, both the inputs (albedo, SST) and the outputs (precipitation) are considered as time- and space-indexed stochastic processes. Conditionally on the space coordinates, we will perform a functional PCA (principal component analysis), that is, we will consider a Karhunen-Loève decomposition. The Karhunen-Loève representation of a stochastic process is based on the spectral decomposition of its covariance function. Such a representation requires solving an eigenvalue problem to determine the eigenfunctions and eigenvalues of the covariance function. When this problem cannot be solved analytically, the eigenfunctions are approximated numerically. Other orthogonal bases can be considered; orthogonality guarantees the uniqueness of the decomposition. The regression between inputs and outputs can then be approximated by a linear functional regression, extending existing results in . The spatial dependence observed in the data will be transported to the coefficients of the decompositions. In some cases, additional spatial smoothing, as well as local stationarity assumptions, will make it possible to aggregate information from nearby locations.
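The discrete version of this Karhunen-Loève construction is a plain eigendecomposition of the empirical covariance. The sketch below uses purely synthetic fields (not monsoon data; the two smooth modes and noise level are illustrative assumptions): a few eigenfunctions capture almost all the variance, and each realization is summarized by its KL coefficients.

```python
import numpy as np

# Synthetic "functional" sample: 50 realizations on a 100-point time grid,
# built from two smooth modes plus small noise.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
X = (rng.standard_normal((50, 1)) * np.sin(2 * np.pi * t)
     + 0.5 * rng.standard_normal((50, 1)) * np.cos(2 * np.pi * t)
     + 0.05 * rng.standard_normal((50, 100)))

Xc = X - X.mean(axis=0)                 # center the sample
C = Xc.T @ Xc / (len(X) - 1)            # empirical covariance "function"
eigval, eigvec = np.linalg.eigh(C)      # spectral decomposition
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]  # sort by decreasing variance

explained = eigval[:2].sum() / eigval.sum()
coeffs = Xc @ eigvec[:, :2]             # KL coefficients of each realization
print(explained)                        # two modes capture most of the variance
```

The functional regression between inputs and outputs then operates on these low-dimensional coefficient vectors rather than on the full fields.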

The study described above still requires rather large computing resources, and will be run in a grid computing environment that handles the scheduling of a huge number of computation requests and the links with data management, all of this as automatically as possible.

This work also involves partners from the INRIA project-team GRAAL for the computational approach, and from the Laboratory of Glaciology and Geophysical Environment (LGGE) for the use and interpretation of the regional atmospheric model (RAM).

Ocean forecasting systems require complex models, which sometimes need to be coupled, and which make use of data assimilation. The objective of this project is, for a given output of such a system, to identify the most influential parameters and to evaluate the effect of uncertainty in the input parameters on the model output. Existing stochastic tools are not well suited to high-dimensional problems (in particular time-dependent ones), while deterministic tools are fully applicable but only provide limited information. The challenge is thus to gather expertise on the numerical approximation and control of partial differential equations on the one hand, and on stochastic methods for sensitivity analysis on the other, in order to design innovative stochastic solutions for studying high-dimensional models and to propose new hybrid approaches combining stochastic and deterministic methods.

A first task was to review the literature on both deterministic and stochastic methods for sensitivity analysis, in order to clarify the main advantages and drawbacks of each tool. This task was initiated by Jean-Yves Tissot during his internship (4 months during spring 2009). Jean-Yves also implemented various methods on a linearized one-dimensional Burgers model. On this very simple model, he could confirm (or, in some cases, contradict) the conclusions of the literature review. However, other implementations have to be conducted, first on this model, but also on more general models such as shallow-water models, or on a much more complex system derived from the NEMO ocean model.
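Among the stochastic tools reviewed, first-order Sobol indices are a standard benchmark. The sketch below is a hedged toy version of a Monte Carlo "pick-freeze" estimator (the model and sample sizes are illustrative, standing in for the far costlier ocean-model studies): on a linear model where the exact indices are known, the estimator recovers them.

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    """Toy model: output = 3*x1 + x2; exact Sobol indices S1 = 0.9, S2 = 0.1."""
    return 3.0 * x[:, 0] + x[:, 1]

n = 200_000
A = rng.standard_normal((n, 2))   # two independent Monte Carlo samples
B = rng.standard_normal((n, 2))

def sobol_first(i):
    """Pick-freeze: re-sample all inputs except x_i and correlate outputs."""
    C = B.copy()
    C[:, i] = A[:, i]                 # freeze coordinate i
    yA, yC = model(A), model(C)
    return np.mean(yA * yC) - np.mean(yA) * np.mean(yC)

var = np.var(model(A))
s1, s2 = sobol_first(0) / var, sobol_first(1) / var
print(s1, s2)                         # close to 0.9 and 0.1
```

The cost (many model evaluations) is exactly what motivates the reduced models discussed next.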

Another approach we would like to focus on is model reduction. The aim is to reduce the number of unknown variables (to be computed by the model) using a well-chosen basis. Instead of discretizing the model over a huge grid (with millions of points), the state vector of the model is projected onto the subspace spanned by this basis (of far smaller dimension). The choice of the basis is of course crucial and determines the success or failure of the reduced model. Various model reduction methods offer various choices of basis functions. A well-known method is the “proper orthogonal decomposition" (or “principal component analysis"). More recent and sophisticated methods also exist and may be studied, depending on the needs raised by the theoretical study. Model reduction is a natural way to overcome the huge computational times due to discretizations on fine grids. A PhD student, Alexandre Janon, is currently working in this direction, first on very simple one-dimensional shallow-water models. The generalization to models derived from NEMO is an interesting but complex perspective, which should first be explored through implementations.
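A minimal POD sketch, under toy assumptions (a small linear system with built-in low-rank structure, not a shallow-water model), shows the mechanics: snapshots of the full model provide the basis, and the dynamics are projected onto it.

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 200, 5
# Stable linear dynamics dx/dt = A x whose trajectories live in an r-dim subspace.
U = np.linalg.qr(rng.standard_normal((n, r)))[0]
A = U @ np.diag(np.linspace(-2.0, -0.5, r)) @ U.T
x0 = U @ rng.standard_normal(r)

dt, steps = 0.05, 100
snaps = [x0]
for _ in range(steps):
    snaps.append(snaps[-1] + dt * A @ snaps[-1])   # full model (explicit Euler)
S = np.array(snaps).T                              # snapshot matrix, n x (steps+1)

Phi = np.linalg.svd(S, full_matrices=False)[0][:, :r]  # POD basis (r modes)
Ar = Phi.T @ A @ Phi                               # reduced operator, r x r
a = Phi.T @ x0
for _ in range(steps):
    a = a + dt * Ar @ a                            # reduced model integration

err = np.linalg.norm(Phi @ a - S[:, -1]) / np.linalg.norm(S[:, -1])
print(err)   # reduced trajectory matches the full one
```

Here the reduction is exact by construction; for a real non-linear model the truncated modes introduce an error that must be quantified, which connects this topic back to sensitivity analysis.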

Geophysical models basically suffer from two types of errors:

errors in the model itself, due to the approximation of physical processes and their subgrid parametrization, and also errors linked to the necessary numerical discretization;

errors in the observations, due to the measurements themselves and also to sampling. For instance, many remote sensors observe only radiances, which are transformed into state variables through complex processes such as the resolution of an inverse problem. This is, of course, a source of errors.

Estimating the propagation of errors is an important and costly (in terms of computing resources) task, for two reasons:

the quality of the forecast must be estimated;

the statistics of the errors have to be included in the analysis, in order to use an adequate norm, based on these statistics, for both the forecast and the observations.

In the variational framework, models, observations and statistics are linked in the optimality system, which can be considered as a “generalized" model containing all the available estimates. In , , error covariances are estimated both from the second-order analysis and from the Hessian of the cost function. Numerical experiments have been carried out on a non-linear 1-D model; we expect to extend them to a semi-operational model in cooperation with ECMWF.
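The link between the Hessian and the error covariances is easiest to see in the linear-Gaussian case, where it is exact. The sketch below (toy-sized operators, illustrative values for B and R) verifies numerically that the inverse Hessian of the variational cost equals the analysis-error covariance given by the equivalent Kalman-gain formula:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 5, 8
H = rng.standard_normal((m, n))         # observation operator
B = np.eye(n) * 2.0                     # background-error covariance
R = np.eye(m) * 0.5                     # observation-error covariance

# Cost: J(x) = 1/2 (x-xb)^T B^-1 (x-xb) + 1/2 (Hx-y)^T R^-1 (Hx-y)
hessian = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
A_cov = np.linalg.inv(hessian)          # analysis-error covariance

# Sanity check against the equivalent Kalman-gain formula A = (I - K H) B
K = A_cov @ H.T @ np.linalg.inv(R)
A_kalman = (np.eye(n) - K @ H) @ B
print(np.allclose(A_cov, A_kalman))     # True
```

For non-linear models the identity only holds approximately, which is precisely why the second-order analysis mentioned above is needed.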

In collaboration with C. Ritz (CNRS, Laboratoire de Glaciologie et Géophysique de l'Environnement, Grenoble), we aim to develop adjoint methods for ice cap models.

In the framework of global warming, the evolution of the sea level is a major but poorly known impact. It is difficult to validate the models used to predict the sea level rise, because observations are heterogeneous and sparse.

Data acquisition in polar glaciology is difficult and expensive. Satellite data have good spatial coverage, but they allow only indirect observation of the quantities of interest. We wish to make the most of all available data and to evaluate where, when and what new observations should be added. Sensitivity analysis, and in particular the adjoint method, makes it possible to identify the most influential parameters and variables and can help to design the observation network.

The objective of this work is to develop the adjoint code of the polar ice cap model (with code differentiation and transposition methods), in order to evaluate the sensitivity of the model to various parameters, such as the boundary conditions, ice rheology... This work has just been funded by the ANR-ADAGe project.

Dating the ice matrix and gas bubbles of ice cores is essential to study paleoclimates. Combining the information brought by observations and by flow models is now a commonly used approach to build the chronology of ice cores. Until now, this technique has been applied: 1) to one core at a time, 2) to estimate the age of the ice but not of the gas (which is younger), and 3) under the assumption that the glaciological models are perfect once their parameters have been optimized. This methodology faces three problems: 1) for distinct cores, the chronologies calculated separately usually show discrepancies; 2) the chronologies sometimes fail to respect relevant data constraints, precisely because the models are imperfect (poorly understood physical processes are omitted); and finally 3) the gas and ice ages are not independent entities, and some valuable observations contain information on both. To go beyond these restrictions, B. Lemieux-Dudon proposed in her PhD a new inverse approach that takes the modelling errors into account. It aims at identifying the accumulation rate, the total thinning function and the close-off depth in ice equivalent (i.e. the depth below the surface where the gas is trapped) that best agree with some prior guesses and with independent observations. This method operates on several cores simultaneously, by means of stratigraphic links relating the gas or ice phases of two cores. The Bayesian framework of this method also makes it possible to associate confidence intervals with the solution. This approach is applied to derive simultaneously a common age scale for the NorthGRIP core and for the two EPICA cores (DML and DC).

In collaboration with TOSCA (INRIA Sophia-Antipolis), LMD (Ecole Polytechnique) and CETE (Clermont-Ferrand), we investigate a new method for the numerical simulation of the wind at small scales. Using boundary data provided at large scales by the weather forecasting code MM5, we propose a Langevin model that governs the behavior of stochastic particles. The development of this model, called SDM (Stochastic Downscaling Method), is funded by ADEME (Agence de l'Environnement et de la Maîtrise de l'Énergie).
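The particle viewpoint behind such a Langevin model can be sketched with a simple Ornstein-Uhlenbeck velocity process. This is a hedged illustration only (the constant large-scale wind U, relaxation time tau and noise level sigma are hypothetical, not the SDM coefficients): each particle's velocity relaxes toward the large-scale value while a stochastic term maintains small-scale variability.

```python
import numpy as np

rng = np.random.default_rng(6)
n_particles, steps, dt = 5000, 200, 0.01
U, tau, sigma = 1.0, 0.5, 0.8           # large-scale wind, relaxation time, noise

x = np.zeros(n_particles)               # particle positions
u = np.zeros(n_particles)               # particle velocities
for _ in range(steps):
    # Euler-Maruyama step for du = -(u - U)/tau dt + sigma dW
    u += -(u - U) / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    x += u * dt

print(u.mean())   # mean velocity relaxes toward the large-scale value U
print(u.var())    # variance approaches the stationary value sigma^2 * tau / 2
```

In SDM the drift term is driven by the MM5 fields and the particles interact through the fluid equations, but the time-stepping of the stochastic system follows this pattern.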

After spending six months (January-June) in Sophia-Antipolis (EPI TOSCA), Claire Chauvin joined MOISE in July 2008. Together with Antoine Rousseau and co-authors, she obtained encouraging numerical results on SDM, which have been published in . Since then, a review paper has been submitted with co-authors from TOSCA, and new investigations have started in order to compare the numerical results provided by SDM with outputs obtained from large-eddy simulations (LES).

In collaboration with G. Lebeau (Université de Nice), we investigate a new method to solve the problem of the distributed control of the wave equation over a bounded domain. This problem was addressed by J.-L. Lions, and the HUM method is well known. Following new theoretical results by B. Dehman and G. Lebeau, we performed an experimental study of the properties of the HUM control operator.

A paper with new theoretical and experimental numerical results has been accepted for publication, see .

Ongoing in 2009:

A 2-year (2008-2009) contract with MERCATOR on the thematic "Variational data assimilation for OPA/NEMO".

A 2-year (2009-2010) contract with CNES on the thematic "development of the tangent and adjoint models for NEMO".

A 3-year contract with ADEME on the thematic "Stochastic Downscaling Method": see .

A 1-year contract with Alyotech on the thematic “Integration of the AGRIF software in the HYCOM ocean model".

F.-X. Le Dimet and É. Blayo are responsible for numerical modelling within a regional project (Région Rhône-Alpes) "**Envirhonalp**" (2005-2010). This project aims at gathering physicists, engineers and applied mathematicians to provide improved modelling and decision tools for environmental processes.

É. Blayo is a member of the scientific committee of the Pôle Grenoblois des Risques Naturels.

A project within the GRAVIT (Grenoble Alpes Valorisation Innovation Technologies) framework on "Numerical Computations on heterogeneous clusters and application to weather forecasting".

CERFACS .

LGGE Grenoble, Glaciology Department (C. Ritz, O. Gagliardini), see paragraph .

A. Rousseau participates in the GdR MOAD (Modelling, Asymptotics and nonlinear Dynamics), managed by S. Benzoni.

D. Auroux participates in the GdR MOMAS (Modélisations Mathématiques et Simulations numériques liées aux problèmes de gestion des déchets nucléaires).

É. Blayo, A. Rousseau, M. Nodet, F.-X. Le Dimet, J.-Y. Tissot and A. Janon are involved in the GdR Mascot Num, in which C. Prieur is a committee member.

M. Nodet is involved in the GdR Calcul.

F.-X. Le Dimet is in charge of the ADDISA project (
http://

D. Auroux is in charge of the project PROSSDAG (Probing new sequential schemes for retrospective data assimilation in geophysics) supported by ANR. This project started in November 2007 and will end in November 2010.

D. Auroux is involved in the “ADTAO" project (Assimilation de données dans le système Terre-Atmosphère-Océan), supported by the RTRA Fondation STAE (2009-2012).

M. Nodet and D. Auroux are involved in Jacques Blum's project "Un nouvel observateur: le back and forth nudging (BFN) - Études théoriques, numériques et applications" supported by INSU-LEFE.

D. Auroux is involved in Jacques Blum's project "Nudging direct et rétrograde pour l'assimilation de données océanographiques: application à un modèle aux équations primitives" supported by INSU-LEFE.

M. Nodet is in charge of a 2-year contract with LEFE-INSU on the thematic "Assimilation of lagrangian data in OPAVAR": see .

A 12-month contract with IFREMER on the thematic "Numerical Methods in Ocean Modelling".

A 3-year ANR contract: ANR MSDAG (Multiscale Data Assimilation in Geophysics), see paragraph .

A 4-year ANR contract: ANR ADAGe (Adjoint ice flow models for Data Assimilation in Glaciology), see paragraph .

A 4-year ANR contract: ANR Geo-FLUIDS (Fluid flows analysis and simulation from image sequences: application to the study of geophysical flows), see paragraph .

A 4-year ANR contract: ANR VODA (Variational Ocean Data Assimilation).

A. Vidard leads a group of projects gathering multiple partners in France and the UK on the topic "Variational Data Assimilation for the NEMO/OPA9 Ocean Model", see . This project is funded through two INSU-LEFE calls for proposals and a Mercator-Ocean one.

A. Vidard is the coordinator of the ANR VODA (Variational Ocean Data Assimilation for multi-scale applications) project.

É. Blayo is the chair of the CNRS research program LEFE-ASSIM on data assimilation.

F.-X. Le Dimet collaborates with I. Gejadze (Dept. of Civil Engineering, University of Strathclyde, Scotland) and V. Shutyaev (Institute of Numerical Mathematics, Russian Academy of Sciences) on the propagation and control of errors in data assimilation and on the evaluation of error covariances by deterministic methods.

A. Vidard collaborates with ECMWF (Reading, UK) on the development of a variational data assimilation system for the NEMO ocean model.

A. Vidard, F.-X. Le Dimet, I. Souopgui and O. Titaud are part of the ADAMS associated team and the ECO-NET project ADOMENO, both co-led by A. Vidard and E. Huot (CLIME project team). They gather scientists from INRIA (MOISE and CLIME), MHI (Sevastopol, Ukraine), INM (Moscow, Russia) and MNI (Tbilissi, Georgia) and aim at developing advanced data assimilation methods applied to the Black Sea.

M. Nodet is involved in the European GdR CONEDP (Control of Partial Differential Equations), see paragraph .

L. Debreu, F.-X. Le Dimet, A. Vidard, O. Titaud, E. Neveu and I. Souopgui are involved in the international project ADDISAAF (ADDISA for Africa), coordinated by E. Kamgnia (University of Yaoundé I) and I. Herlin (INRIA CLIME). The other partner is the Ecole Nationale d'Ingénieurs de Tunis, Tunisia.

There is also a strong cooperation on this theme with China (Institute of Atmospheric Physics of the Chinese Academy of Sciences) and Vietnam (Institute of Mathematics and Institute of Mechanics of the Vietnamese Academy of Sciences).

MOISE belongs to the SARIMA project for cooperation in Computer Science and Applied Mathematics between France and Africa. This project funds the PhD of Innocent Souopgui, which started in 2006.

D. Auroux is involved in an IFCPAR project (Indo-French Centre for the Promotion of Advanced Research), a collaboration with the Indian Institute of Science, Bangalore. This project deals with the control and forecast of systems of partial differential equations and started in October 2007.

A. Rousseau collaborates with Roger Temam (French Academy of Sciences and Indiana University) on the theoretical and numerical study of open boundary conditions for the primitive equations of the ocean: see . He spent four weeks at Indiana University, Bloomington (April 2009) at the invitation of Roger Temam.

M. Nodet and A. Vidard have just started a collaboration with K. Ide (University of Maryland, formerly UCLA) on Lagrangian and image data assimilation, see paragraphs and .

É. Blayo, L. Debreu and M. Nodet participate in the INRIA "Équipes associées" program, through the joint team NMOM (Numerical Methods for Ocean Modelling) with the Department of Atmospheric and Oceanic Sciences of UCLA (J. McWilliams) and CICESE, Ensenada (J. Scheinbaum).

É. Blayo and A. Vidard participate in the INRIA "Équipes associées" program, through the joint team ADAMS (Méthodes avancées d'assimilation des données de la Mer) with MHI, Ukraine (G. Korotaev), INM, Russia (V. Shutyaev) and the M. Nodia Geophysical Institute, Georgia (A. Kordzadze).

A. Rousseau is in charge of the EDP/MOISE weekly seminar, see
http://

A. Rousseau participated in a recruiting workshop at MIT with C. Puech and A. Theis-Viémont. The *European Career Fair* is an annual recruiting event, organized by the MIT European Club, that connects employers from Europe with talented candidates living in the US.

A. Rousseau was in charge of the evaluation of applications submitted in the Rhône-Alpes region for the national "Fête de la Science" 2009.

M. Nodet and Ch. Kazantsev are in charge of the MOISE fortnightly workshop, see
http://

M. Nodet was a member of the committee for the 2009 assistant professor recruitment campaign of Compiègne University.

Half of the team members are faculty and give lectures in the Master in Applied Mathematics of Joseph Fourier University and the Institut National Polytechnique de Grenoble (ENSIMAG). The non-faculty (INRIA/CNRS) members of the project-team also participate in teaching activities.

É. Blayo and A. Vidard gave, in collaboration with E. Cosme (LEGI), a one-week doctoral course on data assimilation (40 participants, January 2009).

F.-X. Le Dimet delivered a series of lectures on "Data Assimilation in Hydrology" at the Institute of Mechanics, at the invitation of the Vietnamese Academy of Sciences, May 2009.

F.-X. Le Dimet delivered a lecture on "Introduction to Data Assimilation" at the University of Yaoundé, March 2009.

F.-X. Le Dimet's invitations and seminars:

University of Yaoundé, Cameroon, Invited Professor (1 week)

Florida State University, July 2009, Invited Professor (2 weeks)

University of Wuhan (China), Invited Professor (1 week) September 2009.

Chinese University of Hong Kong, October 2009, Invited Professor (1 week)

Strathclyde University, Glasgow (UK), November 2009, Invited Professor (1 week)

F.-X. Le Dimet has been appointed "Courtesy Professor" at Florida State University.

The members of the team have participated in various conferences and workshops (see the bibliography).