Section: Research Program

Research directions

The project will develop along the following two axes:

  • dynamics of novel (unconventional) PDE models, accounting for uncertainty;

  • optimization and control algorithms for systems governed by PDEs,

which are common to the specific problems treated in the applications. In this section, we detail the methodological tools that we plan to develop in the framework of this project.

Dynamics of novel PDE models

Dynamical models consisting of evolutionary PDEs, mainly of hyperbolic type, appear classically in the applications studied by the previous Project-Team Opale (compressible flows, traffic, cell dynamics, medicine, etc.). Yet, the classical purely macroscopic approach is unable to account for certain phenomena related to specific interactions occurring at smaller scales. These phenomena can be of great importance when dealing with particular applications, for which the "first order" approximation given by the purely macroscopic approach proves inadequate. We refer for example to self-organizing phenomena observed in pedestrian flows [104], or to the dynamics of turbulent flows for which large scale / small scale vortical structures interfere [131].

Nevertheless, macroscopic models offer well-known advantages, namely a sound analytical framework, fast numerical schemes, a small number of parameters to calibrate, and efficient optimization procedures. Therefore, we are convinced of the interest of keeping this point of view as dominant, while completing the models with information on the dynamics at the small-scale / microscopic level. This can be achieved through several techniques, such as hybrid models, homogenization, and mean field games. In this project, we will focus on the aspects detailed below.

Micro-macro couplings

Modeling complex problems with a dominant macroscopic point of view often requires coupling with small-scale descriptions. Accounting for system heterogeneity or for different degrees of accuracy usually leads to coupled PDE-ODE systems.

In the case of heterogeneous problems, the coupling is "intrinsic", i.e. the two models evolve together and mutually affect each other. For example, accounting for the impact of a large and slow vehicle (like a bus or a truck) on traffic flow leads to a strongly coupled system consisting of a (system of) conservation law(s) coupled with an ODE describing the bus trajectory, which acts as a moving bottleneck. The coupling is realized through a local unilateral constraint on the flow at the bus location; see [78] for an existence result and [63], [77] for numerical schemes.
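As a crude illustration of such a PDE-ODE coupling, the following sketch couples a Godunov discretization of the LWR traffic model with an ODE for a slow vehicle, the flux at the vehicle's position being capped to mimic the unilateral constraint. All parameters are hypothetical, and the naive flux cap only stands in for the constrained Riemann solvers of the cited works.

```python
import numpy as np

# Sketch of a PDE-ODE coupling: LWR traffic flow (Godunov scheme) with a slow
# vehicle acting as a moving capacity restriction. Illustrative parameters;
# NOT the constrained Riemann solver of [78], [63], [77].

L, T = 10.0, 1.5                 # road length, final time
nx = 200
dx = L / nx
x = np.linspace(dx/2, L - dx/2, nx)
rho_max, v_max = 1.0, 1.0
V_bus, alpha = 0.3, 0.6          # bus speed, residual capacity fraction at the bus

f = lambda r: r * v_max * (1.0 - r / rho_max)   # LWR flux
v = lambda r: v_max * (1.0 - r / rho_max)       # velocity law

def godunov_flux(rl, rr):
    # exact Riemann flux for the concave LWR flux; critical density rho_max/2
    rc = 0.5 * rho_max
    if rl <= rr:
        return min(f(rl), f(rr))
    return f(rc) if (rl > rc > rr) else max(f(rl), f(rr))

rho = np.where(np.abs(x - 3.0) < 1.0, 0.8, 0.2)  # initial density
y = 5.0                                          # initial bus position
t, dt = 0.0, 0.4 * dx / v_max                    # CFL-stable time step

while t < T:
    F = np.zeros(nx + 1)
    for i in range(1, nx):
        F[i] = godunov_flux(rho[i-1], rho[i])
    F[0], F[-1] = f(rho[0]), f(rho[-1])          # free in/outflow boundaries
    j = min(max(int(round(y / dx)), 1), nx - 1)  # interface closest to the bus
    # crude moving-bottleneck constraint: cap the flux at the bus location
    F[j] = min(F[j], alpha * f(0.5 * rho_max) + V_bus * rho[j])
    rho = rho - dt / dx * (F[1:] - F[:-1])
    y += dt * min(V_bus, v(rho[min(int(y / dx), nx - 1)]))   # ODE for the bus
    t += dt
print("final bus position:", y)
```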

If the coupling is intended to offer a higher degree of accuracy at some locations, a macroscopic and a microscopic model are connected through an artificial boundary, and exchange information across it through suitable boundary conditions. See [69], [97] for some applications in traffic flow modelling.

We plan to pursue our activity in this framework, also extending the above-mentioned approaches to problems in two or higher space dimensions, to cover applications to crowd dynamics or fluid-structure interaction.

Non-local flows

Non-local interactions can be described through macroscopic models based on integro-differential equations. Systems of the type

$$\partial_t u + \operatorname{div}_{\mathbf{x}} F(t, \mathbf{x}, u, W) = 0, \qquad t > 0, \ \mathbf{x} \in \mathbb{R}^d, \ d \geq 1, \qquad (1)$$

where $u = u(t,\mathbf{x}) \in \mathbb{R}^N$, $N \geq 1$, is the vector of conserved quantities and the variable $W = W(t,\mathbf{x},u)$ depends on an integral evaluation of $u$, arise in a variety of physical applications. Space-integral terms are considered for example in models for granular flows [50], sedimentation [56], supply chains [99], conveyor belts [100], biological applications like structured population dynamics [121], or more general problems like gradient-constrained equations [51]. Also, non-local in time terms arise in conservation laws with memory, starting from [75]. In particular, equations with non-local flux have been recently introduced in traffic flow modeling to account for the reaction of drivers or pedestrians to the surrounding density of other individuals, see [33], [37], [62], [66], [134]. While pedestrians are likely to react to the presence of people all around them, drivers will mainly adapt their velocity to the downstream traffic, assigning a greater importance to closer vehicles. In particular, and in contrast to classical (without integral terms) macroscopic equations, these models are able to display the finite acceleration of vehicles through Lipschitz bounds on the mean velocity [33], [37] and lane formation in crossing pedestrian flows.

General analytical results on non-local conservation laws, proving existence and, in some cases, uniqueness of solutions of the Cauchy problem for (1), can be found in [52] for scalar equations in one space dimension ($N = d = 1$), in [67] for scalar equations in several space dimensions ($N = 1$, $d \geq 1$) and in [30], [68], [72] for multi-dimensional systems of conservation laws. Besides, specific finite volume numerical methods have been developed recently in [30], [37] and [109].

Relying on these encouraging results, we aim to push the analytical and numerical study of non-local models of type (1) a step further, in particular concerning the well-posedness of initial-boundary value problems, numerical schemes and micro-macro limits.
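To fix ideas, here is a minimal finite-volume (Lax-Friedrichs) sketch for a scalar non-local traffic model of type (1), in which the velocity depends on a downstream average of the density. The kernel and parameters are illustrative choices, not those of the cited works.

```python
import numpy as np

# Minimal sketch of a finite-volume (Lax-Friedrichs) scheme for a non-local
# traffic model: rho_t + ( rho * v(rho * w) )_x = 0, where the convolution
# rho * w averages the DOWNSTREAM density (drivers adapt to traffic ahead).
# Kernel and parameters are illustrative assumptions.

nx, L, T = 400, 1.0, 0.3
dx = L / nx
x = np.linspace(dx/2, L - dx/2, nx)
eta = 0.05                               # look-ahead horizon
nk = int(eta / dx)
w = np.linspace(1.0, 0.0, nk)            # decreasing look-ahead kernel
w /= w.sum() * dx                        # normalized to unit mass

v = lambda r: 1.0 - r                    # normalized velocity law
rho = 0.4 + 0.3 * np.exp(-200 * (x - 0.5)**2)

dt = 0.4 * dx                            # CFL-type restriction (|v| <= 1)
t = 0.0
while t < T:
    rho_ext = np.concatenate([rho, np.full(nk, rho[-1])])   # outflow ghost cells
    W = np.array([np.sum(rho_ext[i:i+nk] * w) * dx for i in range(nx)])
    flux = rho * v(W)
    # Lax-Friedrichs numerical flux at interior interfaces
    F = 0.5 * (flux[:-1] + flux[1:]) - 0.5 * dx / dt * (rho[1:] - rho[:-1])
    F = np.concatenate([[flux[0]], F, [flux[-1]]])
    rho = rho - dt / dx * (F[1:] - F[:-1])
    t += dt
print("total mass conserved up to boundaries:", rho.sum() * dx)
```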

Measure-valued solutions

The notion of measure-valued solutions for conservation laws was first introduced by DiPerna [80], and has been extensively used since then to prove convergence of approximate solutions and to deduce existence results, see for example [88] and references therein. Measure-valued functions have recently been advocated as the appropriate notion of solution to tackle problems for which analytical results (such as existence and uniqueness of weak solutions in the distributional sense) and numerical convergence are missing [54], [91]. We refer, for example, to the notion of solution for non-hyperbolic systems [36], for which no general theoretical result is available at present, and to the convergence of finite volume schemes for systems of hyperbolic conservation laws in several space dimensions, see [91].

In this framework, we plan to investigate and make use of measure-based PDE models for (vehicular and pedestrian) traffic flows. Indeed, a modeling approach based on (multi-scale) time-evolving measures (expressing the agents' probability distribution in space) has been recently introduced (see the monograph [74]), and has proved successful for studying emerging self-organised flow patterns [73]. As mentioned in Section 3.1.1.2, the theoretical measure framework also proves relevant in addressing micro-macro limiting procedures [45]. We recall that a key ingredient in this approach is the use of the Wasserstein distances [137], [138]. Indeed, as observed in [122], the usual $L^1$ spaces are not natural in this context, since they do not guarantee uniqueness of solutions.
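To illustrate the point about distances, the following sketch computes the 1-Wasserstein distance between two 1D densities via their cumulative distribution functions (an identity valid in one space dimension) and contrasts it with the $L^1$ distance, which saturates once two crowds no longer overlap. The "crowd" profiles are hypothetical.

```python
import numpy as np

# Sketch: 1-Wasserstein distance between two densities on a 1D grid, computed
# as the L1 distance of their cumulative distribution functions (valid in 1D).
# Compare with the L1 distance of the densities themselves, which ignores
# how far mass has to travel.

def wasserstein_1(rho1, rho2, dx):
    """W_1 of two nonnegative densities with equal total mass on a uniform grid."""
    cdf1 = np.cumsum(rho1) * dx
    cdf2 = np.cumsum(rho2) * dx
    return np.sum(np.abs(cdf1 - cdf2)) * dx

x = np.linspace(0.0, 10.0, 1000)
dx = x[1] - x[0]
bump = lambda c: np.exp(-(x - c)**2)      # a crowd "blob" centred at c

# the same crowd shifted by 1 vs. by 4: L1 saturates, W_1 keeps growing
for shift in (1.0, 4.0):
    d_L1 = np.sum(np.abs(bump(2.0) - bump(2.0 + shift))) * dx
    d_W1 = wasserstein_1(bump(2.0), bump(2.0 + shift), dx)
    print(f"shift={shift}: L1={d_L1:.3f}, W1={d_W1:.3f}")
```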

Mean-field games

Mean Field Games (MFG) is a branch of Dynamic Games which aims at deriving macroscopic decision processes involving a very large population of indistinguishable, interacting rational agents, who have limited information on the game and choose their optimal strategy depending on the available macroscopic information on the actions of the other players (see the seminal paper [112]).

Even though we have not actively contributed to this field so far, we believe this approach to be promising for some of the applications targeted by the team, such as traffic flow and cell dynamics. Our aim is to consider situations where the N agents interact in ways other than through non-linear dynamical stochastic processes. In cell dynamics, the interaction processes would be contact inhibition, linear elastic transmission in cell-cell interaction, normal stress transmission in cell-matrix interaction, etc.; in traffic modelling, the interaction occurs through the velocity field. The classical mean field game approach provides an approximation of the Nash, or rational expectation, equilibrium behaviour through solutions of a suitable Hamilton-Jacobi-Bellman PDE of the form

$$\partial_t \varphi(t,x) + H\bigl(t, x, \nabla\varphi(t,x), \rho\bigr) = 0, \qquad t > 0, \ x \in \mathbb{R}^d,$$

with, possibly, a diffusion term. To the HJB equation above, one must add the limit equation (e.g. Fokker-Planck) which describes the evolution of the agents' density ρ. The resulting system is fully coupled, because the Hamiltonian H depends on the macroscopic state variable ρ and, in turn, the viscosity solution φ affects the second equation, for example through the selection of the velocity field. For our applications, the agents are postulated to obey more or less complex PDEs, such as second-order elliptic equations (describing the elastic behavior of cells controlled by the distribution of their F-actin network) or conservation laws (accounting for mass conservation for pedestrians), and one must compare these popular models to those derived as limits of Nash equilibria of non-cooperative non-atomic anonymous games with a continuum of players (a.k.a. MFG), notably through numerical simulations [47].
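For reference, the coupled system has the following prototypical second-order form (sign and scaling conventions vary across the literature; the terminal-value HJB equation is solved backward in time, the Fokker-Planck equation forward):

```latex
% Prototypical second-order MFG system (conventions vary across references):
% backward HJB equation for the value function phi, forward Fokker-Planck
% equation for the agents' density rho, coupled through both equations.
\begin{aligned}
  -\partial_t \varphi - \nu \Delta \varphi
      + H\bigl(t, x, \nabla\varphi, \rho\bigr) &= 0,
      && \varphi(T, x) \text{ given (terminal cost)},\\
  \partial_t \rho - \nu \Delta \rho
      - \operatorname{div}\!\bigl(\rho \, \partial_p H(t, x, \nabla\varphi, \rho)\bigr) &= 0,
      && \rho(0, x) = \rho_0(x) \text{ given.}
\end{aligned}
```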

In the context of crowd dynamics, including the case of several populations with different targets, the mean field game approach has been adopted in [58], [59], [81], [111], under the assumption that the individual behavior evolves according to a stochastic process, which gives rise to parabolic equations that greatly simplify the analysis of the system. Besides, a deterministic context is studied in [126], which considers a non-local velocity field.

For cell dynamics, in order to take into account the fast processes that occur in the migration-related machinery, a framework such as the one developed in [76] to handle games "where agents evolve their strategies according to the best-reply scheme on a much faster time scale than their social configuration variables" may turn out to be suitable.

Even though the proposed MFG models appear promising and encourage the application of mean field games theory to crowd motion, traffic flows, and cell dynamics, this research topic is still at an early stage and much remains to be done. From the analytical point of view, while existence for the mean field game equations is fairly well established, uniqueness is still an issue. Moreover, a qualitative study of solutions with analytical tools is completely missing, and interesting phenomena (segregation between populations, lane formation, coordinated actin networks) are observed in numerical or biological experiments only. Another challenge is represented by the handling of bounded domains. Finally, to our knowledge, MFG have been applied neither to vehicular traffic nor to cell dynamics. An attempt in this direction could be to consider a mean field system on a graph [61].

Uncertainty in parameters and initial-boundary data

Different sources of uncertainty can be identified in PDE models, related to the fact that the problem of interest is not perfectly known. First, initial and boundary condition values can be subject to uncertainty. For instance, in traffic flows, the time-dependent values of inlet and outlet fluxes, as well as the initial distribution of vehicle density, are not perfectly determined [60]. In aerodynamics, inflow conditions like velocity, temperature and pressure are subject to fluctuations depending on atmospheric conditions [102], [120]. For some engineering problems, the geometry of the boundary can also be uncertain, due to structural deformation, mechanical wear or the disregard of some detail [82]. Another source of uncertainty is related to the value of some parameters in the PDE models. This is typically the case of parameters in turbulence models in fluid mechanics, which have been calibrated according to some reference flows but are not universal [132], [136], or in traffic flow models, which may depend on the type of road, weather conditions, or even the country of interest (due to differences in driving rules and drivers' behaviour). This leads to equations with flux functions depending on random parameters [133], [135], for which the mean and the variance of the solutions can be computed using different techniques.

Indeed, uncertainty quantification for systems governed by PDEs has become a very active research topic in recent years. Most approaches are embedded in a probabilistic framework and aim at quantifying the statistical moments of the PDE solutions, under the assumption that the characteristics of the uncertain parameters are known. Note that classical Monte-Carlo approaches exhibit a low convergence rate, so that accurate simulations require prohibitive computational times; in this respect, some enhanced algorithms have been proposed, for example in the balance law framework [117]. Other approaches modify the PDE solvers to account for this probabilistic context, for instance by expanding the non-deterministic part of the solution on an orthogonal basis (Polynomial Chaos decomposition) and using a Galerkin projection [102], [108], [114], [140] or an entropy closure method [79], or by discretizing the probability space and extending the numerical schemes [46]. Alternatively, some other approaches maintain a fully deterministic PDE resolution, but approximate the solution in the vicinity of the reference parameter values by Taylor series expansions based on first- or second-order sensitivities [127], [136], [139].
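As a baseline illustration, the following sketch propagates a uniformly distributed uncertain maximal speed through an LWR traffic solver by plain Monte-Carlo sampling, estimating the pointwise mean and standard deviation of the density. The distribution and sample count are illustrative, and this is precisely the brute-force approach that the methods above seek to improve upon.

```python
import numpy as np

# Sketch: plain Monte-Carlo uncertainty propagation through a conservation-law
# solver, here LWR traffic flow with an uncertain maximal speed v_max.
# Illustrative distribution and sample count; real studies need variance
# reduction or the intrusive approaches (polynomial chaos, etc.) cited above.

def solve_lwr(v_max, nx=200, T=0.5):
    """Local Lax-Friedrichs (Rusanov) solve of rho_t + (v_max rho(1-rho))_x = 0."""
    dx = 1.0 / nx
    x = np.linspace(dx/2, 1 - dx/2, nx)
    rho = np.where(x < 0.5, 0.8, 0.2)               # Riemann-type initial data
    f = lambda r: v_max * r * (1 - r)
    t, dt = 0.0, 0.4 * dx / v_max
    while t < T:
        a = v_max                                   # bound on |f'(rho)|
        fl, fr = f(rho[:-1]), f(rho[1:])
        F = 0.5 * (fl + fr) - 0.5 * a * (rho[1:] - rho[:-1])   # Rusanov flux
        rho[1:-1] -= dt / dx * (F[1:] - F[:-1])     # boundary cells held fixed
        t += dt
    return rho

rng = np.random.default_rng(0)
samples = np.array([solve_lwr(rng.uniform(0.8, 1.2)) for _ in range(200)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
print("max pointwise std of the density:", std.max())
```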

Our objective regarding this topic is twofold. From a pure modeling perspective, we aim at including uncertainty quantification in model calibration and validation for predictive use. In this case, the choice of techniques will depend on the specific problem considered [55]. Besides, we intend to continue our developments based on sensitivity analysis, by extending previous works [82] to more complex and more demanding problems. Concerning aerodynamic applications, unsteady flows will be considered. Moreover, we plan to improve the capture of shock displacements due to uncertainties. This difficult topic requires a modification of the sensitivity equation method to account for the Dirac distributions arising from discontinuities in the solution. A second targeted topic is the study of the uncertainty related to turbulence closure parameters, in the context of detached flows for which a change of flow topology may arise. Our ambition is to contribute to the emergence of a new generation of simulation tools, which will provide solution densities rather than point values, to tackle real-life uncertain problems.
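A minimal sketch of the deterministic, sensitivity-based alternative mentioned above follows (reusing solve_lwr from the Monte-Carlo sketch): the solution near the reference parameter is approximated by a first-order Taylor expansion, with the sensitivity obtained here by finite differences rather than by differentiating the PDE as the sensitivity equation method does.

```python
# Sketch of the deterministic alternative: approximate the solution near a
# reference parameter by a first-order Taylor expansion. The sensitivity
# d rho / d v_max is obtained by finite differences for brevity; the
# sensitivity-equation method differentiates the PDE itself instead.
v0, dv = 1.0, 1e-4
rho0 = solve_lwr(v0)
drho = (solve_lwr(v0 + dv) - rho0) / dv          # sensitivity approximation
rho_taylor = lambda vm: rho0 + (vm - v0) * drho  # cheap surrogate near v0
err = np.abs(rho_taylor(1.1) - solve_lwr(1.1)).max()
print("max error of the first-order surrogate at v_max=1.1:", err)
```

Note that, as the text anticipates, this surrogate degrades near shocks, whose position itself moves with the parameter; this is exactly the Dirac-distribution issue mentioned above.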

Optimization and control algorithms for systems governed by PDEs

The focus here is on the methodological development and analysis of optimization algorithms and paradigms for PDE systems in general, keeping in mind our privileged domains of application in the way the problems are mathematically formulated.

Robust design and control

Most physical phenomena are subject to uncertainties, arising from random variations of physical parameters and affecting the design performance. Therefore, a major challenge in design optimization is the capability to account for various forms of uncertainty and for a large number of uncertain parameters. Most approaches found in the literature rely on a statistical framework, solving optimization problems based on statistical quantities (expectation, variance, value-at-risk, etc.) as criteria [106], [113], [118], [127]. These approaches clearly outclass multi-point approaches, long used in industry, which simply aggregate objectives computed for different parameter values [119], [144].

We intend to develop and test a new methodology for robust design that will include uncertainty effects. More precisely, we propose to employ the Multiple-Gradient Descent Algorithm (MGDA) (see Section 3.1.2) to achieve an effective improvement of all criteria simultaneously, whether they are of statistical nature or discrete functional values evaluated within confidence intervals of the parameters. Some recent results obtained at ONERA [124] with a stochastic variant of our methodology confirm the viability of the approach. A PhD thesis has also been launched at ONERA/DADS.

Hierarchical methods

Solving optimization problems for real-life applications requires more sophisticated strategies than simply coupling an optimizer with a PDE solver. The analysis of industrial problems is more and more based on a set of simulation tools of variable fidelity and computational cost, possibly coupled and used in conjunction with experimental data. Conducting a rigorous optimization task in this context is still a challenge.

Therefore, we intend to continue our developments of hierarchical strategies, based either on a hierarchy of models or on a hierarchy of design variables. The coordination, within an optimization loop, of a high-fidelity (phenomenological) model with a low-fidelity (statistical) model [110], associated with a hierarchical parameter basis [115] (e.g. through a CAD representation of the geometry), has proved very effective during the former Opale activity [85].

More precisely, we propose to develop a methodology based on Gaussian Process (meta-)models to account for multi-fidelity evaluations, errors (spatial discretization, temporal integration, etc.) and uncertainties of numerical solvers [93], [123], possibly including experimental data. Besides, the study of the recently proposed isogeometric analysis paradigm [71], based on the integration of geometry and analysis using a unique hierarchical representation, will be pursued in collaboration with the AROMATH team [141].
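As a toy illustration of the multi-fidelity idea, the following sketch corrects a cheap low-fidelity model with a Gaussian-process regression of a few high-fidelity residuals. The kernel, hyper-parameters and additive-correction structure are illustrative assumptions, not the formulation of [93], [123].

```python
import numpy as np

# Sketch of a multi-fidelity meta-model: a cheap low-fidelity model corrected
# by a Gaussian-process regression of the (few) high-fidelity residuals.
# Kernel, hyper-parameters and the additive structure are illustrative.

def gp_fit_predict(X, y, Xs, ell=0.3, sig=1.0, noise=1e-8):
    """Posterior mean of a GP with squared-exponential kernel."""
    k = lambda a, b: sig**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell)**2)
    K = k(X, X) + noise * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

f_hi = lambda x: np.sin(8 * x) + x          # expensive "truth" (hypothetical)
f_lo = lambda x: np.sin(8 * x + 0.3)        # cheap, biased low-fidelity model

X_hi = np.linspace(0.0, 1.0, 6)             # only a few high-fidelity runs
resid = f_hi(X_hi) - f_lo(X_hi)             # what the GP has to learn
xs = np.linspace(0.0, 1.0, 200)
prediction = f_lo(xs) + gp_fit_predict(X_hi, resid, xs)
print("max error of the corrected model:", np.abs(prediction - f_hi(xs)).max())
```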

Multi-objective descent algorithms for multi-disciplinary, multi-point, unsteady optimization or robust-design

In differentiable optimization, multi-disciplinary, multi-point, unsteady optimization and robust design can all be formulated as multi-objective optimization problems. In this area, we have proposed the Multiple-Gradient Descent Algorithm (MGDA) to handle all criteria concurrently [83], [84]. Originally, we stated a principle according to which, given a family of local gradients, a descent direction common to all the considered objective functions simultaneously can be identified, assuming the Pareto-stationarity condition is not satisfied. When the family is linearly independent, a direct algorithm is available. Conversely, when the family is linearly dependent, a quadratic-programming problem must be solved. Hence, the technical difficulty is mostly conditioned by the number m of objective functions relative to the dimension n of the search space. In this respect, the basic algorithm has recently been revised [86] to handle the case where m > n, and even m ≫ n, and is currently being tested on a test case of robust design subject to a periodic time-dependent Navier-Stokes flow.
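The core MGDA step can be sketched as follows: the common descent direction is minus the minimal-norm element of the convex hull of the gradients, obtained here by a generic quadratic-programming solve. The cited works exploit the structure of the problem, in particular when m > n, far more efficiently.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the core MGDA step: the common descent direction is (minus) the
# minimal-norm element of the convex hull of the objectives' gradients.
# Generic QP solve for clarity; not the revised algorithm of the cited works.

def mgda_direction(grads):
    """grads: (m, n) array of gradients; returns a common descent direction."""
    m = grads.shape[0]
    G = grads @ grads.T                             # Gram matrix
    obj = lambda a: a @ G @ a                       # squared norm of sum a_i g_i
    cons = [{'type': 'eq', 'fun': lambda a: a.sum() - 1.0}]
    res = minimize(obj, np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=cons)                # simplex-constrained QP
    d = -res.x @ grads                              # minus minimal-norm element
    return d if obj(res.x) > 1e-12 else np.zeros_like(d)   # Pareto-stationary

# two conflicting objectives on R^2: d is a descent direction for both
g1 = np.array([1.0, 0.2])
g2 = np.array([0.1, 1.0])
d = mgda_direction(np.vstack([g1, g2]))
print("direction:", d, " g1.d =", g1 @ d, " g2.d =", g2 @ d)
```

By the projection property of the minimal-norm element ω of the convex hull, every gradient satisfies g_i · ω ≥ ‖ω‖², so d = −ω is a descent direction for all objectives at once, as the printed inner products confirm.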

The multi-point situation is very similar and, being of great importance for engineering applications, will be treated extensively.

Lastly, we note that in situations where gradients are difficult to evaluate, the method can be assisted by a meta-model [143].

Constraint elimination in Quasi-Newton methods

In single-objective differentiable optimization, Newton's method requires the specification of both the gradient and the Hessian. As a result, convergence is quadratic, and Newton's method is often considered the target reference. However, in applications to distributed systems, the functions to be minimized are usually 'functionals', and even through a parameterization by a finite set of parameters, as may be the case in shape optimization, these functionals depend on the optimization variables via the solution of an often complex set of PDEs, through a chain of computational procedures. Various approaches exist to calculate the gradients: (i) by the simultaneous simulation of an adjoint system, which is linear, but equal or superior in complexity to the original system, and ill-conditioned; (ii) by automatic differentiation (e.g. by Tapenade); (iii) sometimes by carefully-calibrated finite differences. Calculating second derivatives is even more difficult. For example, in automatic differentiation, the gradient is often calculated in the reverse mode, and then, for each "directional second derivative", the direct mode is applied to it. For n variables, there are a total of n(n+1)/2 second derivatives. Hence, the exact calculation of the full Hessian is a complex and costly computational endeavor, often making Newton's method a chimera.

This has fostered the development of quasi-Newton methods, which mimic Newton's method but use only the gradient, the Hessian being iteratively constructed by successive approximations inside the algorithm itself. Among such methods, the Broyden-Fletcher-Goldfarb-Shanno algorithm (BFGS) is well known and commonly employed. In this method, the Hessian approximation is corrected at each new iteration by rank-one matrices defined from evaluations of the gradient only. The BFGS method has super-linear convergence. When the minimized function is quadratic in n variables, full convergence is achieved in n+1 iterations, provided the one-dimensional minimizations are carried out to full convergence, that is, exactly. For general smooth functions, after a first phase of Hessian build-up, abrupt convergence is observed.
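For concreteness, here is a self-contained sketch of the BFGS recursion on the Rosenbrock function, with the inverse-Hessian approximation updated from gradient differences only and a simple Armijo backtracking standing in for an exact line search.

```python
import numpy as np

# Sketch of the BFGS update: the inverse-Hessian approximation H is corrected
# at each iteration from gradient differences only (no second derivatives).
# Armijo backtracking for brevity; production codes use a Wolfe line search.

def f(x):  return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2     # Rosenbrock
def df(x): return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                            200 * (x[1] - x[0]**2)])

x = np.array([-1.2, 1.0])
H = np.eye(2)                        # initial inverse-Hessian guess
g = df(x)
for it in range(200):
    d = -H @ g                       # quasi-Newton direction
    t = 1.0
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
        t *= 0.5                     # Armijo backtracking
    s = t * d
    x_new = x + s
    g_new = df(x_new)
    y = g_new - g
    if s @ y > 1e-12:                # curvature condition keeps H positive-definite
        r = 1.0 / (s @ y)
        I = np.eye(2)
        H = (I - r * np.outer(s, y)) @ H @ (I - r * np.outer(y, s)) \
            + r * np.outer(s, s)     # BFGS inverse-Hessian update
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break
print(it, x, f(x))
```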

However, another difficulty arises in the application of the BFGS algorithm. In its standard formulation, it is devised for unconstrained optimization, while practical problems originating from engineering or the sciences involve constraints, sometimes in large numbers. Some authors have developed so-called "Riemannian BFGS" methods, e.g. [129], that retain the desirable convergence property in constrained problems. However, in this approach, the constraints are assumed to be known formally, by explicit expressions. Again, such a restriction is not practical in the case of functionals.

Our research is carried out in collaboration with ONERA-Meudon. We are exploring the possibility of representing constraints, in successive iterations, through local approximations of the constraint surfaces, splitting the design space locally into tangent and normal sub-spaces, eliminating the normal coordinates through a linearization, or more generally a finite expansion, and applying the BFGS method through dependencies on the coordinates in the tangent subspace only. Preliminary experiments on the difficult Rosenbrock test case, although in low dimensions, demonstrate the feasibility of this approach. Ongoing research focuses on the theoretical grounding of this method and on testing higher-dimensional cases.
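The following toy sketch illustrates the tangent/normal splitting on the Rosenbrock function restricted to a hypothetical explicit constraint (the unit circle), with plain tangent descent standing in for BFGS. In the actual research the constraint surface is only known through local approximations, which is precisely the difficulty this toy version sidesteps.

```python
import numpy as np

# Toy sketch of the constraint-elimination idea: split R^n locally into the
# tangent and normal subspaces of the constraint surface, descend in tangent
# coordinates only, then restore feasibility along the normal. The explicit
# unit-circle constraint is a hypothetical stand-in.

def f(x):  return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2   # Rosenbrock
def df(x): return np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                            200 * (x[1] - x[0]**2)])
def c(x):  return x[0]**2 + x[1]**2 - 1.0                     # constraint c(x) = 0
def dc(x): return 2.0 * x                                     # its gradient

def restore(z):
    """Newton correction along the normal, back onto the surface c = 0."""
    for _ in range(10):
        g = dc(z)
        z = z - c(z) / (g @ g) * g
    return z

x = restore(np.array([0.0, 1.0]))
for it in range(1000):
    n = dc(x) / np.linalg.norm(dc(x))        # unit normal
    T = np.array([-n[1], n[0]])              # tangent basis (n = 2 here)
    gt = T @ df(x)                           # reduced gradient in tangent coords
    if abs(gt) < 1e-8:                       # stationary on the constraint surface
        break
    step, d = 1.0, -gt * T
    while f(restore(x + step * d)) > f(x) - 1e-4 * step * gt**2 and step > 1e-12:
        step *= 0.5                          # Armijo backtracking
    x = restore(x + step * d)
print(it, x, f(x), "c(x) =", c(x))
```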

Decentralized strategies towards efficient thermoelastography

Thermoelastography is an innovative non-invasive control technology, which has numerous advantages over other techniques, notably in medical imaging [116]. Indeed, it shows more robustness than thermography alone, a technology promoted e.g. for the detection and monitoring of breast cancer even though it has not proved reliably effective, and many national health organizations (http://cancerscreening.gov.au/internet/screening/publishing.nsf/Content/br-policy-thermography) do not advise it as a substitute for e.g. mammography. It is well known that most pathological changes are associated with changes in tissue stiffness, while remaining isoechoic and hence difficult to detect by ultrasound techniques. As palpation is an effective method for lesion detection, popular among clinicians, thermoelastography turns out to have a high efficiency potential. Based on elastic waves and heat flux reconstruction, thermoelastography leaves no destructive or aggressive medical sequelae, unlike X-ray and comparable techniques, making it a prominent choice for patients.

Physical principles of thermoelastography originally rely on dynamical structural responses of tissues, but in a simplified first approach, we only consider static responses of linear elastic structures.

The mathematical formulation of the thermoelasticity reconstruction is based on data completion and material identification, making it a severely ill-posed inverse problem. We have demonstrated in previous works [22], [23] that Nash game approaches are efficient in tackling ill-posedness.
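The following abstract sketch shows the alternating best-response iteration underlying such Nash game formulations, with quadratic toy costs standing in for the actual misfit functionals of [22], [23]: each player minimizes its own cost in closed form while the other's strategy is frozen.

```python
import numpy as np

# Abstract sketch of the alternating best-response (Nash) iteration behind the
# data-completion approach: player 1 controls the missing Dirichlet data u,
# player 2 the missing Neumann data v; each minimizes its own misfit while the
# other's strategy is frozen. Quadratic toy costs stand in for the actual
# Laplace/elasticity misfit functionals of the cited works.

rng = np.random.default_rng(1)
n = 20
A1, A2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

# J1(u; v) = |A1 u + v - b1|^2 + |u|^2 ;  J2(v; u) = |A2 v + u - b2|^2 + |v|^2
u, v = np.zeros(n), np.zeros(n)
for k in range(100):
    # best response of player 1: closed-form minimizer of J1 in u, v frozen
    u = np.linalg.solve(A1.T @ A1 + np.eye(n), A1.T @ (b1 - v))
    # best response of player 2: closed-form minimizer of J2 in v, u frozen
    v = np.linalg.solve(A2.T @ A2 + np.eye(n), A2.T @ (b2 - u))
print("residuals:", np.linalg.norm(A1 @ u + v - b1),
      np.linalg.norm(A2 @ v + u - b2))
```

Here the Tikhonov-like terms |u|² and |v|² make each best response a contraction, so the iteration converges to the Nash equilibrium; in the data-completion setting this regularizing interplay between the two players is what tames the ill-posedness.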

With our collaborators M. Kallel and R. Chamekh (PhD student) from the Université de Tunis, we intend to extend the results obtained for the Laplace equation in [22] to the following problems, of increasing difficulty:

  • Simultaneous data and parameter recovery in linear elasticity, using the so-called Kohn-Vogelius functional (ongoing work; some promising results already obtained).

  • Data recovery in coupled heat-thermoelasticity systems.

  • Data recovery in linear thermoelasticity, where the imposed heat flux is stochastic.

  • Data recovery in coupled heat-thermoelasticity systems under stochastic heat flux, formulated as an incomplete information Nash game.

  • Application to robust identification of cracks.