Section: Research Program
Research directions
The project develops along the following two axes:

modeling complex systems through novel (unconventional) PDE systems, accounting for multiscale phenomena and uncertainty;

optimization and optimal control algorithms for systems governed by the above PDE systems.
These themes are motivated by the specific problems treated in the applications, and represent important and up-to-date issues in engineering sciences. For example, improving the design of transportation means and civil buildings, and the control of traffic flows, would result not only in better performance of the objects of the optimization strategy (vehicles, buildings, or the level of service of road networks), but also in enhanced safety and lower energy consumption, contributing to reducing costs and pollutant emissions.
PDE models accounting for multiscale phenomena and uncertainties
Dynamical models consisting of evolutionary PDEs, mainly of hyperbolic type, appear classically in the applications studied by the previous Project-Team Opale (compressible flows, traffic, cell dynamics, medicine, etc.). Yet, the classical purely macroscopic approach is not able to account for some particular phenomena related to specific interactions occurring at smaller scales. These phenomena can be of greater importance when dealing with particular applications, for which the "first order" approximation given by the purely macroscopic approach proves inadequate. We refer for example to self-organizing phenomena observed in pedestrian flows [95], or to the dynamics of turbulent flows in which large-scale and small-scale vortical structures interfere [123].
Nevertheless, macroscopic models offer well-known advantages, namely a sound analytical framework, fast numerical schemes, a low number of parameters to be calibrated, and efficient optimization procedures. Therefore, we are convinced of the interest of keeping this point of view as dominant, while completing the models with information on the dynamics at the small, microscopic scale. This can be achieved through several techniques, such as hybrid models, homogenization, and mean-field games. In this project, we will focus on the aspects detailed below.
The development of adapted and efficient numerical schemes is a necessary complement, and sometimes an ingredient, of all the approaches listed below. The numerical schemes developed by the team are based on finite volume or finite element techniques, and constitute an important tool in the study of the considered models, providing a necessary step towards the design and implementation of the corresponding optimization algorithms, see Section 3.1.2.
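As a minimal illustration of such schemes (a self-contained sketch under simplifying assumptions, not the team's production code; the function name and the Greenshields flux are our illustrative choices), a Lax-Friedrichs finite-volume discretization of the scalar LWR traffic model reads:

```python
import numpy as np

def lax_friedrichs_lwr(rho0, dx, t_end, cfl=0.9):
    """Lax-Friedrichs finite-volume scheme for the LWR model
    rho_t + (rho (1 - rho))_x = 0 on a periodic road."""
    f = lambda r: r * (1.0 - r)          # Greenshields flux (assumption)
    rho, t = rho0.copy(), 0.0
    while t < t_end:
        dt = min(cfl * dx, t_end - t)    # |f'(rho)| <= 1 on [0, 1]
        rp, rm = np.roll(rho, -1), np.roll(rho, 1)
        rho = 0.5 * (rp + rm) - 0.5 * dt / dx * (f(rp) - f(rm))
        t += dt
    return rho

# usage: a jam (high density) upstream of a free-flow region
x = np.linspace(0.0, 1.0, 200, endpoint=False)
rho0 = np.where(x < 0.5, 0.8, 0.2)
rho = lax_friedrichs_lwr(rho0, dx=x[1] - x[0], t_end=0.2)
```

Being monotone under the CFL condition, the scheme preserves the bounds $0\le \rho \le 1$ and conserves the total mass, at the price of some numerical diffusion.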
Micro-macro couplings
Modeling of complex problems with a dominant macroscopic point of view often requires couplings with small-scale descriptions. Accounting for the heterogeneity of systems, or for different degrees of accuracy, usually leads to coupled PDE-ODE systems.
In the case of heterogeneous problems the coupling is "intrinsic", i.e. the two models evolve together and mutually affect each other. For example, accounting for the impact of a large and slow vehicle (like a bus or a truck) on traffic flow leads to a strongly coupled system consisting of a (system of) conservation law(s) coupled with an ODE describing the bus trajectory, which acts as a moving bottleneck. The coupling is realized through a local unilateral moving constraint on the flow at the bus location, see [64] for an existence result and [49], [63] for numerical schemes.
If the coupling is intended to offer a higher degree of accuracy at some locations, a macroscopic and a microscopic model are connected through an artificial boundary, and exchange information across it through suitable boundary conditions. See [55], [84] for some applications in traffic flow modelling, and [74], [79], [81] for applications to cell dynamics.
The corresponding numerical schemes are usually based on classical finite volume or finite element methods for the PDE, and Euler or Runge-Kutta schemes for the ODE, coupled in order to take into account the interaction fronts. In particular, the dynamics of the coupling boundaries require accurate handling, capturing the possible presence of non-classical shocks and preventing numerical diffusion, which could produce wrong solutions, see for example [49], [63].
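The ODE side of such a PDE-ODE coupling can be sketched in isolation (a deliberately one-way coupling: the bus adapts to a given traffic density, while the flux constraint it imposes on the PDE, i.e. the moving bottleneck itself, is omitted; all names and values are illustrative):

```python
import numpy as np

def bus_trajectory(rho_of, y0, v_bus, t_end, dt=1e-3):
    """Explicit Euler for the bus ODE  y'(t) = min(v_bus, v(rho(t, y))):
    the bus drives at its own maximal speed v_bus unless the surrounding
    traffic, with speed v(rho) = 1 - rho, is slower.  One-way coupling
    only: the flux constraint the bus imposes on the PDE is omitted."""
    v = lambda r: 1.0 - r
    t, y, traj = 0.0, y0, [(0.0, y0)]
    while t < t_end:
        y += dt * min(v_bus, v(rho_of(t, y)))
        t += dt
        traj.append((t, y))
    return np.array(traj)

# usage: uniform density 0.5 ahead, so traffic speed 0.5 < v_bus = 0.8
traj = bus_trajectory(lambda t, x: 0.5, y0=0.0, v_bus=0.8, t_end=1.0)
```

In the fully coupled schemes of [49], [63], this time step is interleaved with the finite-volume update of the density, and the flux at the bus position is constrained from above.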
We plan to pursue our activity in this framework, also extending the above-mentioned approaches to problems in two or more space dimensions, to cover applications to crowd dynamics or fluid-structure interaction.
Micro-macro limits
Rigorous derivation of macroscopic models from microscopic ones offers a sound basis for the proposed modeling approach, and can provide alternative numerical schemes; see for example [56], [66] for the derivation of the Lighthill-Whitham-Richards [107], [122] traffic flow model from the Follow-the-Leader model, and [75] for results on crowd motion models (see also [97]). To tackle this aspect, we will rely mainly on two (interconnected) concepts: measure-valued solutions and mean-field limits.
The notion of measure-valued solutions for conservation laws was first introduced by DiPerna [67], and has been extensively used since then to prove convergence of approximate solutions and deduce existence results, see for example [76] and references therein. Measure-valued functions have recently been advocated as the appropriate notion of solution to tackle problems for which analytical results (such as existence and uniqueness of weak solutions in the distributional sense) and numerical convergence are missing [38], [78]. We refer, for example, to the notion of solution for non-hyperbolic systems [86], for which no general theoretical result is available at present, and to the convergence of finite volume schemes for systems of hyperbolic conservation laws in several space dimensions, see [78].
In this framework, we plan to investigate and make use of measure-based PDE models for vehicular and pedestrian traffic flows. Indeed, a modeling approach based on (multiscale) time-evolving measures (expressing the agents' probability distribution in space) has recently been introduced (see the monograph [60]), and has proved successful for studying emerging self-organised flow patterns [59]. The theoretical measure framework also proves relevant in addressing micro-macro limiting procedures of mean-field type [87], where one lets the number of agents go to infinity, while keeping the total mass constant. In this case, one must prove that the empirical measure, corresponding to the sum of Dirac measures concentrated at the agents' positions, converges to a measure-valued solution of the corresponding macroscopic evolution equation. We recall that a key ingredient in this approach is the use of the Wasserstein distances [130], [131]. Indeed, as observed in [114], the usual ${L}^{1}$ spaces are not natural in this context, since they do not guarantee uniqueness of solutions.
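The distance computation involved in such limits can be sketched in one space dimension, where the optimal transport plan between two empirical measures of equal size is simply the monotone matching of sorted samples (the function name and the uniform test case are our illustrative choices):

```python
import numpy as np

def wasserstein1_empirical(xs, ys):
    """W1 distance between the empirical measures (1/N) sum_i delta_{x_i}
    and (1/N) sum_i delta_{y_i} on the real line: in 1-D the optimal
    transport sorts both samples and matches them monotonically."""
    xs, ys = np.sort(xs), np.sort(ys)
    return np.mean(np.abs(xs - ys))

# usage: the empirical measure of N uniform samples approaches the
# uniform limit (represented here by its N-point quantile discretization)
rng = np.random.default_rng(0)
N = 10_000
samples = rng.uniform(0.0, 1.0, N)
quantiles = (np.arange(N) + 0.5) / N
d = wasserstein1_empirical(samples, quantiles)
```

The distance `d` decays like $O(N^{-1/2})$ as the number of agents grows, which is the quantitative content of the mean-field convergence in this toy setting.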
This procedure can potentially be extended to more complex configurations, such as road networks or different classes of interacting agents, or to other application domains, like cell dynamics.
Another powerful tool we shall consider to deal with micro-macro limits is the so-called Mean Field Games (MFG) technique (see the seminal paper [106]). This approach has recently been applied to some of the systems studied by the team, such as traffic flow and cell dynamics. In the context of crowd dynamics, including the case of several populations with different targets, the mean field game approach has been adopted in [45], [46], [68], [105], under the assumption that the individual behavior evolves according to a stochastic process, which gives rise to parabolic equations greatly simplifying the analysis of the system. Besides, a deterministic context is studied in [118], which considers a nonlocal velocity field. For cell dynamics, in order to take into account the fast processes that occur in the migration-related machinery, a framework such as the one developed in [62] to handle games "where agents evolve their strategies according to the best-reply scheme on a much faster time scale than their social configuration variables" may turn out to be suitable. An alternative framework to MFG is also considered, based on the formulation of Nash games constrained by the Fokker-Planck (FP, [36]) partial differential equations that govern the time evolution of the probability density functions (PDFs) of stochastic systems, and on objectives that may require following a given PDF trajectory or minimizing an expectation functional.
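As a toy illustration of such an FP constraint (a naive explicit finite-difference sketch for the Ornstein-Uhlenbeck process; parameter values, grid and the zero-boundary treatment are simplifying assumptions):

```python
import numpy as np

def fokker_planck_ou(p0, x, t_end, theta=1.0, sigma=1.0):
    """Naive explicit finite differences for the Fokker-Planck equation
        p_t = theta * (x p)_x + (sigma^2 / 2) * p_xx,
    governing the density of the OU process dX = -theta X dt + sigma dW.
    Boundary values are pinned to 0 (the domain is taken large enough)."""
    dx = x[1] - x[0]
    D = 0.5 * sigma ** 2
    dt = 0.4 * dx ** 2 / D               # explicit stability restriction
    p, t = p0.copy(), 0.0
    while t < t_end:
        drift = np.gradient(x * p, dx)   # central differences for (x p)_x
        diff = np.zeros_like(p)
        diff[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx ** 2
        p = p + dt * (theta * drift + D * diff)
        p[0] = p[-1] = 0.0
        t += dt
    return p / (p.sum() * dx)            # renormalized density

# usage: an off-centered bump relaxes to the Gaussian N(0, sigma^2/(2 theta))
x = np.linspace(-5.0, 5.0, 201)
p0 = np.exp(-0.5 * (x - 2.0) ** 2)
p0 /= p0.sum() * (x[1] - x[0])
p = fokker_planck_ou(p0, x, t_end=5.0)
```

In the Nash-game setting, such a forward solve would be constrained by control terms entering the drift, with the objectives evaluated on the computed PDF.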
Nonlocal flows
Nonlocal interactions can be described through macroscopic models based on integro-differential equations. Systems of the type
${\partial}_{t}u+{\text{div}}_{\mathbf{x}}F(t,\mathbf{x},u,W)=0,\phantom{\rule{2.em}{0ex}}t>0,\phantom{\rule{3.33333pt}{0ex}}\mathbf{x}\in {\mathbb{R}}^{d},\phantom{\rule{3.33333pt}{0ex}}d\ge 1,$  (1) 
where $u=u(t,\mathbf{x})\in {\mathbb{R}}^{N}$, $N\ge 1$, is the vector of conserved quantities and the variable $W=W(t,\mathbf{x},u)$ depends on an integral evaluation of $u$, arise in a variety of physical applications. Space-integral terms are considered for example in models for granular flows [33], sedimentation [40], supply chains [89], conveyor belts [90], biological applications like structured population dynamics [113], or more general problems like gradient-constrained equations [34]. Also, nonlocal-in-time terms arise in conservation laws with memory, starting from [61]. In particular, equations with nonlocal flux have recently been introduced in traffic flow modeling to account for the reaction of drivers or pedestrians to the surrounding density of other individuals, see [3], [6], [48], [52], [126]. While pedestrians are likely to react to the presence of people all around them, drivers will mainly adapt their velocity to the downstream traffic, assigning greater importance to closer vehicles. In particular, and in contrast to classical (without integral terms) macroscopic equations, these models are able to display the finite acceleration of vehicles, through Lipschitz bounds on the mean velocity [3], [6], and lane formation in crossing pedestrian flows.
General analytical results on nonlocal conservation laws, proving existence and possibly uniqueness of solutions of the Cauchy problem for (1), can be found in [35] for scalar equations in one space dimension ($N=d=1$), in [53] for scalar equations in several space dimensions ($N=1$, $d\ge 1$), and in [29], [54], [58] for multi-dimensional systems of conservation laws. Besides, specific finite volume numerical methods have been developed recently in [29], [6] and [104].
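A sketch of such a scheme for a nonlocal LWR-type model (the flat downstream kernel, the Lax-Friedrichs discretization, the parameter values and all names are our illustrative choices, not those of the cited works):

```python
import numpy as np

def nonlocal_lwr_step(rho, dx, dt, eta=0.1):
    """One Lax-Friedrichs step for a nonlocal LWR-type model
        rho_t + ( rho * v(W) )_x = 0,   W = (omega * rho)(x),
    where omega is a flat kernel supported downstream on [0, eta]:
    drivers adapt their speed v(W) = 1 - W to the average density
    ahead of them.  Periodic boundary conditions."""
    n = len(rho)
    k = max(1, int(round(eta / dx)))
    W = np.array([np.take(rho, np.arange(i, i + k), mode='wrap').mean()
                  for i in range(n)])
    f = rho * (1.0 - W)                  # nonlocal flux rho * v(W)
    fp, fm = np.roll(f, -1), np.roll(f, 1)
    rp, rm = np.roll(rho, -1), np.roll(rho, 1)
    return 0.5 * (rp + rm) - 0.5 * dt / dx * (fp - fm)

# usage: a congestion bump evolving under the nonlocal dynamics
x = np.linspace(0.0, 1.0, 100, endpoint=False)
rho = np.where(np.abs(x - 0.5) < 0.1, 0.8, 0.2)
dx = x[1] - x[0]
mass0 = rho.sum()
for _ in range(50):
    rho = nonlocal_lwr_step(rho, dx, dt=0.4 * dx)
```

The only difference with the local case is the evaluation of the convolution term $W$ at each step; this is also where high-order schemes for (1) require dedicated quadrature.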
Relying on these encouraging results, we aim to push a step further the analytical and numerical study of nonlocal models of type (1), in particular concerning the well-posedness of initial-boundary value problems, the regularity of solutions, and high-order numerical schemes.
Uncertainty in parameters and initial-boundary data
Different sources of uncertainty can be identified in PDE models, related to the fact that the problem of interest is not perfectly known. First, initial and boundary condition values can be uncertain. For instance, in traffic flows, the time-dependent values of inlet and outlet fluxes, as well as the initial distribution of vehicle density, are not perfectly determined [47]. In aerodynamics, inflow conditions, like velocity modulus and direction, are subject to fluctuations [93], [112]. For some engineering problems, the geometry of the boundary can also be uncertain, due to structural deformation, mechanical wear or the disregard of some details [70]. Another source of uncertainty is related to the value of some parameters in the PDE models. This is typically the case of parameters in turbulence models in fluid mechanics, which have been calibrated according to some reference flows but are not universal [124], [129], or in traffic flow models, which may depend on the type of road, weather conditions, or even the country of interest (due to differences in driving rules and drivers' behaviour). This leads to equations with flux functions depending on random parameters [125], [128], for which the mean and the variance of the solutions can be computed using different techniques. Indeed, uncertainty quantification for systems governed by PDEs has become a very active research topic in recent years. Most approaches are embedded in a probabilistic framework and aim at quantifying the statistical moments of the PDE solutions, under the assumption that the characteristics of the uncertain parameters are known. Note that classical Monte-Carlo approaches exhibit a low convergence rate, so that accurate simulations require very long computational times. In this respect, some enhanced algorithms have been proposed, for example in the balance law framework [111].
Different approaches propose to modify the PDE solvers to account for this probabilistic context, for instance by defining the non-deterministic part of the solution on an orthogonal basis (Polynomial Chaos decomposition) and using a Galerkin projection [93], [102], [108], [133] or an entropy closure method [65], or by discretizing the probability space and extending the numerical schemes to the stochastic components [28]. Alternatively, some other approaches maintain a fully deterministic PDE resolution, but approximate the solution in the vicinity of the reference parameter values by Taylor series expansions based on first- or second-order sensitivities [119], [129], [132].
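The Monte-Carlo baseline against which such intrusive approaches are compared can be sketched on a toy random-flux problem (linear transport with uncertain speed, chosen so that the exact solution is available; the names, distribution and parameter values are illustrative):

```python
import numpy as np

def mc_uncertain_advection(u0, x, t, speed_mean, speed_std,
                           n_samples=500, seed=0):
    """Monte-Carlo quantification for the random-flux linear transport
    u_t + a u_x = 0 with uncertain speed a ~ N(speed_mean, speed_std^2).
    The exact solution u(t, x) = u0(x - a t) is sampled for each draw;
    the sample mean and variance of the solution field are returned."""
    rng = np.random.default_rng(seed)
    a = rng.normal(speed_mean, speed_std, n_samples)
    sols = np.array([u0(x - ai * t) for ai in a])   # one row per sample
    return sols.mean(axis=0), sols.var(axis=0)

# usage: a smooth bump transported with uncertain speed
x = np.linspace(-2.0, 4.0, 300)
u0 = lambda y: np.exp(-10.0 * y ** 2)
mean_u, var_u = mc_uncertain_advection(u0, x, t=1.0,
                                       speed_mean=1.0, speed_std=0.2)
```

The $O({n}^{-1/2})$ convergence of such sample estimates is precisely the low rate mentioned above; Polynomial Chaos and sensitivity-based expansions aim at far fewer model evaluations for comparable accuracy.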
Our objective regarding this topic is twofold. From a pure modeling perspective, we aim at including uncertainty quantification in model calibration and validation for predictive use. In this case, the choice of techniques will depend on the specific problem considered [39]. Besides, we plan to extend previous works on sensitivity analysis [70], [109] to more complex and more demanding problems. In particular, high-order Taylor expansions of the solution (beyond second order) will be considered in the framework of the Sensitivity Equation Method (SEM) [41] for unsteady aerodynamic applications, to improve the accuracy of mean and variance estimations. A second targeted topic in this context is the study of the uncertainty related to turbulence closure parameters, following [129]. We aim at exploring the capability of the SEM approach to detect a change of flow topology in the case of detached flows. Our ambition is to contribute to the emergence of a new generation of simulation tools, which will provide solution densities rather than point values, to tackle real-life uncertain problems. This task will also include a reflection on the numerical schemes used to solve PDE systems, with a view to constructing a unified numerical framework able to account for exact geometries (isogeometric methods), uncertainty propagation and sensitivity analysis w.r.t. control parameters.
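The moment estimates underlying such sensitivity-based approaches can be written explicitly. For a scalar output $u(\theta )$ of a single Gaussian parameter $\theta \sim \mathcal{N}(\overline{\theta },{\sigma }^{2})$, the standard second-order (method-of-moments) formulas read
$E[u]\approx u(\overline{\theta })+\frac{1}{2}{u}^{''}(\overline{\theta })\phantom{\rule{0.166667em}{0ex}}{\sigma }^{2},\phantom{\rule{2.em}{0ex}}\mathrm{Var}[u]\approx {u}^{'}{(\overline{\theta })}^{2}\phantom{\rule{0.166667em}{0ex}}{\sigma }^{2}+\frac{1}{2}{u}^{''}{(\overline{\theta })}^{2}\phantom{\rule{0.166667em}{0ex}}{\sigma }^{4},$
so that first-order sensitivities provide the leading variance contribution, while second-order ones correct the mean; the higher-order Taylor expansions mentioned above refine both estimates.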
Optimization and control algorithms for systems governed by PDEs
The non-classical models described above are developed with a view to design improvement for real-life applications. Therefore, control and optimization algorithms are also developed in conjunction with these models. The focus here is on the methodological development and analysis of optimization algorithms for PDE systems in general, keeping in mind the application domains in the way the problems are mathematically formulated.
Sensitivity vs. adjoint equation
Adjoint methods (derived at the continuous or discrete level) are now commonly used in industry for steady PDE problems. Our recent developments [121] have shown that the (discrete) adjoint method can be efficiently applied to cost gradient computations for time-evolving traffic flow on networks, thanks to the special structure of the associated linear systems and the underlying one-dimensionality of the problem. However, this strategy is questionable for more complex (e.g. 2D/3D) unsteady problems, because it requires sophisticated and time-consuming checkpointing and/or recomputing strategies [37], [88] for the backward time integration of the adjoint variables. The sensitivity equation method (SEM) offers a promising alternative [69], [98], if the number of design parameters is moderate. Moreover, this approach can be employed for other goals, like the fast evaluation of neighboring solutions or uncertainty propagation [70].
Regarding this topic, we intend to apply the continuous sensitivity equation method to challenging problems. In particular, in aerodynamics, multiscale turbulence models like Large-Eddy Simulation (LES) [123], Detached-Eddy Simulation (DES) [127] or Organized-Eddy Simulation (OES) [43] are increasingly employed to analyse the unsteady dynamics of flows around bluff bodies, because they have the ability to compute the interactions of vortices at different scales, contrary to classical Reynolds-Averaged Navier-Stokes models. However, their use in design optimization is tedious, due to the long time integration required. In collaboration with turbulence specialists (M. Braza, CNRS - IMFT), we aim at developing numerical methods for effective sensitivity analysis in this context, and at applying them to realistic problems, like the optimization of active flow control devices. Note that the use of the SEM allows computing cost functional gradients at any time, which makes it possible to construct new gradient-based optimization strategies like the instantaneous-feedback method [100] or multi-objective optimization algorithms (see section below).
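The contrast with the adjoint approach can be sketched on a toy parametric ODE (the sensitivity equation is obtained by formally differentiating the state equation w.r.t. the parameter, and is integrated forward in time together with the state; the example and names are ours):

```python
def forward_sensitivity(p, u0=1.0, t_end=1.0, dt=1e-4):
    """Sensitivity equation method on the toy state equation u' = -p u:
    differentiating it w.r.t. the parameter p yields the sensitivity
    equation  s' = -p s - u,  s(0) = 0  (s = du/dp), which is integrated
    *forward* in time alongside the state -- no backward adjoint sweep,
    hence no checkpointing of the state trajectory is needed."""
    u, s = u0, 0.0
    for _ in range(int(round(t_end / dt))):
        # explicit Euler on the coupled (state, sensitivity) system
        u, s = u + dt * (-p * u), s + dt * (-p * s - u)
    return u, s

# usage: exact values are u(1) = e^{-p} and du/dp(1) = -e^{-p} for u0 = 1
u, s = forward_sensitivity(p=2.0)
```

The cost grows linearly with the number of parameters (one sensitivity system per parameter), which is why the SEM is attractive precisely when this number is moderate.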
Multi-objective descent algorithms for multidisciplinary, multipoint, unsteady optimization or robust design
In differentiable optimization, multidisciplinary, multipoint, unsteady optimization and robust design can all be formulated as multi-objective optimization problems. In this area, we have proposed the Multiple-Gradient Descent Algorithm (MGDA) to handle all criteria concurrently [71], [72]. Originally, we stated a principle according to which, given a family of local gradients, a descent direction common to all the considered objective functions simultaneously is identified, assuming the Pareto-stationarity condition is not satisfied. When the family is linearly independent, a direct algorithm is available. Conversely, when the family is linearly dependent, a quadratic-programming problem has to be solved. Hence, the technical difficulty is mostly conditioned by the number $m$ of objective functions relative to the search space dimension $n$. In this respect, the basic algorithm has recently been revised [73] to handle the case where $m>n$, and even $m\gg n$, and is currently being tested on a test case of robust design subject to a periodic time-dependent Navier-Stokes flow.
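The core geometric step can be sketched as the computation of the minimum-norm element of the convex hull of the gradients (here via a generic Frank-Wolfe iteration, which is our illustrative substitute for the direct and quadratic-programming procedures of [71], [72], [73]):

```python
import numpy as np

def mgda_direction(grads, iters=500):
    """Minimum-norm element d* of the convex hull of the gradients:
    d* = sum_i alpha_i g_i with alpha in the unit simplex minimizing
    ||d||.  Solved here by Frank-Wolfe (an illustrative choice).
    Then -d* is a descent direction common to all objectives, unless
    d* = 0, which is the Pareto-stationarity condition."""
    G = np.asarray(grads, dtype=float)    # shape (m, n): m gradients
    m = G.shape[0]
    alpha = np.full(m, 1.0 / m)
    for k in range(iters):
        d = alpha @ G
        i = np.argmin(G @ d)              # best vertex of the simplex
        gamma = 2.0 / (k + 2.0)           # standard Frank-Wolfe step
        e = np.zeros(m); e[i] = 1.0
        alpha = (1.0 - gamma) * alpha + gamma * e
    return alpha @ G

# usage: two conflicting gradients in the plane
d = mgda_direction([[1.0, 0.0], [0.0, 1.0]])
```

For these two orthogonal gradients the minimum-norm point is $(0.5,0.5)$, and $-d$ decreases both objectives at once; when $m\gg n$ the hull is degenerate and the quadratic-programming formulation of [73] becomes the relevant tool.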
The multipoint situation is very similar and, being of great importance for engineering applications, will be treated extensively.
Moreover, we intend to develop and test a new methodology for robust design that will include uncertainty effects. More precisely, we propose to employ MGDA to achieve an effective improvement of all criteria simultaneously, whether these are of a statistical nature or discrete functional values evaluated at points in the confidence intervals of the parameters. Some recent results obtained at ONERA [116] with a stochastic variant of our methodology confirm the viability of the approach. A PhD thesis has also been launched at ONERA/DADS.
Lastly, we note that in situations where gradients are difficult to evaluate, the method can be assisted by a metamodel [135].
Bayesian Optimization algorithms for efficient computation of general equilibria
Bayesian Optimization (BO) relies on Gaussian processes, which are used as emulators (or surrogates) of the black-box model outputs, based on a small set of model evaluations. The posterior distributions provided by the Gaussian process are used to design acquisition functions that guide sequential search strategies balancing exploration and exploitation. Such approaches have been transposed to frameworks other than optimization, such as uncertainty quantification. Our aim is to investigate how the BO apparatus can be applied to the search of general game equilibria, and in particular the classical Nash equilibrium (NE). To this end, we propose two complementary acquisition functions, one based on a greedy search approach and one based on the Stepwise Uncertainty Reduction paradigm [80]. Our proposal is designed to tackle derivative-free, expensive models, hence requiring very few model evaluations to converge to the solution.
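A minimal BO loop illustrating this mechanism (a from-scratch sketch: the squared-exponential kernel, its length-scale, the grid-based acquisition maximization and all names are simplifying assumptions, not the team's methodology for game equilibria):

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(A, B, ls=0.3):
    """Squared-exponential kernel; the length-scale ls is an assumption."""
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Zero-mean Gaussian-process posterior mean/std at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sd, best):
    """EI acquisition for minimization: E[max(best - Y, 0)]."""
    z = (best - mu) / sd
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / sqrt(2.0 * pi)
    return (best - mu) * Phi + sd * phi

def bayes_opt(f, lo, hi, n_init=4, n_iter=15, seed=0):
    """Sequential BO loop: fit GP, maximize EI on a grid, evaluate f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)
    y = np.array([f(xi) for xi in X])
    grid = np.linspace(lo, hi, 400)
    for _ in range(n_iter):
        mu, sd = gp_posterior(X, y, grid)
        x_new = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
        X, y = np.append(X, x_new), np.append(y, f(x_new))
    return X[np.argmin(y)], y.min()

# usage: minimize a smooth 1-D function with minimum at x = 0.4
x_best, y_best = bayes_opt(lambda x: (x - 0.4) ** 2, 0.0, 1.0)
```

For game equilibria, the single acquisition above would be replaced by criteria targeting the equilibrium conditions, e.g. the greedy and Stepwise Uncertainty Reduction acquisitions mentioned in the text.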
Decentralized strategies for inverse problems
Most, if not all, mathematical formulations of inverse problems (a.k.a. reconstruction, identification, data recovery, non-destructive engineering, ...) are known to be ill-posed in the Hadamard sense. Indeed, in general, inverse problems try to fulfill (minimize) two or more very antagonistic criteria. A classical example is Tikhonov regularization, which seeks artificially smoothed solutions close to naturally non-smooth data.
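In its linear-algebraic form, this trade-off admits a closed-form solution, which a short sketch makes explicit (the example matrix and values are ours):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized least squares: the two antagonistic criteria
    ||A x - b||^2 (data fidelity) and ||x||^2 (solution size/smoothness)
    are blended into  min_x ||A x - b||^2 + lam ||x||^2,  whose unique
    minimizer solves the well-posed normal equations
    (A^T A + lam I) x = A^T b, even when A^T A itself is singular."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# usage: an almost rank-deficient system, where the unregularized normal
# equations are numerically hopeless but the regularized ones are benign
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = tikhonov(A, b, lam=1e-3)
```

The parameter `lam` arbitrates between the two criteria; in the game-theoretic reformulation below, such an a priori weighting is replaced by an equilibrium between dedicated players.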
We consider here the general theoretical framework of parameter identification coupled to (missing) data recovery. Our aim is to design, study and implement algorithms derived within a game-theoretic framework, which are able to find, with computational efficiency, equilibria between the "identification-related players" and the "data-recovery players". These two tasks are known to pose many challenges, from a theoretical point of view, like the identifiability issue, and from a numerical one, like convergence, stability and robustness problems. These questions are tricky [30] and still completely open for systems such as, e.g., coupled heat and thermoelastic joint data and material detection.