ACUMES aims at developing a rigorous framework for numerical simulations and optimal control for transportation and buildings, with a focus on multi-scale, heterogeneous, unsteady phenomena subject to uncertainty. Starting from established macroscopic Partial Differential Equation (PDE) models, we pursue a set of innovative approaches to include small-scale phenomena that impact the whole system. Targeting applications contributing to the sustainability of urban environments, we couple the resulting models with robust control and optimization techniques.

Modern engineering sciences make extensive use of mathematical models and numerical simulations at the conception stage. Effective models and efficient numerical tools allow for optimization before production, avoiding the construction of expensive prototypes or costly post-process adjustments. Most up-to-date modeling techniques aim at helping engineers increase the performance and safety of their products while reducing costs and pollutant emissions. For example, mathematical traffic flow models are used by civil engineers to test new management strategies, in order to reduce congestion on existing road networks and improve crowd evacuation from buildings or other confined spaces without constructing new infrastructure. Similar models are also used in mechanical engineering, in conjunction with concurrent optimization methods, to reduce the energy consumption, noise and pollutant emissions of cars, or to increase the thermal and structural efficiency of buildings while, in both cases, reducing ecological costs.

Nevertheless, current models and numerical methods exhibit some limitations:

This project focuses on the analysis and optimal control of classical and non-classical evolutionary systems of Partial Differential Equations (PDEs) arising in the modeling and optimization of engineering problems related to the safety and sustainability of urban environments, mostly involving fluid dynamics and structural mechanics. The complexity of the dynamical systems involved stems from multi-scale, time-dependent phenomena, possibly subject to uncertainty, which can hardly be tackled using classical approaches and require the development of unconventional techniques.

The project develops along the following two axes:

These themes are motivated by the specific problems treated in the applications, and represent important and up-to-date issues in engineering sciences. For example, improving the design of transportation means and civil buildings, and the control of traffic flows, would result not only in better performance of the optimized objects (vehicles, buildings, or road network level of service), but also in enhanced safety and lower energy consumption, contributing to reduced costs and pollutant emissions.

Dynamical models consisting of evolutionary PDEs, mainly of hyperbolic type, appear classically in the applications studied by the previous Project-Team Opale (compressible flows, traffic, cell dynamics, medicine, etc.). Yet the classical, purely macroscopic approach is not able to account for some particular phenomena related to specific interactions occurring at smaller scales. These phenomena can be of greater importance when dealing with particular applications, where the "first order" approximation given by the purely macroscopic approach proves inadequate. We refer for example to self-organizing phenomena observed in pedestrian flows, or to the dynamics of turbulent flows, in which large-scale and small-scale vortical structures interfere.

Nevertheless, macroscopic models offer well-known advantages, namely a sound analytical framework, fast numerical schemes, a low number of parameters to be calibrated, and efficient optimization procedures. Therefore, we are convinced that this point of view should remain dominant, while the models are completed with information on the dynamics at the small / microscopic scale. This can be achieved through several techniques, such as hybrid models, homogenization, and mean field games. In this project, we will focus on the aspects detailed below.

The development of adapted and efficient numerical schemes is a necessary complement to, and sometimes an ingredient of, all the approaches listed below. The numerical schemes developed by the team are based on finite volume or finite element techniques, and constitute an important tool in the study of the considered models, providing a necessary step towards the design and implementation of the corresponding optimization algorithms, see Section .

Modeling complex problems with a dominant macroscopic point of view often requires couplings with small-scale descriptions. Accounting for system heterogeneity or for different degrees of accuracy usually leads to coupled PDE-ODE systems.

In the case of heterogeneous problems, the coupling is "intrinsic", i.e. the two models evolve together and mutually affect each other. For example, accounting for the impact of a large and slow vehicle (like a bus or a truck) on traffic flow leads to a strongly coupled system consisting of a (system of) conservation law(s) coupled with an ODE describing the bus trajectory, which acts as a moving bottleneck. The coupling is realized through a local unilateral moving constraint on the flow at the bus location, see for an existence result and for numerical schemes.

If the coupling is intended to offer a higher degree of accuracy at some locations, a macroscopic and a microscopic model are connected through an artificial boundary, and exchange information across it through suitable boundary conditions. See for some applications in traffic flow modelling, and for applications to cell dynamics.

The corresponding numerical schemes are usually based on classical finite volume or finite element methods for the PDE, and Euler or Runge-Kutta schemes for the ODE, coupled in order to take into account the interaction fronts. In particular, the dynamics of the coupling boundaries requires accurate handling, capturing the possible presence of non-classical shocks and preventing numerical diffusion, which could produce wrong solutions; see for example , .
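To make the PDE-ODE structure concrete, here is a deliberately simplified sketch: a Godunov scheme for the LWR equation, an explicit Euler step for the bus ODE, and a crude capping of the numerical flux at the interface closest to the bus, mimicking the unilateral constraint f(rho) - ydot*rho <= F_alpha. The reduced capacity F_alpha and all parameter values are illustrative assumptions; this is not the constrained Riemann solver of the cited works.

```python
import numpy as np

# Simplified moving-bottleneck sketch: LWR traffic flow (conservation law)
# strongly coupled with the ODE of a slow vehicle (a "bus").
V, R = 1.0, 1.0                       # free-flow speed, maximal density
f = lambda r: r * V * (1 - r / R)     # LWR flux
v = lambda r: V * (1 - r / R)         # traffic speed

def godunov_flux(rl, rr):
    # Godunov flux for the concave LWR flux, maximum at R/2
    if rl <= rr:
        return min(f(rl), f(rr))
    return f(R / 2) if rl > R / 2 > rr else max(f(rl), f(rr))

L, N, T = 10.0, 200, 5.0
dx = L / N
x = np.linspace(dx / 2, L - dx / 2, N)
rho = np.where(x < 5.0, 0.4, 0.1)     # initial density
y, v_bus = 2.0, 0.3                   # bus position and desired speed
alpha = 0.6                           # fraction of road capacity left at the bus
t = 0.0
while t < T:
    dt = 0.5 * dx / V                 # CFL condition
    F = np.array([godunov_flux(rho[i], rho[i + 1]) for i in range(N - 1)])
    j = min(int(y / dx), N - 1)
    ydot = min(v_bus, v(rho[j]))      # bus speed capped by traffic speed (ODE)
    # crude constraint enforcement at the interface closest to the bus
    F_alpha = alpha * f(R / 2)        # simplified reduced capacity
    if 0 < j < N - 1:
        F[j] = min(F[j], F_alpha + ydot * rho[j])
    rho[1:-1] -= dt / dx * (F[1:] - F[:-1])
    y += dt * ydot                    # explicit Euler step for the bus
    t += dt
```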

We plan to pursue our activity in this framework, also extending the above mentioned approaches to problems in two or higher space dimensions, to cover applications to crowd dynamics or fluid-structure interaction.

Rigorous derivation of macroscopic models from microscopic ones offers a sound basis for the proposed modeling approach, and can provide alternative numerical schemes; see for example the derivation of the Lighthill-Whitham-Richards traffic flow model from the Follow-the-Leader model, and results on crowd motion models. To tackle this aspect, we will rely mainly on two (interconnected) concepts: measure-valued solutions and mean-field limits.

The notion of measure-valued solutions for conservation laws was first introduced by DiPerna, and has been extensively used since then to prove convergence of approximate solutions and deduce existence results; see for example and references therein. Measure-valued functions have recently been advocated as the appropriate notion of solution to tackle problems for which analytical results (such as existence and uniqueness of weak solutions in the distributional sense) and numerical convergence are missing. We refer, for example, to the notion of solution for non-hyperbolic systems, for which no general theoretical result is available at present, and to the convergence of finite volume schemes for systems of hyperbolic conservation laws in several space dimensions.

In this framework, we plan to investigate and make use of measure-based PDE models for vehicular and pedestrian traffic flows. Indeed, a modeling approach based on (multi-scale) time-evolving measures (expressing the agents' probability distribution in space) has recently been introduced (see the monograph ), and has proved successful in studying emerging self-organized flow patterns. The theoretical measure framework also proves relevant in addressing micro-macro limiting procedures of mean-field type, where one lets the number of agents go to infinity while keeping the total mass constant. In this case, one must prove that the empirical measure, corresponding to the sum of Dirac measures concentrated at the agents' positions, converges to a measure-valued solution of the corresponding macroscopic evolution equation.
We recall that a key ingredient in this approach is the use of the Wasserstein distances. Indeed, the usual Lp distances are not suited to compare an empirical (atomic) measure with an absolutely continuous one, the two being mutually singular, whereas Wasserstein metrics metrize the relevant weak convergence of measures.
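As a small numerical illustration of this convergence, the hedged sketch below evaluates the Wasserstein-1 distance between the empirical measure of N agents and a macroscopic density, using the one-dimensional closed form W1 = integral of |F_mu - F_nu|; all data are synthetic.

```python
import numpy as np

# 1D Wasserstein-1 distance between an empirical measure and a density,
# via the cumulative distribution functions (valid in one space dimension).
def w1_empirical_vs_density(positions, x, rho):
    """positions: agent locations; (x, rho): grid and density."""
    rho = rho / np.trapz(rho, x)      # normalize to a probability density
    F_macro = np.concatenate(
        ([0.0], np.cumsum(0.5 * (rho[1:] + rho[:-1]) * np.diff(x))))
    F_micro = np.searchsorted(np.sort(positions), x, side="right") / len(positions)
    return np.trapz(np.abs(F_micro - F_macro), x)

# Agents sampled from a Gaussian profile: W1 shrinks as N grows, illustrating
# the mean-field convergence of the empirical measure.
x = np.linspace(-5, 5, 1001)
rho = np.exp(-x ** 2 / 2)
for N in (10, 100, 1000, 10000):
    agents = np.random.randn(N)
    print(N, w1_empirical_vs_density(agents, x, rho))
```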

This procedure can potentially be extended to more complex configurations, like for example road networks or different classes of interacting agents, or to other application domains, like cell-dynamics.

Another powerful tool we shall consider to deal with micro-macro limits is the Mean Field Games (MFG) technique (see the seminal paper ). This approach has recently been applied to some of the systems studied by the team, such as traffic flow and cell dynamics. In the context of crowd dynamics, including the case of several populations with different targets, the mean field game approach has been adopted in several works under the assumption that the individual behavior evolves according to a stochastic process, which gives rise to parabolic equations greatly simplifying the analysis of the system. Besides, a deterministic context has been studied, considering a non-local velocity field. For cell dynamics, in order to take into account the fast processes that occur in the migration-related machinery, a suitable framework may be the one developed to handle games "where agents evolve their strategies according to the best-reply scheme on a much faster time scale than their social configuration variables". An alternative framework to MFG is also considered, based on the formulation of (Nash) games constrained by the Fokker-Planck (FP) partial differential equations that govern the time evolution of the probability density functions (PDF) of stochastic systems, with objectives that may require following a given PDF trajectory or minimizing an expectation functional.

Non-local interactions can be described through macroscopic models based on integro-differential equations, namely systems of conservation laws of the type

$$\partial_t u + \mathrm{div}_{x}\, F(t, x, u, W) = 0, \qquad W = W(t, x, u),$$

where the flux depends on a non-local evaluation $W$ of the unknown $u$, typically obtained by convolution with a given kernel.

General analytical results on non-local conservation laws, proving existence and possibly uniqueness of solutions of the Cauchy problem, are available for scalar equations in one space dimension.
Relying on these encouraging results, we aim to push the analytical and numerical study of non-local models of this type a step further, in particular concerning the well-posedness of initial-boundary value problems, the regularity of solutions, and high-order numerical schemes.
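A hedged numerical sketch of such a model is given below: a first-order Lax-Friedrichs scheme for a scalar non-local traffic equation whose speed depends on a downstream ("look-ahead") average of the density. The kernel, the speed law v(W) = 1 - W and all parameters are illustrative assumptions.

```python
import numpy as np

# Lax-Friedrichs scheme for the non-local model rho_t + (rho * v(W))_x = 0,
# with W(t, x) a downstream convolution of the density.
L, N, T = 10.0, 400, 2.0
dx = L / N
x = np.linspace(dx / 2, L - dx / 2, N)
rho = 0.8 * np.exp(-(x - 3.0) ** 2)        # initial density, values in [0, 1]

k = int(1.0 / dx)                           # look-ahead distance of 1.0
w = np.linspace(1.0, 0.0, k)
w /= w.sum()                                # decreasing kernel weights, unit sum

t = 0.0
while t < T:
    padded = np.concatenate([rho, np.zeros(k)])   # empty road downstream
    W = np.array([w @ padded[i + 1 : i + 1 + k] for i in range(N)])
    f = rho * (1.0 - W)                     # non-local flux
    dt = 0.5 * dx                           # CFL (maximal speed <= 1)
    # Lax-Friedrichs interface fluxes
    F = 0.5 * (f[:-1] + f[1:]) - 0.5 * dx / dt * (rho[1:] - rho[:-1])
    rho[1:-1] -= dt / dx * (F[1:] - F[:-1])
    t += dt
```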

Different sources of uncertainty can be identified in PDE models, related to the fact that the problem of interest is not perfectly known. First, initial and boundary condition values can be uncertain. For instance, in traffic flows, the time-dependent values of inlet and outlet fluxes, as well as the initial distribution of the vehicle density, are not perfectly determined. In aerodynamics, inflow conditions, like velocity modulus and direction, are subject to fluctuations. For some engineering problems, the geometry of the boundary can also be uncertain, due to structural deformation, mechanical wear or the disregard of some details. Another source of uncertainty is related to the values of some parameters in the PDE models. This is typically the case of parameters in turbulence models in fluid mechanics, which have been calibrated according to some reference flows but are not universal, or in traffic flow models, which may depend on the type of road, weather conditions, or even the country of interest (due to differences in driving rules and drivers' behaviour). This leads to equations with flux functions depending on random parameters, for which the mean and the variance of the solutions can be computed using different techniques.

Indeed, uncertainty quantification for systems governed by PDEs has become a very active research topic in recent years. Most approaches are embedded in a probabilistic framework and aim at quantifying the statistical moments of the PDE solutions, under the assumption that the characteristics of the uncertain parameters are known. Note that classical Monte-Carlo approaches exhibit a low convergence rate, so that accurate simulations require huge computational times. In this respect, some enhanced algorithms have been proposed, for example in the balance law framework. Other approaches propose to modify the PDE solvers to account for this probabilistic context, for instance by expanding the non-deterministic part of the solution on an orthogonal basis (Polynomial Chaos decomposition) and using a Galerkin projection, or an entropy closure method, or by discretizing the probability space and extending the numerical schemes to the stochastic components. Alternatively, some other approaches maintain a fully deterministic PDE resolution, but approximate the solution in the vicinity of the reference parameter values by Taylor series expansions based on first- or second-order sensitivities.
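As a toy illustration of the trade-off discussed above, the sketch below compares plain Monte-Carlo sampling with Gauss-Hermite quadrature (the building block of Polynomial Chaos projections) for the first two moments of a scalar output u(theta) with a Gaussian parameter; the model u is an invented stand-in for a PDE solver output.

```python
import numpy as np

def u(theta):                      # hypothetical expensive model output
    return np.sin(theta) + 0.1 * theta ** 2

mu, sigma = 1.0, 0.3               # mean and std of the uncertain parameter

# Monte-Carlo: slow O(1/sqrt(n)) convergence of the moment estimates
samples = u(mu + sigma * np.random.randn(100000))
print("MC   mean/var:", samples.mean(), samples.var())

# Gauss-Hermite quadrature: spectral accuracy for smooth u, few evaluations
nodes, weights = np.polynomial.hermite_e.hermegauss(8)   # probabilists' Hermite
vals = u(mu + sigma * nodes)
mean = weights @ vals / np.sqrt(2 * np.pi)
var = weights @ (vals - mean) ** 2 / np.sqrt(2 * np.pi)
print("Quad mean/var:", mean, var)
```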

Our objective regarding this topic is twofold. From a pure modeling perspective, we aim at including uncertainty quantification in model calibration and validation for predictive use. In this case, the choice of techniques will depend on the specific problem considered. Besides, we plan to extend previous works on sensitivity analysis to more complex and more demanding problems. In particular, high-order Taylor expansions of the solution (greater than two) will be considered in the framework of the Sensitivity Equation Method (SEM) for unsteady aerodynamic applications, to improve the accuracy of mean and variance estimations. A second targeted topic in this context is the study of the uncertainty related to turbulence closure parameters, following previous work. We aim at exploring the capability of the SEM approach to detect a change of flow topology in the case of detached flows. Our ambition is to contribute to the emergence of a new generation of simulation tools, which will provide solution densities rather than point values, to tackle real-life uncertain problems. This task will also include a reflection on the numerical schemes used to solve PDE systems, in the perspective of constructing a unified numerical framework able to account for exact geometries (isogeometric methods), uncertainty propagation and sensitivity analysis w.r.t. control parameters.
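The moment estimates targeted by such sensitivity-based approaches can be sketched as follows: first- and second-order Taylor expansions of the solution around the mean parameter yield cheap approximations of the mean and variance, here with finite differences standing in for the sensitivity solves and a synthetic scalar model.

```python
import numpy as np

def solver(a):                     # stand-in for a PDE solve depending on a
    return np.tanh(2 * a) + a ** 2

a0, s, h = 0.5, 0.1, 1e-4          # mean parameter, its std, FD step
u0 = solver(a0)
u1 = (solver(a0 + h) - solver(a0 - h)) / (2 * h)          # first sensitivity
u2 = (solver(a0 + h) - 2 * u0 + solver(a0 - h)) / h ** 2  # second sensitivity

mean_1st, var_1st = u0, u1 ** 2 * s ** 2                  # first-order estimates
mean_2nd = u0 + 0.5 * u2 * s ** 2                         # second-order mean
var_2nd = u1 ** 2 * s ** 2 + 0.5 * u2 ** 2 * s ** 4       # second-order variance

mc = solver(a0 + s * np.random.randn(200000))             # Monte-Carlo reference
print(mean_1st, mean_2nd, mc.mean())
print(var_1st, var_2nd, mc.var())
```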

The non-classical models described above are developed in the perspective of design improvement for real-life applications. Therefore, control and optimization algorithms are also developed in conjunction with these models. The focus here is on the methodological development and analysis of optimization algorithms for PDE systems in general, keeping in mind the application domains in the way the problems are mathematically formulated.

Adjoint methods (derived at the continuous or discrete level) are now commonly used in industry for steady PDE problems. Our recent developments have shown that the (discrete) adjoint method can be efficiently applied to cost gradient computations for time-evolving traffic flow on networks, thanks to the special structure of the associated linear systems and the underlying one-dimensionality of the problem. However, this strategy is questionable for more complex (e.g. 2D/3D) unsteady problems, because it requires sophisticated and time-consuming check-pointing and/or re-computing strategies for the backward time integration of the adjoint variables. The sensitivity equation method (SEM) offers a promising alternative if the number of design parameters is moderate. Moreover, this approach can be employed for other goals, like the fast evaluation of neighboring solutions or uncertainty propagation.

Regarding this topic, we intend to apply the continuous sensitivity equation method to challenging problems. In particular, in aerodynamics, multi-scale turbulence models like Large-Eddy Simulation (LES), Detached-Eddy Simulation (DES) or Organized-Eddy Simulation (OES) are increasingly employed to analyse the unsteady dynamics of flows around bluff bodies, because they have the ability to compute the interactions of vortices at different scales, contrary to classical Reynolds-Averaged Navier-Stokes models. However, their use in design optimization is tedious, due to the long time integration required. In collaboration with turbulence specialists (M. Braza, CNRS - IMFT), we aim at developing numerical methods for effective sensitivity analysis in this context, and at applying them to realistic problems, like the optimization of active flow control devices. Note that the use of SEM allows computing cost functional gradients at any time, which makes it possible to construct new gradient-based optimization strategies like the instantaneous-feedback method or multiobjective optimization algorithms (see section below).

A major difficulty in shape optimization is related to the multiplicity of geometrical representations handled during the design process. From high-order Computer-Aided Design (CAD) objects to discrete mesh-based descriptions, several geometrical transformations have to be performed, that considerably impact the accuracy, the robustness and the complexity of the design loop. This is even more critical when multiphysics applications are targeted, including moving bodies.

To overcome this difficulty, we intend to investigate isogeometric analysis methods, which propose to use the same CAD representations for the computational domain and the physical solutions, yielding geometrically exact simulations. In particular, hyperbolic systems and compressible aerodynamics are targeted.

In differentiable optimization, multi-disciplinary, multi-point, unsteady optimization or robust design can all be formulated as multi-objective optimization problems. In this area, we have proposed the Multiple-Gradient Descent Algorithm (MGDA) to handle all criteria concurrently. Originally, we stated a principle according to which, given a family of local gradients, a descent direction common to all the considered objective functions simultaneously is identified, assuming the Pareto-stationarity condition is not satisfied. When the family is linearly independent, a direct algorithm is available. Conversely, when the family is linearly dependent, a quadratic-programming problem has to be solved. Hence, the technical difficulty is mostly conditioned by the number of objective functions considered.
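The core MGDA step can be sketched as follows: compute the minimum-norm element omega of the convex hull of the objective gradients; if omega = 0 the point is Pareto-stationary, otherwise -omega is a descent direction common to all objectives. The Frank-Wolfe iteration below is a simple stand-in for the quadratic-programming solver mentioned above.

```python
import numpy as np

def mgda_direction(G, iters=2000):
    """G: (m, n) array whose rows are the m objective gradients."""
    m = G.shape[0]
    alpha = np.full(m, 1.0 / m)                # start from the barycenter
    for k in range(iters):
        omega = alpha @ G                      # current convex combination
        scores = G @ omega                     # <g_i, omega> for each objective
        s = np.zeros(m)
        s[np.argmin(scores)] = 1.0             # Frank-Wolfe vertex
        gamma = 2.0 / (k + 2.0)                # standard FW step size
        alpha = (1 - gamma) * alpha + gamma * s
    return alpha @ G                           # minimal-norm element omega

# Two conflicting objectives in 2D: -omega decreases both simultaneously,
# since <g_i, omega> >= ||omega||^2 (approximately) at the optimum.
g1 = np.array([1.0, 0.2])
g2 = np.array([0.1, 1.0])
omega = mgda_direction(np.stack([g1, g2]))
print(omega, g1 @ omega, g2 @ omega)
```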

The multi-point situation is very similar and, being of great importance for engineering applications, will be treated extensively.

Moreover, we intend to develop and test a new methodology for robust design that includes uncertainty effects. More precisely, we propose to employ MGDA to achieve an effective improvement of all criteria simultaneously, whether these are of statistical nature or discrete functional values evaluated in confidence intervals of the parameters. Some recent results obtained at ONERA with a stochastic variant of our methodology confirm the viability of the approach. A PhD thesis has also been launched at ONERA/DADS.

Lastly, we note that in situations where gradients are difficult to evaluate, the method can be assisted by a meta-model .

Bayesian Optimization (BO) relies on Gaussian processes, which are used as emulators (or surrogates) of the black-box model outputs based on a small set of model evaluations. The posterior distributions provided by the Gaussian process are used to design acquisition functions that guide sequential search strategies balancing exploration and exploitation. Such approaches have been transposed to frameworks other than optimization, such as uncertainty quantification. Our aim is to investigate how the BO apparatus can be applied to the search of general game equilibria, and in particular the classical Nash equilibrium (NE). To this end, we propose two complementary acquisition functions, one based on a greedy search approach and one based on the Stepwise Uncertainty Reduction paradigm. Our proposal is designed to tackle derivative-free, expensive models, hence requiring very few model evaluations to converge to the solution.
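A minimal single-objective sketch of the BO machinery (Gaussian-process surrogate plus expected-improvement acquisition) is given below, using scikit-learn; the black-box and design space are toy stand-ins, and the game-equilibrium acquisitions discussed above are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

f = lambda x: np.sin(3 * x) + 0.5 * x        # expensive black-box (toy)
X = np.array([[0.2], [1.0], [2.5]])          # small initial design
y = f(X).ravel()
grid = np.linspace(0.0, 3.0, 400).reshape(-1, 1)

for it in range(10):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_new = grid[np.argmax(ei)]                          # exploration/exploitation
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new[0]))

print("best design:", X[np.argmin(y)], "value:", y.min())
```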

Most, if not all, mathematical formulations of inverse problems (a.k.a. reconstruction, identification, data recovery, non-destructive testing, ...) are known to be ill-posed in the Hadamard sense. Indeed, in general, inverse problems try to fulfill (minimize) two or more very antagonistic criteria. One classical example is Tikhonov regularization, which seeks artificially smoothed solutions close to naturally non-smooth data.

We consider here the general theoretical framework of parameter identification coupled to (missing) data recovery. Our aim is to design, study and implement algorithms derived within a game-theoretic framework, which are able to find, with computational efficiency, equilibria between the "identification-related players" and the "data recovery players". These two parts are known to pose many challenges, from a theoretical point of view, like the identifiability issue, and from a numerical one, like convergence, stability and robustness problems. These questions are tricky and still completely open for systems such as coupled heat and thermoelasticity with joint data and material detection.

The reduction of CO2 emissions represents a great challenge for the automotive and aeronautic industries, which have committed to reductions of 20% by 2020 and 75% by 2050, respectively. This goal will not be reachable unless a significant improvement of the aerodynamic performance of cars and aircraft is achieved (e.g. aerodynamic resistance represents 70% of energy losses for cars above 90 km/h). Since vehicle design cannot be significantly modified, for marketing or structural reasons, active flow control technologies are among the most promising approaches to improve aerodynamic performance. This consists in introducing micro-devices, like pulsating jets or vibrating membranes, that can modify the vortices generated by vehicles. Thanks to flow non-linearities, a small energy expense for actuation can significantly reduce energy losses. The efficiency of this approach has been demonstrated, experimentally as well as numerically, for simple configurations.

However, the lack of efficient and flexible numerical tools allowing the simulation and optimization of a large number of such devices on realistic configurations is still a bottleneck for the emergence of this technology in industry. The main issue is the necessity of using high-order schemes and complex models to simulate actuated flows, accounting for phenomena occurring at different scales. In this context, we intend to contribute to the following research axes:

Intelligent Transportation Systems (ITS) is nowadays a booming sector, where the contribution of mathematical modeling and optimization is widely recognized. In this perspective, traffic flow models are a commonly cited example of "complex systems", in which individual behavior and self-organization phenomena must be taken into account to obtain a realistic description of the observed macroscopic dynamics. Further improvements require more advanced models, taking better account of interactions at the microscopic scale, and adapted control techniques, see and references therein.

In particular, we will focus on the following aspects:

Atherosclerosis is a chronic inflammatory disease that affects the entire arterial network, and especially the coronary arteries. It is an accumulation of lipids on the arterial surface due to a dysfunction of the latter. The objective of clinical intervention, in this case, is to establish revascularization using different angioplasty techniques, among which the implantation of stents is the most widespread. This intervention involves introducing a stent into the damaged portion in order to allow the blood to circulate normally over all the vessels. Revascularization is based on the principle of remedying ischemia, which is a decrease or an interruption of the supply of oxygen to the various organs. This anomaly is accentuated by the presence of several lesions (multivessel disease patients), which can lead to several complications. The key to a good medical intervention is establishing a good diagnosis, in order to decide which lesion requires treatment. In the diagnosis phase, the clinician uses several techniques, among which angiography is the most popular. Angiography is an X-ray technique to show the inside (the lumen) of blood vessels, in order to identify vessel narrowing: stenosis. Despite its widespread use, angiography is often imperfect in determining the physiological significance of coronary stenosis. While the problem remains simple for clearly non-significant or severe lesions, the physiological significance of intermediate stenoses is much harder to assess from imaging alone.

The technique of the Fractional Flow Reserve (FFR) derives from early coronary physiology approaches developed decades ago. Since then, many studies have demonstrated its effectiveness in improving the patients' prognosis, by applying the appropriate approach. Its contribution to the reduction of mortality was statistically proved by the FAME (Fractional Flow Reserve Versus Angiography for Multivessel Evaluation) study . It is established that the FFR can be easily measured during coronary angiography by calculating the ratio of the distal coronary pressure to the aortic pressure.

Obviously, from an interventional point of view, the FFR is constraining since it is invasive. It should also be noted that this technique induces additional costs, which are not covered by insurance in several countries. For these reasons, it is used in less than 10% of cases.

In this perspective, a new virtual version of the FFR, called vFFR, has emerged as an attractive and non-invasive alternative to the standard FFR, see , . vFFR is based on computational modeling, mainly fluid and fluid-structure dynamics. However, there are key scientific, logistic and commercial challenges that need to be overcome before vFFR can be translated into routine clinical practice.

While most studies related to vFFR use Navier-Stokes models, we focus on the non-Newtonian case, starting with a generalized fluid flow approach. These models are more relevant for the coronary arteries, and we expect the computation of the FFR to be more accurate as a result. We are also leading numerical studies to assess the impact (on the FFR) of the interaction of physical devices (catheter, optical captors, spheroids) with the blood flow.

Besides the above mentioned axes, which constitute the project's identity, the methodological tools described in Section have a wider range of application. We currently also carry out the following research actions, in collaboration with external partners.

Game strategies for thermoelastography.
Thermoelastography is an innovative non-invasive control technology with numerous advantages over other techniques, notably in medical imaging . Indeed, it is well known that most pathological changes are associated with changes in tissue stiffness, while remaining isoechoic and hence difficult to detect by ultrasound techniques. Based on elastic waves and heat flux reconstruction, thermoelastography has no destructive or aggressive medical sequelae, unlike X-rays and comparable techniques, making it a potentially prominent choice for patients.

Physical principles of thermoelastography originally rely on dynamical structural responses of tissues, but as a first approach, we only consider static responses of linear elastic structures.

The mathematical formulation of the thermoelasticity reconstruction is based on data completion and material identification, making it a severely ill-posed inverse problem. In previous works , , we have demonstrated that Nash game approaches are efficient in tackling ill-posedness. We intend to extend the results obtained for Laplace equations in , and the algorithms developed in Section , to the following problems of increasing difficulty (a toy sketch of the underlying best-response iteration is given after the list):

- Simultaneous data and parameter recovery in linear elasticity, using the so-called Kohn and Vogelius functional (ongoing work, some promising results obtained).

- Data recovery in coupled heat-thermoelasticity systems.

- Data recovery in linear thermoelasticity under stochastic heat flux, where the imposed flux is stochastic.

- Data recovery in coupled heat-thermoelasticity systems under stochastic heat flux, formulated as an incomplete information Nash game.

- Application to robust identification of cracks.
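As announced above, here is a toy sketch of the best-response iteration underlying these Nash game formulations: two players alternately minimize their own strongly convex costs, quadratic stand-ins for the Kohn-Vogelius type functionals, until an equilibrium is reached.

```python
import numpy as np

# Two-player Nash game via best-response (Gauss-Seidel) iteration.
# Player 1 ("data recovery") controls x, player 2 ("identification") controls y.
# J1(x, y) = 0.5 x'Ax + x'y + 0.1|x|^2 - b'x
# J2(x, y) = 0.5 y'Ay - x'y + 0.1|y|^2 - c'y
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])                      # player 1 data term
c = np.array([0.0, 1.0])                      # player 2 data term
M = A + 0.2 * np.eye(2)                       # regularized Hessian of each cost

x, y = np.ones(2), np.ones(2)
for it in range(200):
    x_new = np.linalg.solve(M, b - y)         # argmin_x J1(x, y), y frozen
    y_new = np.linalg.solve(M, c + x_new)     # argmin_y J2(x, y), x frozen
    if np.linalg.norm(x_new - x) + np.linalg.norm(y_new - y) < 1e-12:
        break
    x, y = x_new, y_new
print("Nash equilibrium:", x, y)
```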

Constraint elimination in Quasi-Newton methods.
In single-objective differentiable optimization, Newton's method requires the specification of both the gradient and the Hessian. As a result, the convergence is quadratic, and Newton's method is often considered the target reference. However, in applications to distributed systems, the functions to be minimized are usually "functionals", which depend on the optimization variables through the solution of an often complex set of PDEs, via a chain of computational procedures. Hence, the exact calculation of the full Hessian becomes a complex and costly computational endeavor.

This has fostered the development of quasi-Newton methods that mimic Newton's method but use only the gradient, the Hessian being iteratively constructed by successive approximations inside the algorithm itself. Among such methods, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is well known and commonly employed. In this method, the Hessian is corrected at each new iteration by rank-one matrices defined from several evaluations of the gradient only. The BFGS method has "super-linear convergence".
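As a concrete illustration, here is a hedged, minimal BFGS implementation on the Rosenbrock function (the same test-case mentioned below for the constrained extension); the Armijo line search and tolerances are standard textbook choices, not the team's code.

```python
import numpy as np

def f(z):
    x, y = z
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def grad(z):
    x, y = z
    return np.array([-2 * (1 - x) - 400 * x * (y - x ** 2),
                     200 * (y - x ** 2)])

z = np.array([-1.2, 1.0])
H = np.eye(2)                          # initial inverse-Hessian approximation
g = grad(z)
for it in range(200):
    p = -H @ g                         # quasi-Newton direction
    a = 1.0                            # backtracking (Armijo) line search
    while f(z + a * p) > f(z) + 1e-4 * a * g @ p:
        a *= 0.5
    s = a * p
    z_new = z + s
    g_new = grad(z_new)
    yv = g_new - g
    if yv @ s > 1e-12:                 # curvature condition before updating
        r = 1.0 / (yv @ s)
        I = np.eye(2)                  # BFGS inverse-Hessian update
        H = (I - r * np.outer(s, yv)) @ H @ (I - r * np.outer(yv, s)) \
            + r * np.outer(s, s)
    z, g = z_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break
print(z, f(z))
```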

For constrained problems, certain authors have developed so-called Riemannian BFGS methods, e.g. , that have the desirable convergence property in constrained problems. However, in this approach, the constraints are assumed to be known formally, through explicit expressions.

In collaboration with ONERA-Meudon, we are exploring the possibility of representing constraints, in successive iterations, through local approximations of the constraint surfaces: splitting the design space locally into tangent and normal sub-spaces, eliminating the normal coordinates through a linearization, or more generally a finite expansion, and applying the BFGS method through dependencies on the coordinates in the tangent subspace only. Preliminary experiments on the difficult Rosenbrock test-case, although in low dimensions, demonstrate the feasibility of this approach. Ongoing research aims at establishing the theoretical foundations of this method and at testing cases of higher dimension.

Multi-objective optimization for nanotechnologies.
Our team takes part in a larger collaboration with CEA/LETI (Grenoble), initiated by the Inria Project-Team Nachos (now Atlantis), and related to the Maxwell equations. Our component in this activity relates to the optimization of nanophotonic devices, in particular with respect to the control of thermal loads. We have first identified a gradation of representative test-cases of increasing complexity:

- infrared micro-source;

- micro-photoacoustic cell;

- nanophotonic device.

These cases involve from a few geometric parameters to be optimized up to a functional minimization subject to a finite-element solution involving a large number of degrees of freedom. CEA has such codes at its disposal, but considering the computational cost of the objective functions in the complex cases, the first part of our study focuses on the construction and validation of meta-models, typically of RBF type. Multi-objective optimization will be carried out subsequently by MGDA, and possibly Nash games.

The research conducted with the startup Mycophyto aims at reducing the use of chemical fertilisers and phytopharmaceutical products by developing natural biostimulants (mycorrhizal fungi). It started with the arrival of Khadija Musayeva in October 2020.

Acumes's research activity in traffic modeling and control is intended to improve road network efficiency, thus reducing energy consumption and pollutant emission.

From a medical viewpoint, the virtual fractional flow reserve (vFFR) is a promising technique to support clinicians in cardio-stenting, at low social cost compared to the analogous commercial solutions. Acumes has contributed to improving the involved computational apparatus (nonlinear fluid mechanics with ad hoc boundary conditions).

The research activities related to isogeometric analysis aim at facilitating the use of shape optimization methods in engineering, yielding a gain of efficiency, for instance in transportation industry (cars, aircrafts) or energy industry (air conditioning, turbines).

Let us describe new/updated software.

Traffic control by Connected and Automated Vehicles.

We present a general multi-scale approach for modeling the interaction of connected and automated vehicles (CAVs) with the surrounding traffic flow. The model consists of a scalar conservation law for the bulk traffic, coupled with ordinary differential equations describing the possibly interacting CAV trajectories. The coupling is realized through flux constraints at the moving bottleneck positions, inducing the formation of non-classical jump discontinuities in the traffic density. In turn, CAVs are forced to adapt their speed to the downstream traffic average velocity in congested situations. We analyze the model solutions in a Riemann-type setting, and propose an adapted finite volume scheme to compute approximate solutions for general initial data. This work paves the way to the study of general optimal control strategies for CAV velocities, aiming at improving the overall traffic flow by reducing congestion phenomena and the associated externalities. Controlling the CAV desired speeds allows acting on the system to minimize any traffic-density-dependent cost function. More precisely, we apply Model Predictive Control (MPC) to reduce fuel consumption in congested situations.

This work was partly achieved during C. Daini's internship, see .
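Purely as a schematic illustration of the MPC loop mentioned above, the toy sketch below re-plans, at every step, a desired-speed sequence over a short horizon by exhaustive search; the scalar congestion dynamics and all constants are invented stand-ins for the coupled PDE-ODE model and the fuel-consumption cost.

```python
import numpy as np
from itertools import product

def step(x, u):
    # Toy congestion dynamics: level x grows when the CAV speed u is low,
    # with a small penalty for deviating from a nominal speed (illustrative).
    return x + 0.1 * (0.8 - u) * x + 0.05 * (u - 0.5) ** 2

U = [0.3, 0.5, 0.7, 0.9]              # admissible CAV desired speeds
H = 4                                  # prediction horizon
x = 1.0                                # initial congestion level
for t in range(20):
    best_cost, best_u = np.inf, U[0]
    for seq in product(U, repeat=H):   # exhaustive search (toy problem sizes)
        xs, cost = x, 0.0
        for u in seq:
            xs = step(xs, u)
            cost += xs ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    x = step(x, best_u)                # apply only the first control, re-plan
print("final congestion level:", x)
```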

Traffic flow model calibration by statistical approaches.

In the framework of A. Würth's PhD thesis, we employ a Bayesian approach including a bias term to estimate first- and second-order model parameters, based on two traffic data sets: loop detector data located on the A50 highway between Marseille and Aubagne, provided by DirMED, and publicly available data from the Minnesota Department of Transportation.
In , we propose a Bayesian approach for parameter uncertainty quantification in macroscopic traffic flow models from cross-sectional data. A bias term is introduced and modeled as a Gaussian process to account for the traffic flow model's limitations. We validate the results by comparing the error metrics of both first- and second-order models, showing that second-order models globally perform better in reconstructing traffic quantities of interest.

We also exploit real data to design improved models that better account for observations , .
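For illustration, the hedged sketch below runs a random-walk Metropolis-Hastings sampler to calibrate a single LWR parameter (the free-flow speed) from flux observations; the Gaussian-process bias term of the actual study is omitted, and all data are synthetic.

```python
import numpy as np

rho = np.linspace(0.05, 0.95, 30)                  # observed densities
V_true = 1.2
q_obs = rho * V_true * (1 - rho) + 0.02 * np.random.randn(rho.size)

def log_post(V):
    # Uniform prior on (0, 3) plus Gaussian likelihood for the LWR flux
    if not 0.0 < V < 3.0:
        return -np.inf
    resid = q_obs - rho * V * (1 - rho)
    return -0.5 * np.sum(resid ** 2) / 0.02 ** 2

V, lp, chain = 1.0, log_post(1.0), []
for it in range(20000):
    V_prop = V + 0.05 * np.random.randn()          # random-walk proposal
    lp_prop = log_post(V_prop)
    if np.log(np.random.rand()) < lp_prop - lp:    # Metropolis acceptance
        V, lp = V_prop, lp_prop
    chain.append(V)
chain = np.array(chain[5000:])                     # discard burn-in
print("posterior mean/std of V:", chain.mean(), chain.std())
```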

The co-existence of different geometrical representations in the design loop (CAD-based and mesh-based) is a real bottleneck for the application of design optimization procedures in industry, yielding a major waste of human time to convert geometrical data. Isogeometric analysis methods, which consist in using CAD bases like NURBS in a Finite-Element framework, were proposed a decade ago to facilitate interactions between the geometry and simulation domains.
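As a small illustration of the CAD bases involved, the sketch below evaluates univariate B-spline basis functions by the Cox-de Boor recursion; NURBS are rational combinations of such functions, used jointly for the geometry and the discrete solution in isogeometric methods. The knot vector and degree are arbitrary examples.

```python
import numpy as np

def bspline_basis(i, p, knots, t):
    # Cox-de Boor recursion for the i-th B-spline basis of degree p
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, knots, t))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, knots, t))
    return left + right

knots = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]            # open knot vector, degree 3
for t in (0.0, 0.25, 0.5, 0.75):
    vals = [bspline_basis(i, 3, knots, t) for i in range(5)]
    print(t, vals, "sum =", sum(vals))           # partition of unity on [0, 1)
```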

We investigate the extension of such methods to Discontinuous Galerkin (DG) formulations, which are better suited to hyperbolic or convection-dominated problems. Specifically, we develop a DG method for compressible Euler and Navier-Stokes equations, based on rational parametric elements, that preserves exactly the geometry of boundaries defined by NURBS, while the same rational approximation space is adopted for the solution . The following research axes are considered in this context:

Arbitrary Lagrangian-Eulerian formulation for high-order meshes

To enable the simulation of flows around moving or deforming bodies, an Arbitrary Lagrangian-Eulerian (ALE) formulation is proposed in the context of the isogeometric DG method . It relies on a NURBS-based grid velocity field, integrated along time over moving NURBS elements. The gain of using exact-geometry representations is clearly quantified, in terms of accuracy and computational efficiency . The approach has been applied to the simulation of morphing airfoils .

Geometrically exact sliding interfaces

In the context of rotating machines (compressors, turbines, etc.), computations are achieved using a rotating inner grid interfaced with an outer fixed grid. This coupling is cumbersome using classical piecewise-linear grids, due to the lack of a common geometrical interface. Thus, we have developed a method based on a geometrically exact sliding interface using NURBS elements, ensuring a fully conservative scheme .

Isogeometric shape optimization

We develop an optimization procedure with shape sensitivity analysis, entirely based on NURBS representations . The mesh, the shape to be optimized, as well as the flow solutions are represented by NURBS, which avoids any geometrical conversion and allows exploiting NURBS properties regarding regularity or hierarchy. The approach has also been employed in the framework of Bayesian optimization for airfoil design .

The adjoint equation method, classically employed in design optimization to compute functional gradients, is not well suited to complex unsteady problems, because of the necessity of solving it backward in time. Therefore, we investigate the use of the sensitivity equation method, which is integrated forward in time, in the context of compressible flows.

When shape parameters are considered, the evaluation of flow sensitivities is more difficult, because the equations include an additional term, involving the flow gradient, due to the fact that the parameter affects the location of the boundary condition. To overcome this difficulty, we propose to solve the sensitivity equations using an isogeometric Discontinuous Galerkin (DG) method, which allows accurate estimation of flow gradients at the boundary and the use of boundary control points as shape parameters. First results obtained for the 2D compressible Euler equations exhibit a sub-optimal convergence rate, as expected, but a better accuracy with respect to a classical DG method .

Multi-fidelity Bayesian optimization

The objective of multi-fidelity optimization strategies is to exploit a set of models of different accuracies and costs to accelerate the optimization procedure. In the context of Bayesian optimization, we develop such a multi-fidelity approach based on non-nested evaluations: each time a new evaluation is required, the algorithm selects a new design point associated with a fidelity level, so as to maximize the expected improvement on the finest modeling level. The proposed approach is applied to the fluid-structure optimization of a sailing boat, which is described by five modeling levels. A significant acceleration of the optimization procedure is reported, without loss of accuracy .

Bayesian optimization of nano-photonic devices

In collaboration with Atlantis Project-Team, we consider the optimization of optical meta-surface devices, which are able to alter light properties by operating at nano-scale. In the context of Maxwell equations, modified to account for nano-scale phenomena, the geometrical properties of materials are optimized to achieve a desired electromagnetic wave response, such as change of polarization, intensity or direction. This task is especially challenging due to the computational cost related to the 3D time-accurate simulations, the difficulty to handle the different geometrical scales in optimization and the presence of uncertainties.

First studies, carried out using Bayesian optimization algorithms, demonstrate the potential of the proposed approach . In further studies , , , we tackle robust optimization in the presence of manufacturing uncertainties, and a multi-objective approach for improving RGB lenses.

Bayesian optimization of micro-swimmers

Massively parallel Bayesian optimization

Motivated by a large-scale multi-objective optimization problem for which thousands of evaluations can be conducted in parallel , we develop an efficient approach to tackle this issue in .

CityCOVID is a detailed agent-based model that represents the behaviors and social interactions of 2.7 million residents of Chicago as they move between and colocate in 1.2 million distinct places, including households, schools, workplaces, and hospitals, as determined by individual hourly activity schedules and dynamic behaviors such as isolating because of symptom onset. Disease progression dynamics incorporated within each agent track transitions between possible COVID-19 disease states, based on heterogeneous agent attributes, exposure through colocation, and effects of protective behaviors of individuals on viral transmissibility. Throughout the COVID-19 epidemic, CityCOVID model outputs have been provided to city, county, and state stakeholders in response to evolving decision-making priorities, while incorporating emerging information on SARS-CoV-2 epidemiology. Here we demonstrate our efforts in integrating our high-performance epidemiological simulation model with large-scale machine learning to develop a generalizable, flexible, and performant analytical platform for planning and crisis response.

One way to reduce the time of conducting optimization studies is to evaluate designs in parallel rather than just one-at-a-time. For expensive-to-evaluate black-boxes, batch versions of Bayesian optimization have been proposed. They work by building a surrogate model of the black-box that can be used to select the designs to evaluate efficiently via an infill criterion. Still, with higher levels of parallelization becoming available, the strategies that work for a few tens of parallel evaluations become limiting, in particular due to the complexity of selecting more evaluations. It is even more crucial when the black-box is noisy, necessitating more evaluations as well as repeating experiments. Here we propose a scalable strategy that can keep up with massive batching natively, focused on the exploration/exploitation trade-off and a portfolio allocation. We compare the approach with related methods on deterministic and noisy functions, for mono and multiobjective optimization tasks. These experiments show similar or better performance than existing methods, while being orders of magnitude faster.

A game theoretic perspective on Bayesian multi-objective optimization

Besides Bayesian optimization as above, Gaussian processes are useful for a variety of other related tasks. Here we first present a tutorial on modeling with input-dependent noise, with an implementation in the hetGP R package. Then the estimation of level sets for noisy simulators with complex input noise is studied, before treating sequential design for efficient dimension reduction. The latter is one option among others for high-dimensional GP modeling, for which we review the state of the art.

Heteroskedastic Gaussian process modeling and sequential design

An increasing number of time-consuming simulators exhibit a complex noise structure that depends on the inputs. For conducting studies with limited budgets of evaluations, new surrogate methods are required in order to simultaneously model the mean and variance fields. To this end, in we present the hetGP package implementing many recent advances in Gaussian process modeling with input-dependent noise. First, we describe a simple, yet efficient, joint modeling framework that relies on replication for both speed and accuracy. Then we tackle the issue of data acquisition leveraging replication and exploration in a sequential manner for various goals, such as for obtaining a globally accurate model, for optimization, or for contour finding. Reproducible illustrations are provided throughout.

Evaluating Gaussian Process metamodels and sequential designs for noisy level set estimation

We consider the problem of learning the level set for which a noisy black-box function exceeds a given threshold. To efficiently reconstruct the level set, we investigate Gaussian process (GP) metamodels. Our focus in is on strongly stochastic samplers, in particular with heavy-tailed simulation noise and low signal-to-noise ratio. To guard against noise misspecification, we assess the performance of three variants: (i) GPs with Student-t observations; (ii) Student-t processes (TPs); and (iii) classification GPs modeling the sign of the response. In conjunction with these metamodels, we analyze several acquisition functions for guiding the sequential experimental designs, extending existing stepwise uncertainty reduction criteria to the stochastic contour-finding context. This also motivates our development of (approximate) updating formulas to efficiently compute such acquisition functions. Our schemes are benchmarked by using a variety of synthetic experiments in 1–6 dimensions. We also consider an application of level set estimation for determining the optimal exercise policy of Bermudan options in finance.

Sequential learning of active subspace

Sensitivity prewarping for local surrogate modeling

In the continual effort to improve product quality and decrease operations costs, computational modeling is increasingly being deployed to determine feasibility of product designs or configurations. Surrogate modeling of these computer experiments via local models, which induce sparsity by only considering short range interactions, can tackle huge analyses of complicated input-output relationships. However, narrowing focus to local scale means that global trends must be re-learned over and over again. In , we propose a framework for incorporating information from a global sensitivity analysis into the surrogate model as an input rotation and rescaling preprocessing step. We discuss the relationship between several sensitivity analysis methods based on kernel regression before describing how they give rise to a transformation of the input variables. Specifically, we perform an input warping such that the "warped simulator" is equally sensitive to all input directions, freeing local models to focus on local dynamics. Numerical experiments on observational data and benchmark test functions, including a high-dimensional computer simulator from the automotive industry, provide empirical validation.

A survey on high-dimensional Gaussian process modeling with application to Bayesian optimization

This work concerns the development of black-box optimization methods based on single-step deep reinforcement learning (DRL) and their conceptual similarity to evolution strategy (ES) techniques . The connection of policy-based optimization (PBO) to evolutionary strategies (especially covariance matrix adaptation evolutionary strategy) is discussed. Relevance is assessed by benchmarking PBO against classical ES techniques on analytic functions minimization problems, and by optimizing various parametric control laws intended for the Lorenz attractor. This contribution definitely establishes PBO as a valid, versatile black-box optimization technique, and opens the way to multiple future improvements building on the inherent flexibility of the neural networks approach.

Our long-term aim is to contribute to Multidisciplinary Optimization (MDO), although in this area we have not yet been able to address problems governed by one or more PDE systems. In the perspective of this ambitious target, we observe that calculating a Pareto front associated with more than two cost functions is a complex simulation enterprise, seldom accomplished in full-size engineering problems . Analyzing the result in three or more dimensions is not a simple task either. Additionally, in many physical situations, the computational challenge of directly accounting for three or more criteria may be superfluous from the start: the performance of a complex system can often be evaluated first by a reduced set of criteria (say two or three), and the other criteria be introduced in a second step only, as an adaptive refinement. Our method addresses precisely this problem.

A numerical method has been developed to conduct multi-objective optimization in two phases. In the first phase, the primary cost functions, considered of preponderant importance, are minimized under constraints by some effective optimizer of appropriate type (gradient-based, genetic, or Bayesian). From a selected Pareto-optimal point, a path of Nash equilibria parametrized by a new variable ε is then constructed in the second phase, along which the secondary cost functions are reduced.

The formulation is "compatible" with the first phase of optimization, in the sense that the selected initial point is indeed the Nash equilibrium point achieved by the formulation for ε = 0; (i) the secondary cost functions diminish linearly with ε, while (ii) the Pareto-optimality condition of the primary cost functions is only degraded by a term of order ε².

A special chapter of the software platform has been developed with the assistance of the Inria Service for Software Development and Experimentation to facilitate the application of this strategy by external users. (See Section Software).

The method was successfully applied in two problems of technical relevance:

This promising method is currently being applied to another aircraft performance optimization in cooperation with Onera Toulouse (N. Bartoli, Ch. David, S. Defoort). In this case study, we are using the open-source Fast-OAD software developed by Onera to evaluate the performance (two masses at take-off, and the ascent time) and our platform to accomplish the prioritized optimization, aiming at documenting a reproducible case study, and vitalizing a technical cooperation with Onera.

We extend our results published in in two directions, to tackle ill-posed Cauchy-Stokes inverse problems as Nash games. First, we consider the problem of detecting unknown pointwise sources in a stationary viscous fluid, using partial boundary measurements. The considered fluid obeys a steady Stokes regime, and the boundary measurements are a single compatible pair of Dirichlet and Neumann data, available only on a partial, accessible part of the whole boundary. This inverse source identification for the Cauchy-Stokes problem is ill-posed for both the source and missing data reconstructions, and designing stable and efficient algorithms is challenging. We reformulate the problem as a three-player Nash game. Thanks to a source identifiability result derived for the Cauchy-Stokes problem, it is enough to set up two Stokes boundary value problems, then use them as state equations. The Nash game is then set between three players, the first two targeting the data completion while the third one targets the detection of the number, location and magnitude of the unknown sources. We provide the third player with the location and magnitude parameters as strategies, with a cost functional of Kohn-Vogelius type. In particular, the location is obtained through the computation of the topological sensitivity of the latter function. We propose an original algorithm, which we implemented using FreeFem++. We present 2D numerical experiments for many different test-cases. The obtained results corroborate the efficiency of our three-player Nash game approach for solving parameter or shape identification in Cauchy-Stokes problems .

The second direction is dedicated to the solution of the data completion problem for nonlinear flows. We consider two kinds of nonlinearities, leading either to a non-Newtonian Stokes flow or to the Navier-Stokes equations. Our recent numerical results show that it is possible to perform a one-shot approach using Nash games: players exchange their respective state information and solve linear systems. At convergence to a Nash equilibrium, the states converge to the solution of the nonlinear systems. To the best of our knowledge, this is the first time such an approach is applied to solve inverse problems for nonlinear systems , .

We have introduced and analyzed a non-linear Crank-Nicolson finite difference scheme, dedicated to the numerical solution of the Fisher-KPP equation, a non-linear parabolic reaction-diffusion equation we have formerly used to model wound closure in the absence and presence of activators or inhibitors , . For the present numerical analysis, we take into consideration mixed boundary conditions. We first established that the non-linear discretized system is well-posed, and proved both its consistency and, using an energy functional, its stability. We also proved its second-order convergence in the ad hoc Sobolev norm. At each time step, the non-linear (scalar) problem was solved by means of an exact Newton method.

Numerical investigations corroborate the theoretical error estimates and convergence order. A challenging perspective is to analyse the numerical schemes dedicated to non-constant diffusion-proliferation parameters.
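A minimal sketch of such a scheme is given below, assuming for simplicity homogeneous Neumann boundary conditions (instead of the mixed conditions analyzed in the paper) and illustrative parameter values: each Crank-Nicolson step for the Fisher-KPP equation u_t = D u_xx + r u (1 - u) is solved by Newton's method with the exact Jacobian.

```python
import numpy as np

D, r = 0.1, 1.0
N, L, T, dt = 100, 10.0, 5.0, 0.05
dx = L / (N - 1)
x = np.linspace(0, L, N)
u = 1.0 / (1.0 + np.exp(5 * (x - 2)))            # initial traveling front

# Discrete Laplacian with homogeneous Neumann boundary conditions
A = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx ** 2
A[0, 1] *= 2
A[-1, -2] *= 2

def residual(v, u_old):
    # Crank-Nicolson residual for u_t = D u_xx + r u (1 - u)
    rhs = lambda w: D * A @ w + r * w * (1 - w)
    return v - u_old - 0.5 * dt * (rhs(v) + rhs(u_old))

for step in range(int(T / dt)):
    v = u.copy()
    for it in range(20):                          # Newton iterations
        J = np.eye(N) - 0.5 * dt * (D * A + r * np.diag(1 - 2 * v))
        dv = np.linalg.solve(J, -residual(v, u))
        v += dv
        if np.linalg.norm(dv) < 1e-12:
            break
    u = v
print("front position ~", x[np.argmin(np.abs(u - 0.5))])
```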

Moving from the above well-established PDE models of cell dynamics, we develop a hybrid model coupling agent-based modeling (ABM) to PDEs: an ABM-PDE multi-scale tumor growth model is developed in , where the micro and macro scales communicate through a hybrid formulation: cells are microscopic agents, with the ABM handling complex cell-cell interactions, while the nutrient concentration is a macroscopic field whose evolution is governed by reaction-diffusion PDEs.

Project OPERA (2019-2021): Adaptive planar optics

This project brings together the Inria teams ATLANTIS, ACUMES and HIEPACS, the CNRS CRHEA laboratory, and the company NAPA. Its objective is the characterization and design of new meta-surfaces for optics.