TONUS started in January 2014. It is a team of the Inria Nancy-Grand Est center, located in the mathematics institute (IRMA) of the University of Strasbourg.

The International Thermonuclear Experimental Reactor (ITER) is a large-scale scientific experiment that aims to demonstrate that it is possible to produce energy from fusion, by confining a very hot hydrogen plasma inside a toroidal chamber called a tokamak. In addition to physics and technology research, tokamak design also requires mathematical modelling and numerical simulations on supercomputers.

The objective of the TONUS project is to deal with such mathematical and computing
issues. We are mainly interested in kinetic, gyrokinetic and fluid simulations of
tokamak plasmas. In the TONUS project-team we are working on the development of
new numerical methods devoted to such simulations. We investigate several
classical plasma models, study new reduced models and new numerical
schemes adapted to these models.
We implement our methods in software projects such as Selalib, described below.

We have strong relations with the CEA-IRFM team and participate in the development of their gyrokinetic simulation software GYSELA. We are involved in two Inria Project Labs, devoted respectively to tokamak mathematical modelling and to high performance computing. The numerical tools developed for plasma physics can also be applied in other contexts. For instance, we collaborate with a small company in Strasbourg specialized in numerical software for applied electromagnetism. We also study kinetic acoustic models with the CEREMA and multiphase flows with EDF.

Finally, our topics of interest lie at the intersection of mathematics, computer science, high performance computing, physics and practical applications.

The fundamental model for plasma physics is the coupled Vlasov-Maxwell kinetic model: the Vlasov equation describes the distribution function of particles (ions and electrons), while the Maxwell equations describe the electromagnetic field. In some applications, it may be necessary to take relativistic particles into account, which leads to considering the relativistic Vlasov equation, although tokamak plasmas are generally assumed to be non-relativistic. The distribution function of particles depends on seven variables (three for space, three for velocity and one for time), which makes simulations computationally very demanding.

To these equations we must add several types of source terms and boundary conditions for representing the walls of the tokamak, the applied electromagnetic field that confines the plasma, fuel injection, collision effects, etc.

Tokamak plasmas possess particular features, which require developing specialized theoretical and numerical tools.

Because the magnetic field is strong, the particle trajectories have a very fast rotation around the magnetic field lines. A full resolution would require a prohibitive amount of computation. It is then necessary to develop reduced models for large magnetic fields in order to obtain tractable calculations. The resulting model is called a gyrokinetic model. It allows us to reduce the dimensionality of the problem. Such models are implemented in GYSELA and Selalib.
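The scale disparity can be illustrated with a minimal sketch (our own toy example, assuming a uniform magnetic field, unit charge-to-mass ratio, and the standard Boris rotation, not the actual GYSELA or Selalib integrators): resolving the fast gyration forces a time step far smaller than the confinement time scales of interest, which is what gyrokinetic averaging removes.

```python
import numpy as np

def boris_rotate(v, qm, B, dt):
    """One Boris rotation of the velocity around a uniform magnetic
    field B (no electric field); it preserves |v| exactly."""
    t = qm * np.asarray(B) * dt / 2.0
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v + np.cross(v, t)
    return v + np.cross(v_prime, s)

# a particle gyrating in B = (0, 0, 10): gyrofrequency 10, Larmor radius 0.1
qm = 1.0
B = np.array([0.0, 0.0, 10.0])
dt = 1e-3                       # must resolve the fast rotation
x = np.zeros(3)
v = np.array([1.0, 0.0, 0.0])
traj = [x.copy()]
for _ in range(2000):           # a few gyration periods
    v = boris_rotate(v, qm, B, dt)
    x = x + v * dt
    traj.append(x.copy())
traj = np.array(traj)
```

The orbit stays within a circle of diameter twice the Larmor radius |v|/(qm |B|) = 0.1; a gyrokinetic model averages this fast rotation out instead of resolving it step by step.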

On the boundary of the plasma, collisions can no longer be neglected. Fluid models, such as MagnetoHydroDynamics (MHD), become relevant again. For the good operation of the tokamak, it is necessary to control MHD instabilities that arise at the plasma boundary. Computing these instabilities requires special implicit numerical discretizations with excellent long-time behavior.

In addition to theoretical modelling tools, it is necessary to develop numerical schemes adapted to kinetic, gyrokinetic and fluid models. Three kinds of methods are studied in TONUS: Particle-In-Cell (PIC) methods, semi-Lagrangian and fully Eulerian approaches.

In most phenomena where oscillations are present, a three-model hierarchy can be established.

The Strasbourg team has a long and recognized experience in numerical methods for Vlasov-type equations. We are specialized in both particle and phase-space solvers for the Vlasov equation: Particle-in-Cell (PIC) methods and semi-Lagrangian methods. We also have a long-standing collaboration with CEA Cadarache on the development of the GYSELA software for gyrokinetic tokamak plasmas.

The Vlasov and gyrokinetic models are partial differential equations that express the transport of the distribution function in phase space. In the original Vlasov case, the phase space is the six-dimensional position-velocity space. For the gyrokinetic model, the phase space is five-dimensional, because only the velocity parallel to the magnetic field and the gyrokinetic angular velocity are considered, instead of three velocity components.

A few years ago, Eric Sonnendrücker and his collaborators introduced a new family of methods for solving transport equations in phase space: the semi-Lagrangian methods. The principle of these methods is to solve the equation on a grid of the phase space. The grid points are transported with the flow of the transport equation for a time step and interpolated back periodically onto the initial grid. The method is thus a mix of Lagrangian particle methods and Eulerian methods. The characteristics can be solved forward or backward in time, leading to the Forward Semi-Lagrangian (FSL) or Backward Semi-Lagrangian (BSL) schemes. Conservative schemes based on this idea can also be developed; they are called Conservative Semi-Lagrangian (CSL) schemes.
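As an illustration, here is a minimal backward semi-Lagrangian step for constant-coefficient 1D advection on a periodic grid (a toy sketch with our own names and parameters, not the Selalib implementation): each grid point is traced backward along its characteristic and the solution is interpolated at the foot.

```python
import numpy as np

def bsl_step(f, a, dx, dt):
    """Backward semi-Lagrangian step for f_t + a f_x = 0 on a periodic
    grid: trace each grid point back along the characteristic and
    interpolate f linearly at the foot."""
    n = len(f)
    x = np.arange(n) * dx
    feet = (x - a * dt) % (n * dx)       # feet of the characteristics
    j = np.floor(feet / dx).astype(int)
    w = feet / dx - j                    # linear interpolation weight
    return (1.0 - w) * f[j % n] + w * f[(j + 1) % n]

# advect a bump over one full period at CFL = 2: the time step is not
# limited by a CFL condition, a key advantage of semi-Lagrangian schemes
n = 128
dx = 1.0 / n
x = np.arange(n) * dx
f0 = np.exp(-100.0 * (x - 0.5) ** 2)
a = 1.0
dt = 2.0 * dx / a
f = f0.copy()
for _ in range(n // 2):                  # a * dt * (n // 2) = one period
    f = bsl_step(f, a, dx, dt)
```

After one full period the bump returns to its initial position (here exactly, because with this particular dt the characteristic feet land on grid points).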

GYSELA is a 5D full gyrokinetic code based on a classical backward semi-Lagrangian (BSL) scheme for the simulation of core turbulence; it has been developed at CEA Cadarache in collaboration with our team.

More recently, we have started to apply semi-Lagrangian methods to more general kinetic equations. Indeed, most of the conservation laws of physics can be represented by a kinetic model with a small set of velocities and relaxation source terms. Compressible fluids or MHD equations have such representations. Semi-Lagrangian methods then become a very appealing and efficient approach for solving these equations.

Historically, PIC methods have been very popular for solving the Vlasov equations. They allow solving the equations in phase space at a relatively low cost. The main disadvantage of this approach is that, due to its random aspect, it produces significant numerical noise that has to be controlled in some way, for instance by regularization of the particles, or by divergence correction techniques in the Maxwell solver. We have long-standing experience in PIC methods and have started implementing them in Selalib. An important aspect is to adapt the method to new multicore computers; see the work by Crestetto and Helluy.
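For illustration, here is a minimal cloud-in-cell charge deposition, the particle-to-grid coupling at the heart of PIC codes (a sketch in our own notation, not Selalib's API):

```python
import numpy as np

def deposit_cic(xp, qp, n, dx):
    """Cloud-in-cell deposition on a periodic 1D grid: each particle
    spreads its charge linearly over its two neighbouring grid nodes.
    np.add.at makes the scatter correct for repeated indices."""
    rho = np.zeros(n)
    cell = np.floor(xp / dx).astype(int)
    w = xp / dx - cell                   # distance to the left node
    np.add.at(rho, cell % n, qp * (1.0 - w))
    np.add.at(rho, (cell + 1) % n, qp * w)
    return rho / dx                      # charge density

# usage: 10^4 particles of unit total charge on a 64-cell periodic grid
rng = np.random.default_rng(0)
n, L, npart = 64, 1.0, 10_000
dx = L / n
xp = rng.uniform(0.0, L, npart)
rho = deposit_cic(xp, 1.0 / npart, n, dx)
```

The linear shape function is the simplest regularization of the particles mentioned above; higher-order splines reduce the numerical noise further, at the price of a wider stencil.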

As already said, computer simulations of kinetic plasmas are very demanding, because of the gyrokinetic turbulence. In some situations, it is possible to make assumptions on the shape of the distribution function that simplify the model. We obtain in this way a family of fluid or reduced models.

Assuming that the distribution function has a Maxwellian shape, for instance, we obtain the MagnetoHydroDynamic (MHD) model. It is physically valid only in some parts of the tokamak (at the edges for instance). The fluid model is generally obtained from the hypothesis that the collisions between particles are strong.

But the reduction is not necessarily a consequence of collisional effects. Indeed, even without collisions, the plasma may still relax to an equilibrium state over sufficiently long time scales (Landau damping effect).

In the fluid or reduced-kinetic regions, the approximation of the distribution function could require fewer data while still achieving a good representation, even in the collisionless regime.

Therefore, a fluid or reduced model is a model in which the explicit dependency on the velocity variable is removed. In a more mathematical way, we consider that, in some regions of the plasma, it is possible to exhibit a (preferably small) set of parameters that describes the distribution function. In this case it is sufficient to solve for these parameters instead of the full distribution function.
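As a concrete example of such a parameter set, the first velocity moments (density, mean velocity, temperature) fully characterize a Maxwellian distribution; a small sketch of their computation on a velocity grid (our own toy setup):

```python
import numpy as np

# a Maxwellian with density 2, mean velocity 0.5 and temperature 1.5,
# sampled on a uniform velocity grid wide enough to capture the tails
v = np.linspace(-10.0, 10.0, 400)
dv = v[1] - v[0]
rho, u, T = 2.0, 0.5, 1.5
f = rho / np.sqrt(2.0 * np.pi * T) * np.exp(-((v - u) ** 2) / (2.0 * T))

# the reduced parameters are velocity moments of the distribution
m0 = np.sum(f) * dv                        # density
m1 = np.sum(v * f) * dv / m0               # mean velocity
m2 = np.sum((v - m1) ** 2 * f) * dv / m0   # temperature
```

A fluid model evolves only (m0, m1, m2) in space and time, whereas the kinetic model evolves the whole of f: this is the data reduction discussed above.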

Another way to reduce the model is to find an abstract kinetic representation with as small a set of kinetic velocities as possible. The kinetic approach then has only a mathematical meaning. It allows solving many equations of physics very efficiently.

As previously indicated, an efficient method for solving the reduced models is the Discontinuous Galerkin (DG) approach. It can be made of arbitrary order. It requires limiters when applied to nonlinear PDEs occurring for instance in fluid mechanics. But the reduced models that we intend to write are essentially linear; the nonlinearity is concentrated in a few coupling source terms.

In addition, this method, when written in a special set of variables called the entropy variables, has nice properties concerning the entropy dissipation of the model. It opens the door to constructing numerical schemes with good conservation properties and no entropy dissipation, as already done for other systems of PDEs.

In tokamaks, the reduced model generally involves many time scales. Among these, many of them, associated with the fastest waves, are not relevant. In order to filter them out, it is necessary to adopt time-implicit solvers. When the reduced model is based on a kinetic interpretation, it is possible to construct implicit schemes that do not require solving costly linear systems. In addition, the resulting solver is stable even at very high CFL numbers.

Precise resolution of the electromagnetic fields is essential for proper plasma simulation. It is thus important to use efficient solvers for the Maxwell system and its asymptotic regimes: the Poisson equation and magnetostatics.

The proper coupling of the electromagnetic solver with the Vlasov solver is also crucial for ensuring conservation properties and stability of the simulation.

Finally, plasma physics implies very different time scales. It is thus very important to develop implicit Maxwell solvers and Asymptotic Preserving (AP) schemes in order to obtain good behavior on long time scales.

The coupling of the Maxwell equations to the Vlasov solver requires some precautions. The most important one is to control the charge conservation errors, which are related to the divergence conditions on the electric and magnetic fields. We will generally use divergence correction tools for hyperbolic systems presented for instance in (and the references therein).

As already pointed out, in a tokamak the plasma presents several different space and time scales. It is not possible in practice to solve the initial Vlasov-Maxwell model. It is first necessary to establish asymptotic models by letting some parameters (such as the Larmor frequency or the speed of light) tend to infinity. This is the case for the electromagnetic solver, and it requires implementing implicit time solvers in order to efficiently capture the stationary state, the solution of the magnetic induction equation, or the Poisson equation.

The search for alternative energy sources is a major issue for the future. Among others, controlled thermonuclear fusion in a hot hydrogen plasma is a promising possibility. The principle is to confine the plasma in a toroidal chamber, called a tokamak, and to attain the temperatures necessary to sustain nuclear fusion reactions. The International Thermonuclear Experimental Reactor (ITER) is a tokamak being constructed in Cadarache, France, as the result of a joint decision by an international consortium made of the European Union, Canada, the USA, Japan, Russia, South Korea, India and China. ITER is a huge project. As of today, the budget is estimated at 20 billion euros. The first plasma shot is planned for 2020 and the first deuterium-tritium operation for 2027.

Many technical and conceptual difficulties have to be overcome before the actual exploitation of fusion energy. Consequently, much research has been carried out around magnetically confined fusion. Among these studies, it is important to carry out computer simulations of the burning plasma; thus, mathematicians and computer scientists are also needed in the design of ITER. The reliability and precision of numerical simulations allow a better understanding of the physical phenomena and thus lead to better designs. TONUS's main involvement is in such research.

The temperatures required to attain fusion are very high, of the order of a hundred million degrees. It is therefore imperative to prevent the plasma from touching the tokamak inner walls. This confinement is obtained thanks to intense magnetic fields. The magnetic field is created by poloidal coils, which generate the toroidal component of the field. The toroidal plasma current also induces a poloidal component of the magnetic field that twists the magnetic field lines. This twisting is very important for the stability of the plasma; the idea goes back to research by the Russian physicists Tamm and Sakharov in the 1950s.
Other devices are essential for the proper operation of the tokamak: divertor for collecting the escaping particles, microwave heating for reaching higher temperatures, fuel injector for sustaining the fusion reactions, toroidal coils for controlling instabilities, etc.

The software and numerical methods that we develop can also be applied to other fields of physics or of engineering.

For instance, we collaborate with the company AxesSim in Strasbourg on the development of efficient Discontinuous Galerkin (DG) solvers on hybrid computers. The applications are electromagnetic simulations for the design of antennas and electronic devices, and for aircraft electromagnetic compatibility.

The acoustic design of large rooms requires huge numerical simulations. It is not always possible to solve the full wave equation, and many reduced acoustic models have been developed. A popular model consists in considering "acoustic" particles moving at the speed of sound. The resulting Partial Differential Equation (PDE) is very similar to the Vlasov equation. The same modelling is used in radiation theory. We have started to work on the reduction of the acoustic particle model and realized that our reduction approach applies perfectly to this situation. A PhD with CEREMA (Centre d'études et d'expertise sur les risques, l'environnement, la mobilité et l'aménagement) started in October 2015 (PhD of Pierre Gerhard). The objective is to investigate the model reduction and to implement the resulting acoustic model in our DG solver.

In September 2017, we started a collaboration with EDF Chatou (PhD of Lucie Quibel) on the modelling of multiphase fluids with complex equations of state. The goal is to simulate the high-temperature liquid-vapor flows occurring in a nuclear plant. Among others, we will apply our recent kinetic method to designing efficient implicit schemes for this kind of flow.

We have provided a new rigorous stability analysis of boundary conditions in kinetic relaxation methods. This analysis allows us to design stable and high-order boundary conditions for this kind of scheme, which should lead to many practical applications in the coming years.

Bruno Weber has been able to run the CLAC software, jointly developed with the AxesSim company, to simulate the interaction of a Bluetooth antenna with a full human body. The computations were done on the supercomputer Piz Daint, ranked 5th in the "Top 500" list.

*Conservation Laws Approximation on many Cores*

Scientific Description: It is clear now that future computers will be made of a collection of thousands of interconnected multicore processors. Globally, such a computer appears as a classical distributed-memory MIMD machine. But at a lower level, each of the multicore processors is itself made of a shared-memory MIMD unit (a few classical CPU cores) and a SIMD unit (a GPU). When designing new algorithms, it is important to adapt them to this kind of architecture. Our philosophy is to program our algorithms in such a way that they can be run efficiently on this kind of computer. Practically, we use the MPI library for managing the coarse-grain parallelism, while the OpenCL library efficiently operates the fine-grain parallelism.

For several years now we have invested in scientific computing on GPUs, using the open standard OpenCL (Open Computing Language). We were recently awarded a prize in the international AMD OpenCL innovation challenge, thanks to an OpenCL two-dimensional Vlasov-Maxwell solver that runs entirely on a GPU. OpenCL is a very interesting tool because it is an open standard, now available on almost all brands of multicore processors and GPUs. The same parallel program can run on a GPU or a multicore processor without modification.

Because of the envisaged applications of CLAC, which may be either academic or commercial, it is necessary to design a modular framework. The heart of the library is made of generic parallel algorithms for solving conservation laws. The parallelism can be both fine-grained (oriented towards GPUs and multicore processors) and coarse-grained (oriented towards GPU clusters). Separate modules manage the meshes and some specific applications. In this way, it is possible to isolate the parts that should be protected for trade-secret reasons.

Functional Description: CLAC is a generic Discontinuous Galerkin solver, written in C/C++, based on the OpenCL and MPI frameworks.

Partner: AxesSim

Contact: Philippe Helluy

*SEmi-LAgrangian LIBrary*

Keywords: Plasma physics - Semilagrangian method - Parallel computing - Plasma turbulence

Scientific Description: The objective of the Selalib project (SEmi-LAgrangian LIBrary) is to develop a well-designed, organized and documented library implementing several numerical methods for kinetic models of plasma physics. Its ultimate goal is to produce gyrokinetic simulations.

Another objective of the library is to provide physicists with easy-to-use gyrokinetic solvers, based on the semi-Lagrangian techniques developed by Eric Sonnendrücker and his collaborators in the former CALVI project. The new models and schemes from TONUS are also intended to be incorporated into Selalib.

Functional Description: Selalib is a collection of modules conceived to aid in the development of plasma physics simulations, particularly in the study of turbulence in fusion plasmas. Selalib offers basic capabilities from general and mathematical utilities and modules to aid in parallelization, up to pre-packaged simulations.

Partners: Max Planck Institute - Garching - Université de Strasbourg

Contact: Philippe Helluy

*Solver for Conservative Hyperbolic Nonlinear Applications for PlasmaS*

Keywords: Discontinuous Galerkin - StarPU - Kinetic scheme

Functional Description: Generic systems of conservation laws. Specific models: fluids, Maxwell, Vlasov, acoustics (with kinetic representation). Multitasking with StarPU. Explicit solvers (RK2, RK3, RK4) accelerated with OpenCL; implicit solvers through kinetic representations and palindromic time integration.

Contact: Philippe Helluy

Keywords: Python - OpenCL

Functional Description: The code Slappy solves advection equations on multi-patch, non-conforming complex geometries with the semi-Lagrangian method. It can also treat some hyperbolic/parabolic PDEs with the approximate BGK method, which rewrites a PDE as a transport step plus a local relaxation step. The code is written in PyOpenCL and can be used on CPUs and GPUs.

Contact: Emmanuel Franck

*Parallel Task in Python*

Keywords: Python - Parallel computing - High order time schemes

Functional Description: Patapon is a code in PyOpenCL that solves PDEs such as the MHD equations using the vectorial Lattice Boltzmann method on Cartesian grids.

Contact: Philippe Helluy

For two years we have been working on implicit relaxation methods for solving hyperbolic PDEs without CFL restriction and without matrices to invert. The palindromic BGK method approximates a hyperbolic system by a larger set of transport equations coupled by a nonlinear source term that relaxes the variables towards an equilibrium. Using a splitting scheme, we can solve these transport equations in parallel and then perform the local relaxation in a second step. The high-order extension is obtained by a symmetric modified Strang splitting and composition methods.
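A minimal sketch of the transport/relaxation splitting for 1D linear advection (our own toy example, not the production code): two kinetic velocities ±λ, exact transport by grid shifts, then a local relaxation to equilibrium. With full relaxation (ω = 1, used here for simplicity) the scheme is first order; the over-relaxation ω = 2 of the palindromic method gives second order.

```python
import numpy as np

# approximate u_t + c u_x = 0 by two transport equations with kinetic
# velocities +lam and -lam, coupled by relaxation to the equilibria
# feq_plus = u/2 + c*u/(2*lam) and feq_minus = u/2 - c*u/(2*lam)
n, L = 200, 1.0
dx = L / n
lam, c = 1.0, 0.5                  # kinetic speed and physical speed (|c| < lam)
dt = dx / lam                      # transport is then an exact grid shift
x = (np.arange(n) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)

fp = u / 2.0 + c * u / (2.0 * lam)     # start at equilibrium
fm = u / 2.0 - c * u / (2.0 * lam)
for _ in range(80):                    # advect over a time 80 * dt = 0.4
    fp = np.roll(fp, 1)                # exact transport at speed +lam
    fm = np.roll(fm, -1)               # exact transport at speed -lam
    u = fp + fm                        # macroscopic variable
    fp = u / 2.0 + c * u / (2.0 * lam) # relaxation to equilibrium (omega = 1)
    fm = u / 2.0 - c * u / (2.0 * lam)
u = fp + fm
```

Each substep is either a pure shift or a purely local update, so no linear system is ever assembled or inverted, which is precisely the appeal of the method for implicit-in-time solvers. The bump is transported at speed c = 0.5, from x = 0.3 to x = 0.5.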

**Participants**: Florence Drui, Emmanuel Franck, Philippe Helluy, Laurent Navoret.

One of the drawbacks of the palindromic BGK model is the treatment of the boundary conditions. Indeed, the BGK scheme has more variables than the original model, and the boundary conditions for these additional variables are not defined. The classical choice is to impose the equilibrium at the boundary, but in this case we obtain instabilities and only first-order convergence. After an analysis of the symmetric modified Strang splitting method, we have identified the dynamics of the non-physical variables and proposed boundary conditions compatible with these dynamics. We obtain stable, second-order boundary conditions.

**Participants**: Clémentine Courtès (IRMA), Emmanuel Franck, Philippe Helluy, Laurent Navoret.

Another drawback of the method is its application to "two-scale" problems like low-Mach flows. Indeed, in this case the BGK representation used generates a large error on the slow scale, an error which is homogeneous to the fast scale; consequently the slow scale is not well resolved. This problem comes from the fact that the BGK approximation uses a linearization with a constant fast scale to approximate all the systems. We have proposed a new method in which we also introduce a slow scale in the BGK approximation. With it, we obtain accurate results for the Euler equations in the low-Mach regime in 1D. The method also gives interesting results for other applications. In the future we will extend the method to 2D.

**Participants**: Laura Mendoza, Emmanuel Franck, Laurent Navoret.

In MHD simulations for ITER, we must also discretize anisotropic diffusion with an implicit scheme. As a first step, we have proposed to extend the palindromic BGK method to parabolic problems, which requires a different palindromic BGK model with specific parameters. We obtain a second-order scheme without CFL restriction for the heat equation in 1D and 2D. In the future we will consider high-order schemes and the extension to the anisotropic case.

**Participants**: Laura Mendoza, Emmanuel Franck, Philippe Helluy.

To apply the palindromic BGK method, we need an advection solver without CFL restriction. In the code Slappy we propose a 3D high-order semi-Lagrangian solver able to treat block-structured meshes with overlap and non-conformity, which makes it easy to handle complex geometries. The solver is written in PyOpenCL and can be used on GPUs. The relaxation step is also implemented in the code, which allows using the palindromic BGK method on some PDEs (Euler, diffusion, etc.).

**Participants**: Florence Drui, Emmanuel Franck, Philippe Helluy.

In the same spirit, another code has been developed to treat hyperbolic systems with the BGK approach. In this case the transport is exact, and the method is consequently equivalent to the Lattice Boltzmann scheme. The parallel part is similar and also based on PyOpenCL. This version is less accurate than the previous code and can be used only on Cartesian grids, but it is more stable and can run more complex problems. The main result is the simulation of 2D resistive MHD instabilities, which have the same structure as tokamak instabilities.

**Participants**: Emmanuel Franck

The JOREK code is the main European code for the simulation of tokamak instabilities. The inversion of the full matrix is based on a block Jacobi preconditioner, which is inefficient in some cases and very memory-hungry. To address this problem, we investigate splitting schemes that allow solving some simple subsystems separately. The splitting scheme has been tested on the first MHD model of JOREK in the quasi-linear case. In this regime the splitting gives good results, since the accuracy is close to that of the original fully implicit solver. The nonlinear case is currently under study.

**Participants**: Emmanuel Franck, Eric Sonnendrücker (IPP), Mustafa Gaja (IPP)

Work on compatible finite elements for MHD has continued. This method preserves the energy balance and the divergence-free constraints with high-order finite elements on complex geometries. It is coupled with a splitting between the different physical parts and a nonlinear solver. The method gives the expected results for the Maxwell and acoustic equations, and also gives good results for the nonlinear acoustic part of the MHD model. The magnetic and convective parts of the MHD model are currently under study.

**Participants**: Emmanuel Franck, Laurent Navoret.

To apply the previous method, we must solve a nonlinear problem. A parabolization method allows reducing the dimension of the implicit problem. However, the problem is still nonlinear, and ill-conditioned for strong gradients of the physical quantities. To avoid this, we propose a new relaxation method, for the Euler equations as a first step, which linearizes the acoustic part while preserving the low-Mach limit (the relevant regime for our application). This relaxation method yields a well-conditioned, linear implicit part. The method is validated in 1D and 2D in a finite volume context, and will be extended to high-order schemes and to the MHD model.

**Participants**: Michel Mehrenberger, Laurent Navoret, Nhung Pham (IRMA)

In this work, we focus on one difficulty arising in the numerical simulation of the Vlasov-Poisson system: when using a regular grid-based solver with periodic boundary conditions, perturbations present at the initial time artificially reappear at a later time. For a regular finite-element mesh in velocity, we show that this recurrence time is actually linked to the spectral accuracy of the velocity quadrature when computing the charge density. In particular, choosing trigonometric quadrature weights optimally defers the occurrence of the recurrence phenomenon. Numerical results using both the semi-Lagrangian Discontinuous Galerkin and the Finite Element / Semi-Lagrangian methods have been carried out and confirm the analysis.
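The phenomenon can be reproduced in a few lines (a toy free-streaming computation of our own, independent of any particular solver): on a uniform velocity grid of spacing Δv, the quadrature of the phase-mixed density mode is artificially periodic with recurrence time T_R = 2π/(k Δv).

```python
import numpy as np

k = 0.5                                  # wavenumber of the perturbation
nv = 64
v = np.linspace(-6.0, 6.0, nv, endpoint=False)   # uniform velocity grid
dv = v[1] - v[0]
T_rec = 2.0 * np.pi / (k * dv)           # predicted recurrence time

def density_perturbation(t):
    """Quadrature of the free-streaming mode cos(k v t) over the grid:
    phase mixing makes the exact integral decay, but the discrete sum
    reconstructs the perturbation artificially at t = T_rec."""
    return abs(np.sum(np.cos(k * v * t)) * dv)
```

Halfway to the recurrence time the discrete density perturbation has essentially vanished, as it should by phase mixing, yet at T_rec it returns to its full initial amplitude: this is the artificial recurrence that better-chosen quadrature weights defer.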

**Participants**: Mehdi Badsi (Nantes University), Michel Mehrenberger, Laurent Navoret

We are interested in developing a numerical method for capturing the stationary sheaths that a plasma forms in contact with a metallic wall. This work is based on a previously proposed bi-species (ion/electron) Vlasov-Ampère model. The main question addressed in this work is whether classical numerical schemes can preserve stationary solutions with boundary conditions, since these solutions are not a priori conserved at the discrete level. In the context of high-order semi-Lagrangian methods, interpolation near the boundary of the domain also requires a specific treatment, due to their large stencil. As expected, we numerically observe that the preservation of the equilibria is very sensitive to the prescribed boundary conditions, and that high-order schemes are mandatory to preserve the energy at long times.

**Participants**: N. Bouzat, C. Bressan, V. Grandgirard, G. Latu, M. Mehrenberger

In the magnetically confined plasmas of tokamaks, turbulence is responsible for specific transport that limits the performance of this kind of reactor. Gyrokinetic simulations are able to capture the ion and electron turbulence that gives rise to heat losses, but they also require state-of-the-art HPC techniques to handle the computational cost. Such simulations are a major tool to establish good operating regimes in tokamaks such as ITER, which is currently being built. Some of the key issues to address for more realistic gyrokinetic simulations are: efficient and robust numerical schemes, accurate geometric description, and good parallelization algorithms. The framework of this work is the semi-Lagrangian setting for solving the gyrokinetic Vlasov equation, and the Gysela code. In this paper, a new variant of the interpolation method is proposed that can handle the mesh singularity in the poloidal plane at r = 0 (a polar coordinate system is used for the moment in Gysela). A non-uniform meshing of the poloidal plane is proposed instead of a uniform one, in order to save memory and computations. The interpolation method, the gyroaverage operator, and the Poisson solver are revised in order to cope with non-uniform meshes. A mapping that establishes a bijection from polar coordinates to more realistic plasma shapes is used to improve realism. Convergence studies are provided to establish the validity and robustness of our new approach.

**Participants**: Ksander Ejjaaouani, Olivier Aumage, Julien Bigot, Michel Mehrenberger

Existing programming models tend to tightly interleave algorithms and optimizations in HPC simulation codes. This requires scientists to become experts in both the simulated domain and the optimization process, and makes the code difficult to maintain and port to new architectures. This paper proposes the InKS programming model, which decouples these two concerns with distinct languages for each. The simulation algorithm is expressed in the InKS pia language with no concern for machine-specific optimizations. Optimizations are expressed using both a family of dedicated optimization DSLs (InKS O) and plain C++. InKS O relies on the InKS pia source to assist developers with common optimizations, while C++ is used for less common ones. Our evaluation demonstrates the soundness of the approach by using it on synthetic benchmarks and the Vlasov-Poisson equation. It shows that InKS offers separation of concerns at no performance cost.

**Participants**: Y. Barsamian, A. Chargueraud, S. Hirstoaga, M. Mehrenberger

Particle-in-Cell (PIC) codes are widely used for plasma simulations. On recent multicore hardware, the performance of these codes is often limited by memory bandwidth. We describe a multicore PIC algorithm that achieves a close-to-minimal number of memory transfers with the main memory, while at the same time exploiting SIMD instructions for numerical computations and exhibiting a high degree of OpenMP-level parallelism. Our algorithm keeps particles sorted by cell at every time step, and represents the particles of a given cell using a linked list of fixed-capacity arrays, called chunks. Chunks support either sequential or atomic insertions, the latter being used to handle fast-moving particles. To validate our code, called Pic-Vert, we consider a 3D electrostatic Landau-damping simulation as well as a 2D3V transverse instability of magnetized electron holes. Performance results on 24-core Intel Skylake hardware confirm the effectiveness of our algorithm, in particular its high throughput and its ability to cope with fast-moving particles.
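The chunk layout can be sketched as follows (a Python illustration of the data structure only; the actual Pic-Vert code is C with explicit SIMD and atomic insertions):

```python
import numpy as np

CHUNK = 256   # fixed chunk capacity

class CellBag:
    """Particles of one cell stored as a list of fixed-capacity arrays
    ('chunks'): appends never move existing particles, and each chunk is
    contiguous in memory, which is what makes the layout cache- and
    SIMD-friendly."""
    def __init__(self):
        self.chunks = [np.empty((CHUNK, 2))]   # columns: position, velocity
        self.size = 0

    def append(self, x, v):
        i = self.size % CHUNK
        if i == 0 and self.size > 0:           # current chunk is full
            self.chunks.append(np.empty((CHUNK, 2)))
        self.chunks[-1][i] = (x, v)
        self.size += 1

    def __iter__(self):
        for c, chunk in enumerate(self.chunks):
            yield from chunk[: min(CHUNK, self.size - c * CHUNK)]

# usage: 600 particles fill two full chunks and a partial third one
bag = CellBag()
for j in range(600):
    bag.append(float(j), 0.0)
```

Keeping particles of a cell together in fixed-size blocks avoids both the reallocation cost of one big resizable array per cell and the pointer-chasing cost of a plain linked list of single particles.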

**Participants**: Y. Barsamian, J. Bernier, S. Hirstoaga, M. Mehrenberger

Thanks to a classical first-order dispersion analysis, we are able to check the validity of 1Dx1D two-species Vlasov-Poisson simulations. The extension to second order is performed and shown to be relevant for explaining further details. In order to validate multidimensional effects, we propose a 2Dx2D single-species test problem that has true 2D effects coming from the sole second-order dispersion analysis. Finally, we perform, in the same code, full 2Dx2D nonlinear two-species simulations with a mass ratio around 0.01, and consider the mixing of semi-Lagrangian and Particle-in-Cell methods.
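As an example of what such a first-order dispersion analysis provides, the textbook cold two-stream relation 1 = (ω_p²/2)[(ω − k v₀)⁻² + (ω + k v₀)⁻²] reduces to a quadratic in ω² and predicts a purely growing mode for k v₀ < ω_p (a classical single-species computation in our own notation, used here only as a sketch of the technique, not the paper's exact setup):

```python
import numpy as np

def two_stream_growth_rate(wp, kv0):
    """Cold two-stream dispersion: 1 = (wp^2/2)[1/(w-kv0)^2 + 1/(w+kv0)^2]
    reduces, with y = w^2, to y^2 - (2*kv0^2 + wp^2)*y + kv0^4 - wp^2*kv0^2 = 0.
    A negative root y < 0 gives a purely growing mode w = i*sqrt(-y)."""
    b = 2.0 * kv0**2 + wp**2
    disc = np.sqrt(wp**4 + 8.0 * kv0**2 * wp**2)
    y_minus = (b - disc) / 2.0
    return np.sqrt(-y_minus) if y_minus < 0 else 0.0

gamma = two_stream_growth_rate(wp=1.0, kv0=0.5)

# check the root against the original dispersion relation
w = 1j * gamma
residual = 1.0 - 0.5 * (1.0 / (w - 0.5) ** 2 + 1.0 / (w + 0.5) ** 2)
```

Comparing the measured growth rate of a linear-regime simulation against such roots is the validation procedure described above.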

**Participants: Laura Mendoza**
Virtually all magnetic fusion devices rely on tomography diagnostics for a variety of plasma emissions. All these diagnostics have a lot in common: the plasma is transparent to the observed quantity, so that the signal on a detector results from a spatial integration of the local emission. Solving the direct problem (i.e. computing signals from a simulated emissivity) requires modeling the diagnostic geometry and is used for physics-code validation or diagnostic design.
Solving the inverse problem (i.e. reconstructing the 2D emissivity from experimental signals) is useful for data interpretation; it requires not only geometry modeling but also decomposing the unknown emissivity into basis functions and applying inversion-regularization routines.
In this context, the Python library ToFu solves the direct and inverse problems for synthetic diagnostics. The objective for the second half of 2018 is to develop and optimize the existing geometry module of ToFu, with a special focus on the ray-tracing algorithms.
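The direct problem reduces, for each detector, to a line integral of the emissivity along the line of sight. The following sketch illustrates this with a hypothetical helper function of our own (it is not the ToFu API):

```python
import numpy as np

def line_integrated_signal(emissivity, start, end, n_samples=200):
    """Approximate the signal on one detector as the integral of a 2D
    emissivity field along the chord from `start` to `end`
    (trapezoidal rule on uniformly spaced sample points)."""
    start = np.asarray(start, dtype=float)
    end = np.asarray(end, dtype=float)
    ts = np.linspace(0.0, 1.0, n_samples)
    points = start + ts[:, None] * (end - start)   # sample points on the chord
    values = np.array([emissivity(px, py) for px, py in points])
    dx = np.linalg.norm(end - start) / (n_samples - 1)
    # Trapezoidal rule: half weight on the two endpoints.
    return dx * (0.5 * values[0] + values[1:-1].sum() + 0.5 * values[-1])

# A uniform emissivity of 1 along a chord of length 2 yields a signal of 2.
sig = line_integrated_signal(lambda x, y: 1.0, (0.0, 0.0), (2.0, 0.0))
```

The inverse problem then amounts to inverting the linear map from emissivity coefficients (in some basis) to the vector of such chord integrals, with regularization since the map is ill-conditioned.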

**Participants**: Philippe Helluy, Bruno Weber

We have implemented and validated new optimizations in our Discontinuous
Galerkin (DG) codes CLAC and SCHNAPS.
In CLAC, Bruno Weber, our CIFRE PhD student at the AxesSim company, has
implemented a local time-stepping method and optimizations that allow the
OpenCL kernels to run efficiently on both CPU and GPU. This made it possible
to run a very large electromagnetic simulation of a Bluetooth antenna
interacting with a fully volumetric human body model.
The simulation was run on the Piz Daint supercomputer (ranked 3rd in the
Top500 list in 2017). The computing hours were awarded through a PRACE
call dedicated to small companies.
In SCHNAPS, we were able to assess the efficiency of the StarPU runtime
for distributing the computational tasks efficiently on hybrid computers.

**Participants**: Philippe Helluy, Lucie Quibel

In the thesis of Lucie Quibel (started in November 2017), we study
numerical methods for compressible fluids with complex equations of state.
The objective is to simulate the liquid-vapor flows that occur in
nuclear power plants. The pressure behavior of the liquid-vapor mixture is
very complex and is obtained through measurements and tabulated laws.
This sometimes prevents the system from being hyperbolic and leads to
instabilities. We are trying to construct simpler but realistic laws
that preserve the convexity structure and the robustness of the schemes.
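As a simple example of a convex pressure law of the kind sought here, the classical stiffened-gas EOS is often used as a building block for the liquid phase. The sketch below (parameter values are typical water-like values chosen for illustration, not taken from the thesis) shows the hyperbolicity check, which requires a positive squared sound speed:

```python
# Stiffened-gas equation of state: p = (gamma - 1) * rho * e - gamma * p_inf,
# a classical convex pressure law often used to model the liquid phase.
def stiffened_gas_pressure(rho, e, gamma=2.35, p_inf=1.0e9):
    """Pressure from density rho [kg/m^3] and specific internal energy e [J/kg]."""
    return (gamma - 1.0) * rho * e - gamma * p_inf

def sound_speed_squared(rho, p, gamma=2.35, p_inf=1.0e9):
    # Hyperbolicity of the Euler system requires c^2 = gamma*(p + p_inf)/rho > 0.
    return gamma * (p + p_inf) / rho

rho, e = 1000.0, 2.0e6   # liquid-like state, for illustration only
p = stiffened_gas_pressure(rho, e)
assert sound_speed_squared(rho, p) > 0.0   # the model is hyperbolic here
```

Tabulated mixture laws do not automatically satisfy this positivity condition everywhere, which is precisely the loss of hyperbolicity mentioned above.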

We are involved in the PhD supervision of Lucie Quibel in collaboration with EDF Chatou (CIFRE support). The objective is to design new Equations Of State (EOS) for the simulation of multiphase flows. The EOS cannot be chosen arbitrarily if one wants to ensure the stability of the fluid model. We are also interested in applying our palindromic method to the computation of low-Mach liquid-vapor flows.

The thesis of Pierre Gerhard devoted to numerical simulation of room acoustics is supported by the Alsace region. It is a joint project with CEREMA (Centre d'études et d'expertise sur les risques, l'environnement, la mobilité et l'aménagement) in Strasbourg.

We are involved in a common project with the company AxesSim in Strasbourg. The objective is to help develop a commercial software package for the numerical simulation of electromagnetic phenomena. The applications are directed towards antenna design and electromagnetic compatibility. This project was partly supported by the DGA through "RAPID" funds. A CIFRE PhD (Bruno Weber) started at AxesSim on similar subjects in March 2015. The new project is devoted to the use of runtime systems to optimize DG solvers applied to electromagnetism. The resulting software will be applied to the numerical simulation of connected devices for clothing or medicine. The project is supported by the "Banque Publique d'Investissement" (BPI) and coordinated by the Thales company.

PEPS "Initiative Jeunes" CNRS. E. Franck, F. Drui, C. Courtès, "Design and analysis of implicit kinetic schemes".

The TONUS project belongs to the IPL FRATRES (models and numerical methods for tokamaks).

We detail the activities funded by the IPL FRATRES:

Guillaume Morel was a post-doctoral fellow until October 2018, under the joint supervision of Nicolas Crouseilles (team IPSO, Inria Rennes) and Michel Mehrenberger.

Work session in Saint-Étienne-de-Tinée with the Inria CASTOR team in July 2018.

IPL workshop in Alsace with the Inria teams CASTOR and MINGUS and external participants in November 2018.

GENCI project *3D simulations of two-species plasmas with particle and
semi-Lagrangian methods*: 400,000 scalar computing hours awarded in
October 2017 on the OCCIGEN supercomputer. Coordinator: Sever Hirstoaga.

PRACE project *SME HPC Adoption Programme in Europe:
full simulation of an electromagnetic wave inside and outside a fully modelled human
body*: 40,000 GPU computing hours awarded in
October 2017 on the Piz Daint supercomputer. Coordinator: Bruno Weber.


Eurofusion Enabling Research Project ER15-IPP05 Extension (1/2018-12/2018): "Global non-linear MHD modelling in toroidal geometry of disruptions, edge localized modes, and techniques for their mitigation and suppression" (Principal Investigator: Matthias Hoelzl, Max-Planck Institute for Plasma Physics, Garching).

**Participant**: Emmanuel Franck.

EoCoE-II (2019-2022): the European center of excellence EoCoE-II brings together 20 partners from 7 European countries around exascale computing for energy-oriented numerical models.

Eurofusion project MAGYK, Mathematics and Algorithms for GYrokinetic and Kinetic models (2019-2021), led by E. Sonnendrücker.

**Participant**: Michel Mehrenberger.

**Participants**: David Coulette, Emmanuel Franck, Philippe Helluy [local coordinator].

ANR/SPPEXA "EXAMAG" is a joint French-German-Japanese project. Its goal is to develop efficient parallel MHD solvers for future exascale architectures. With our partners, we plan to apply highly parallelized and hybrid solvers to plasma physics. One of our objectives is to develop Lattice-Boltzmann MHD solvers based on high-order implicit Discontinuous Galerkin methods, using SCHNAPS and runtime systems such as StarPU.
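The Lattice-Boltzmann idea is to replace a nonlinear conservation law by a small set of linear kinetic transport equations coupled through a local relaxation. As a minimal illustration (a toy D1Q2 scheme for scalar 1D advection, of our own making, not the project's MHD solvers):

```python
import numpy as np

# Toy D1Q2 lattice-Boltzmann scheme for the 1D advection equation
# u_t + c u_x = 0 on a periodic domain, with time step dt = dx / lam.
n, lam, c = 64, 1.0, 0.5           # grid points, kinetic speed, advection speed
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)
mass0 = u.sum()

def equilibrium(u):
    # Equilibria chosen so that f+ + f- = u and lam * (f+ - f-) = c * u,
    # which makes the relaxed kinetic system approximate the advection equation.
    return 0.5 * u + c * u / (2.0 * lam), 0.5 * u - c * u / (2.0 * lam)

fp, fm = equilibrium(u)
for _ in range(n):
    # Exact transport of each kinetic distribution at speeds +lam and -lam,
    # then full relaxation to equilibrium (relaxation parameter omega = 1).
    fp, fm = np.roll(fp, 1), np.roll(fm, -1)
    u = fp + fm
    fp, fm = equilibrium(u)
# Transport and relaxation both preserve the total mass sum(u).
```

The appeal for exascale computing is that the transport step is a pure shift (perfectly parallel and vectorizable) and the relaxation is purely local; the project aims at combining this structure with high-order implicit DG discretizations.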

Lukas Tannhäuser from Würzburg University visited us in February 2018.

Philippe Helluy, Emmanuel Franck and Florence Drui visited Christian Klingenberg at Würzburg University.

Philippe Helluy was invited for a scientific stay at the University of Tokyo in February 2018, for a collaboration on Vlasov-Poisson modeling with Naoki Yoshida.

Emmanuel Franck visited Eric Sonnendrücker at IPP Garching twice.

Philippe Helluy: Workshop on Compressible Multiphase Flows: Derivation, closure laws, thermodynamics, IRMA, May 2018.

Emmanuel Franck: IPL workshop, Breitenbach, November 2018.

Emmanuel Franck was a reviewer for

Communications in Computational Physics,

Journal of Computational Physics.

Philippe Helluy was a reviewer for

Computers and Fluids,

Communications in Computational Physics,

Numerical Methods for Partial Differential Equations,

ESAIM Proceedings.

Philippe Helluy is editor for:

International Journal on Finite Volumes,

Computational and Applied Mathematics.

Sever Hirstoaga was a reviewer for

Acta Applicandae Mathematicae,

Journal of Approximation Theory,

Journal of Computational and Applied Mathematics.

Emmanuel Franck was invited to

Canum 2018, Cap d'Agde, June 2018 (national conference),

Workshop Jorek, Oxford, May 2018,

Workshop Achylles, Bordeaux, March 2018,

Workshop ABPDE3, Lille, September 2018,

Numkin 2018, Munich, October 2018,

Würzburg numerical analysis seminar, April 2018,

IPL workshop, November 2018.

Florence Drui was invited to

Summer School GDR Manu, Roscoff, July 2018,

Workshop on compressible multiphase flow, Strasbourg, May 2018,

Autumn school, "Hyperbolic Conservation Laws", Würzburg, October 2018,

IPL workshop, November 2018.

Laura Mendoza was invited to

Numkin 2018, Munich, October 2018,

IPL workshop, November 2018,

Research and Scientific meeting, IRFM, CEA, Cadarache.

Laurent Navoret was invited to

Numkin 2018, Munich, October 2018,

IPL workshop, November 2018,

Seminar, Groupe de travail "Applications des Mathématiques", ENS Rennes.

Michel Mehrenberger was invited to

Numkin 2018, Munich, October 2018.

Sever Hirstoaga was invited to

Workshop "Kinetic equations for Astrophysics", IRMA.

Philippe Helluy is on the evaluation committee of the CNRS "réseau calcul".

Philippe Helluy

Director of the IRMA mathematics institute since September 2018.

Laurent Navoret

Head of the Agrégation master's program since September 2018.

Matthieu Boileau

Elected to the "Commission de la Recherche du Conseil Académique" of Strasbourg University,

Co-leader of the CNRS "réseau métier Calcul",

Member of the "Commission du Développement Technologique" of Inria Nancy,

Member of the users committee of IDRIS,

Member of the X-Stra network (AI days in 2018).

Michel Mehrenberger

SMAI correspondent for IRMA,

Member of "École Doctorale ED209".

Florence Drui:

Numerical analysis, TD and TP, L3 (58h)

Laura Mendoza:

Scilab and algorithmics, CPGE Lycée Kléber (120h).

Laurent Navoret:

Licence: Nonlinear optimisation (18h eq. TD),

Licence: Scientific computing (64h eq. TD),

Master 1: Scientific computing 2 (35h eq. TD),

Master 2 (Agrégation): Scientific computing (38h eq. TD),

Master 2 (Agrégation): Problem correction (12h eq. TD),

Master 2 (Cell physics): Basics in maths (24h eq. TD),

Philippe Helluy:

L2: Scientific computing (64h eq. TD),

Master 1: Graphs and algorithms (30h eq. TD),

Master 2 (Agrégation): Scientific computing (30h eq. TD),

Master 2 CSMI: Hyperbolic PDEs (30h eq. TD),

Master 2 CSMI: Optimal control (30h eq. TD).

Matthieu Boileau:

Master 2: Data science (18h),

Master 1 CSMI: Parallel computing (15h),

Master ESPE: Python for science (28h).

Michel Mehrenberger:

Summer school at the Harbin Institute of Technology (HIT) in China (16h),

L1: Mathematics for biology (42h),

L3: Nonlinear optimisation (54h+65h),

Master 1: Scientific computing (32.5h),

Master 1: Python (13h),

Master 2 (Agrégation): Scientific computing (8h).

Michaël Gutnic:

L1: Mathematics for biology (66h),

L1: Statistics for biology (72h),

L3: Scientific computing (32h).

Master 2 (agrégation) thesis: Louis Stutzmann, 01/2018 - 05/2018, Advisor: Sever Hirstoaga.

PhD in progress: Lucie Quibel (CIFRE support): in collaboration with EDF Chatou, from October 2017, Advisor: Philippe Helluy.

PhD in progress: Marie Houillon: "Modeling of thin wires in electromagnetic software", Advisors: Philippe Helluy and Laurent Navoret, from October 2017, Labex Irmia support.

PhD finished in November 2018: Bruno Weber (CIFRE support): "Optimization of DG software on GPU in the AxesSim company". Advisor: Philippe Helluy.

PhD in progress: Maxime Schmitt: "Optimization of scientific software with arbitrary mesh refinement", Advisors: Philippe Helluy and Cédric Bastoul (CAMUS team). Labex Irmia support.

PhD in progress: Ksander Ejjaaouani, "Conception of a programming model, application to gyrokinetic simulations", from October 2016, Advisors: Michel Mehrenberger, Julien Bigot, Olivier Aumage.

PhD finished in December 2018: Nicolas Bouzat, "Conception of a programming model, application to gyrokinetic simulations", from October 2015, Advisors: Michel Mehrenberger, Jean Roman, Guillaume Latu.

PhD finished in October 2018: Yann Barsamian, "Optimization of PIC and SL schemes for multi-species simulations", from October 2015, Advisors: Michel Mehrenberger, Sever Hirstoaga, Eric Violard.

PhD in progress: Pierre Gerhard, "Solving kinetic models with application to building acoustics", from October 2015, Advisors: Philippe Helluy, Laurent Navoret.

M. Mehrenberger and S. Hirstoaga were members of the PhD jury of Yann Barsamian, University of Strasbourg,

Philippe Helluy was a member and reviewer of the PhD jury of Benedict Dingfelder, TUM,

Philippe Helluy was a member and reviewer of the PhD jury of Gentien Marois, Bordeaux,

Philippe Helluy was a member of the PhD jury of Nicolas Bouzat, University of Strasbourg.

Philippe Helluy: "Mathematical models in building acoustics", Rencontres mathématiques-architecture, Strasbourg, October 2018.

Michel Mehrenberger: participation in "MATh.en.JEANS".