Eric Sonnendrücker, head of the CALVI project, has obtained a position at the Max Planck Institute in Garching, near Munich. He is temporarily replaced by Philippe Helluy, professor at IRMA. The team will propose a new project-team in 2013, called TONUS, for “TOkamak NUmerical Simulations”.

CALVI was created in July 2003.

It is a project associating Institut Elie Cartan (IECN, UMR 7502, CNRS, Inria and Université Henri Poincaré, Nancy), Institut de Recherche Mathématique Avancée (IRMA, UMR 7501, CNRS and Université de Strasbourg) and Laboratoire des Sciences de l'Image, de l'Informatique et de la Télédétection (LSIIT, UMR 7005, CNRS and Université de Strasbourg), in close collaboration with Laboratoire de Physique des Milieux Ionisés et Applications (LPMIA, UMR 7040, CNRS and Université Henri Poincaré, Nancy).

Our main working topic is modelling, numerical simulation and visualization of phenomena coming from plasma physics and beam physics. Our applications are characterized in particular by their large size, the existence of multiple time and space scales, and their complexity.

Different approaches are used to tackle these problems. On the one hand, we implement modern computing techniques such as **parallel computing** and **grid computing**, looking for appropriate methods and algorithms adapted to large-scale problems. On the other hand, we look for **reduced models** that decrease the size of the problems in some specific situations. Another major aspect of our research is to develop numerical methods enabling us to optimize the needed computing cost thanks to **adaptive mesh refinement** or **model choice**. Work in scientific visualization complements these topics, including the **visualization of multidimensional data** involving large data sets and the **coupling of visualization and numerical computing**.

January 2012: Anaïs Crestetto and Philippe Helluy were awarded the fourth prize of the international “OpenCL Innovation Challenge” organized by the AMD company. They simulated the electron beam inside an X-ray generator on a GPU.

September 2012: Eric Sonnendrücker has obtained a position at the Max Planck Institute in Garching.

October 2012: Michel Mehrenberger has defended his “Habilitation à diriger des recherches”.

plasma physics, beam physics, kinetic models, reduced models, Vlasov equation, modeling, mathematical analysis, asymptotic analysis, existence, uniqueness

Plasmas and particle beams can be described by a hierarchy of models including **N-body** interaction, **kinetic** models and **fluid** models.

The **plasma state** can be considered as the **fourth state of matter**, obtained for example by bringing a gas to a very high temperature: the atoms are then partially or fully ionized and a gas of charged particles, a **plasma**, is obtained.
Intense charged particle beams, called nonneutral plasmas by some authors, obey similar physical laws.

The hierarchy of models describing the evolution of charged particles within a plasma or a particle beam includes N-body models, kinetic models and fluid models.

In a so-called *kinetic model*, each particle species is described by a distribution function *f*(x, v, t), which represents the particle density in phase space: *f*(x, v, t) dx dv is the average number of particles of that species at time *t* in a volume dx dv around position *x* and velocity *v*.

A kinetic description is necessary in collective plasmas where the distribution function is very different from the Maxwell-Boltzmann (or Maxwellian) distribution which corresponds to thermodynamical equilibrium; otherwise a fluid description is generally sufficient. In the limit when collective effects are dominant with respect to binary collisions, the corresponding kinetic equation is the *Vlasov equation*

$$\frac{\partial f}{\partial t} + v\cdot\nabla_x f + \frac{q}{m}\,(E + v\times B)\cdot\nabla_v f = 0,$$

which expresses that the distribution function *f* is conserved along the particle trajectories. It is coupled self-consistently with Maxwell's equations,

$$\nabla\times E = -\frac{\partial B}{\partial t},\qquad \nabla\times B = \mu_0 J + \frac{1}{c^2}\frac{\partial E}{\partial t},\qquad \nabla\cdot E = \frac{\rho}{\varepsilon_0},\qquad \nabla\cdot B = 0,$$

which describe the evolution of the electromagnetic field generated by the charge density

$$\rho(x,t) = \sum_s q_s \int f_s(x,v,t)\,dv$$

and current density

$$J(x,t) = \sum_s q_s \int f_s(x,v,t)\,v\,dv$$

associated to the charged particles.

When binary particle-particle interactions are dominant with respect to the mean-field effects, the distribution function obeys the *Boltzmann equation*

$$\frac{\partial f}{\partial t} + v\cdot\nabla_x f = Q(f,f),$$

where *Q*(*f*, *f*) is the nonlinear Boltzmann collision operator modelling the binary collisions.

The numerical solution of the three-dimensional Vlasov-Maxwell system represents a considerable challenge due to the huge size of the problem. Indeed, the Vlasov-Maxwell system is nonlinear and posed in phase space. It thus depends on seven variables: three configuration space variables, three velocity space variables and time, for each species of particles. This feature makes it essential to use every possible option to find a reduced model wherever possible, in particular when there are geometrical symmetries or small terms which can be neglected.
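A back-of-the-envelope estimate makes this size concrete (the grid resolution below is our illustrative assumption, not a figure from the report):

```python
# Memory needed to store one snapshot of a distribution function on a
# uniform phase-space grid, in double precision, for one particle species.
def grid_memory_bytes(points_per_dim, dims=6, bytes_per_value=8):
    return points_per_dim ** dims * bytes_per_value

# Even a modest 64 points per dimension in 6D needs half a tebibyte:
mem = grid_memory_bytes(64)
print(mem / 2**40, "TiB")  # -> 0.5 TiB
```

Doubling the resolution multiplies the cost by 2⁶ = 64, which is why reduced models and adaptive grids matter so much here.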

The mathematical analysis of the Vlasov equation is essential for a thorough understanding of the model, for physical as well as numerical purposes. It has attracted many researchers since the end of the 1970s. Among the most important results which have been obtained, we can cite the existence of strong and weak solutions of the Vlasov-Poisson system by Horst and Hunze; see also Bardos and Degond. The existence of a weak solution for the Vlasov-Maxwell system has been proved by DiPerna and Lions. An overview of the theory is presented in a book by Glassey.

Many questions concerning, for example, the uniqueness or the existence of strong solutions for the three-dimensional Vlasov-Maxwell system are still open. Moreover, there is a realm of approximate models that needs to be investigated, in particular the Vlasov-Darwin model, for which we could recently prove the existence of global solutions for small initial data.

On the other hand, the asymptotic study of the Vlasov equation in different physical situations is important in order to find or justify reduced models. One situation of major importance in tokamaks, used for magnetic fusion, as well as in atmospheric plasmas, is the case of a large external magnetic field used for confining the particles. The magnetic field tends to curve the particle trajectories, which, when the magnetic field is large, are eventually confined along the magnetic field lines. Moreover, when an electric field is present, the particles drift in the direction perpendicular to both the magnetic and the electric field. The new time scale linked to the cyclotron frequency, which is the frequency of rotation around the magnetic field lines, adds to the other time scales present in the system, such as the plasma frequencies of the different particle species. Thus many different time scales, as well as length scales linked in particular to the different Debye lengths, are present in the system. Depending on the effects that need to be studied, asymptotic techniques make it possible to derive reduced models. In this spirit, in the case of large magnetic fields, recent results have been obtained by Golse and Saint-Raymond, as well as by Brenier. Our group has also contributed to this problem, using homogenization techniques to justify the guiding-center model and the finite Larmor radius model which are used by physicists in this setting.

Another important asymptotic problem yielding reduced models for the Vlasov-Maxwell system is the fluid limit of collisionless plasmas. In some specific physical situations, the infinite system of velocity moments of the Vlasov equations can be closed after a few of those, thus yielding fluid models.

Numerical methods, Vlasov equation, unstructured grids, adaptivity, numerical analysis, convergence, semi-Lagrangian method

The development of efficient numerical methods is essential for the simulation of plasmas and beams. Indeed, kinetic models are posed in phase space and thus the number of dimensions is doubled. Our main effort lies in developing methods using a phase-space grid, as opposed to particle methods. In order to make such methods efficient, it is essential to consider means of optimizing the number of mesh points. This is done through different adaptive strategies. In order to understand the methods, it is also important to perform their mathematical analysis. For a few years now we have also been interested in solvers that use the Particle-In-Cell method. This new direction allows us to enrich parts of our research activities previously centered on the semi-Lagrangian approach.

The numerical integration of the Vlasov equation is one of the key challenges of computational plasma physics. Since the early days of this discipline, intensive work on this subject has produced many different numerical schemes. One of those, namely the Particle-In-Cell (PIC) technique, has been by far the most widely used. Indeed, it belongs to the class of Monte Carlo particle methods, which are independent of dimension and thus become very efficient when the dimension increases, as is the case for the Vlasov equation posed in phase space. However, these methods converge slowly when the number of particles increases; hence, if the complexity of grid-based methods can be decreased, they can be the better choice in some situations. This is the reason why one of the main challenges we address is the development and analysis of adaptive grid methods.
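The dimension-independent but slow convergence of Monte Carlo particle methods can be illustrated with a short, self-contained experiment (ours, not from the report): the statistical error of an N-sample estimate decays like 1/√N, so quadrupling the number of samples only halves the error.

```python
import random

# Illustrative experiment: the RMS error of an N-sample Monte Carlo
# estimate of a mean decays like 1/sqrt(N), whatever the dimension --
# the key trade-off of PIC-type particle methods.
def mc_rms_error(n_samples, n_trials=200, seed=0):
    """RMS error of the N-sample Monte Carlo estimate of E[U], U ~ Uniform(0,1)."""
    rng = random.Random(seed)
    total_sq = 0.0
    for _ in range(n_trials):
        estimate = sum(rng.random() for _ in range(n_samples)) / n_samples
        total_sq += (estimate - 0.5) ** 2
    return (total_sq / n_trials) ** 0.5

# Quadrupling the number of samples roughly halves the error:
e_100, e_400 = mc_rms_error(100), mc_rms_error(400)
```

Grid-based methods, in contrast, converge at the order of the scheme, but their cost grows exponentially with the dimension, hence the interest in adaptive grids.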

Exploring grid based methods for the Vlasov equation, it becomes obvious that they have different stability and accuracy properties. In order to fully understand what are the important features of a given scheme and how to derive schemes with the desired properties, it is essential to perform a thorough mathematical analysis of this scheme, investigating in particular its stability and convergence towards the exact solution.

The semi-Lagrangian method consists in computing a numerical approximation of the solution of the Vlasov equation on a phase-space grid by using the property of the equation that the distribution function *f* is constant along the characteristics, which are the solutions of

$$\frac{dX}{dt} = V(t),\qquad \frac{dV}{dt} = \frac{q}{m}\,\big(E(X(t),t) + V(t)\times B(X(t),t)\big),$$

with initial conditions X(s) = x, V(s) = v. From this property,

$$f(x, v, s) = f\big(X(t; x, v, s),\, V(t; x, v, s),\, t\big) \quad \text{for all times } t \text{ and } s.$$

For all grid points, the value of the distribution function at the new time step is thus obtained by following the characteristic ending at that point backward over one time step and evaluating the known distribution function at its foot. As the foot of a characteristic does not in general coincide with a grid point, a high-order interpolation (typically cubic splines) is required.
This method can be simplified by performing a time-splitting separating the advection phases in physical space and velocity space, as in this case the characteristics can be solved explicitly.
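As a minimal sketch of the idea (our illustrative code, not taken from the team's solvers, and with linear rather than the more usual cubic-spline interpolation), one backward semi-Lagrangian step for the 1D advection equation ∂f/∂t + a ∂f/∂x = 0 on a periodic grid reads:

```python
import math

# One backward semi-Lagrangian step for df/dt + a df/dx = 0 on a periodic
# grid: follow each characteristic backward and interpolate at its foot.
def advect(f, a, dt, dx):
    n = len(f)
    out = [0.0] * n
    for i in range(n):
        s = (i * dx - a * dt) / dx   # foot of the characteristic, in grid units
        j = math.floor(s)            # lower neighbouring grid index
        w = s - j                    # linear interpolation weight
        out[i] = (1.0 - w) * f[j % n] + w * f[(j + 1) % n]
    return out

# Shifting by exactly one cell (a * dt = dx) moves the profile one point right:
f0 = [0.0, 1.0, 0.0, 0.0]
f1 = advect(f0, a=1.0, dt=0.25, dx=0.25)
# f1 == [0.0, 0.0, 1.0, 0.0]
```

In a split Vlasov solver the same step is applied alternately in x (with the velocity as advection speed) and in v (with the electric field), which is precisely why the characteristics can be solved explicitly.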

Uniform meshes are most of the time not efficient for solving a problem in plasma physics or beam physics, as the distribution of particles evolves considerably, in space as well as in time, during the simulation. In order to get optimal complexity, it is essential to use meshes that are fitted to the actual distribution of particles. If the global distribution is not uniform in space but remains locally mostly the same in time, one possible approach is to use an unstructured mesh of phase space, which allows one to place the grid points as desired. Another idea, if the distribution evolves a lot in time, is to use a different grid at each time step, which is easily feasible with a semi-Lagrangian method. Finally, the most complex and powerful method is to use a fully adaptive mesh which evolves locally according to the variations of the distribution function in time. The evolution can be based on a posteriori estimates or on multiresolution techniques.

The solutions to Maxwell's equations are *a priori* defined in a function space of fields whose curl and divergence are square integrable and which satisfy the electric and magnetic boundary conditions.
Those solutions are in fact smoother (all the derivatives are square integrable)
when the boundary of the domain is smooth or convex. This is no
longer true when the domain exhibits non-convex *geometrical singularities*
(corners, vertices or edges).

Physically, the electromagnetic field tends to infinity in the neighbourhood of the re-entrant singularities, which is a challenge to the usual finite element methods. Nodal elements cannot converge towards the physical solution. Edge elements demand considerable mesh refinement in order to represent those infinities, which is not only time- and memory-consuming, but potentially catastrophic when solving time dependent equations: the CFL condition then imposes a very small time step. Moreover, the fields computed by edge elements are discontinuous, which can create considerable numerical noise when the Maxwell solver is embedded in a plasma (e.g. PIC) code.

In order to overcome this dilemma, one method consists in splitting the solution into the sum of a *regular* part, computed by nodal elements, and a *singular* part which we relate to singular solutions of the Laplace operator, thus allowing a local analytic representation to be calculated. This makes it possible to compute the solution precisely without having to refine the mesh.
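Schematically, for a re-entrant corner of opening angle ω > π (our notation, for illustration; the cited works give the precise Maxwell and axisymmetric versions), the decomposition and the behaviour of the singular part read:

```latex
u \;=\; u_{\mathrm{reg}} \;+\; c\, u_{\mathrm{sing}},
\qquad
u_{\mathrm{sing}}(r,\theta) \;\sim\; r^{\pi/\omega}\,
\sin\!\Big(\frac{\pi\theta}{\omega}\Big)
\quad \text{as } r \to 0 .
```

Since π/ω < 1 for a re-entrant corner, the gradient of the singular part blows up at the corner; this is exactly the behaviour that nodal elements cannot capture and that the analytic representation supplies.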

This *Singular Complement Method* (SCM) had been developed
and implemented in plane geometry.

An especially interesting case is axisymmetric geometry. This is still a 2D geometry, but more realistic than the plane case; despite its practical interest, it had been the subject of far fewer theoretical studies. The non-density result for regular fields was proven, the singularities of the electromagnetic field were related to those of modified Laplacians, and expressions of the singular fields were calculated. Thus the SCM was extended to this geometry. It was then implemented by F. Assous (now at Bar-Ilan University, Israel) and S. Labrunie in a PIC–finite element Vlasov–Maxwell code.

As a byproduct, space-time regularity results were obtained for the solution to the time-dependent Maxwell equations in the presence of geometrical singularities in the plane and axisymmetric cases.

Parallelism, domain decomposition, code transformation

The applications we consider lead to computational problems of very large size, for which we need to apply modern computing techniques enabling the efficient use of many computers, including traditional high-performance parallel computers and computational grids.

The full Vlasov-Maxwell system yields a very large computational problem mostly because the Vlasov equation is posed in six-dimensional phase-space. In order to tackle the most realistic possible physical problems, it is important to use all the modern computing power and techniques, in particular parallelism and grid computing.

An important issue for the practical use of the methods we develop is their parallelization. We address the problem of tuning these methods to homogeneous or heterogeneous architectures with the aim of meeting increasing computing resources requirements.

Most of the considered numerical methods apply a series of operations identically to all elements of a geometric data structure: the mesh of phase space. These methods can therefore intrinsically be viewed as data-parallel algorithms. A major advantage of this data-parallel approach derives from its scalability: because operations may be applied identically to many data items in parallel, the amount of parallelism is dictated by the problem size.

Parallelism, for such data-parallel PDE solvers, is achieved by partitioning the mesh and mapping the sub-meshes onto the processors of a parallel architecture. A good partition balances the workload while minimizing the communication overhead. Many interesting heuristics have been proposed to compute near-optimal partitions of a (regular or irregular) mesh. For instance, heuristics based on space-filling curves give very good results for a very low cost.
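A Z-order (Morton) curve partition of a small 2D mesh can be sketched as follows (illustrative code with our own names, not the project's tools): cells are sorted along the curve, and the curve is cut into equal contiguous chunks, which tends to keep each chunk geometrically compact.

```python
# Partition 2D mesh cells with a space-filling (Z-order / Morton) curve.
def morton_key(ix, iy, bits=16):
    """Interleave the bits of (ix, iy) to get the cell's index on the Z curve."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)      # x bits on even positions
        key |= ((iy >> b) & 1) << (2 * b + 1)  # y bits on odd positions
    return key

def partition(cells, nparts):
    """Map each (ix, iy) cell to a part: cut the sorted curve into equal chunks."""
    order = sorted(cells, key=lambda c: morton_key(*c))
    chunk = (len(order) + nparts - 1) // nparts
    return {cell: pos // chunk for pos, cell in enumerate(order)}

cells = [(ix, iy) for ix in range(4) for iy in range(4)]
parts = partition(cells, 4)
# Each of the 4 parts gets 4 cells, and each part is a compact 2x2 block.
```

The workload is balanced by construction (equal chunk sizes), and the locality of the curve keeps the sub-mesh boundaries, hence the communication volume, small.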

Adaptive methods include a mesh refinement step and can greatly reduce memory usage and computation volume. As a result, they induce a load imbalance and require the adaptive mesh to be dynamically redistributed. A problem is then to combine the distribution and resolution components of the adaptive methods with the aim of minimizing communications. Expressing data locality is of major importance for solving such problems. We use our experience of data-parallelism and the underlying concepts for expressing data locality, optimizing the considered methods and specifying new data-parallel algorithms.

As a general rule, the complexity of adaptive methods requires defining software abstractions that make it possible to separate or integrate the various components of the considered numerical methods (see as an example of such a modular software infrastructure).

Another key point is the joint use of heterogeneous architectures and adaptive meshes. It requires developing new algorithms which include new load-balancing techniques. In that case, it may be interesting to combine several parallel programming paradigms, i.e. data-parallelism with other lower-level ones.

Moreover, exploiting heterogeneous architectures requires the use of a run-time support associated with a programming interface that enables some low-level hardware characteristics to be unified. Such a run-time support is the basis for heterogeneous algorithmics. Candidates for such a run-time support may be specific implementations of MPI such as MPICH-G2 (a grid-enabled MPI implementation on top of the GLOBUS toolkit for grid computing).

Our general approach for designing efficient parallel algorithms is to define code transformations at every level. These transformations can be used to incrementally tune codes to a target architecture, and they ensure code reusability.

Inertial fusion, magnetic fusion, ITER, particle accelerators, laser-matter interaction

Controlled fusion is one of the major prospects for a long term source of energy. Two main research directions are studied: magnetic fusion where the plasma is confined in tokamaks using a large external magnetic field and inertial fusion where the plasma is confined thanks to intense laser or particle beams. The simulation tools we develop can be applied for both approaches.

Controlled fusion is one of the major challenges of the 21st century, as it can answer the need for a long-term source of energy that does not accumulate waste and is safe. The nuclear fusion reaction is based on the fusion of light atoms like Deuterium and Tritium. Deuterium can be extracted from ocean water, which is widely available, and the reaction does not produce long-term radioactive waste, unlike today's nuclear power plants, which are based on nuclear fission.

Two major research approaches are followed towards the objective of fusion-based power plants: magnetic fusion and inertial fusion. In order to achieve a sustained fusion reaction, it is necessary to confine the plasma at a sufficient density for a long enough time. If the confinement density is higher, the confinement time can be shorter, but the product of the two needs to be greater than some threshold value.
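This threshold is the Lawson-type criterion; for deuterium-tritium fusion a commonly quoted form requires the triple product n·T·τ_E to exceed roughly 3×10²¹ keV·s·m⁻³ (a textbook order of magnitude, not a figure from this report), which a few lines of code can illustrate:

```python
# Illustrative density / confinement-time trade-off for D-T fusion.
# Commonly quoted ignition threshold for the triple product n * T * tau_E:
THRESHOLD = 3e21  # keV * s / m^3

def satisfies_criterion(n_m3, T_keV, tau_E_s):
    """True if the triple product reaches the ignition threshold."""
    return n_m3 * T_keV * tau_E_s >= THRESHOLD

# Tokamak-like regime: moderate density, confinement time of seconds.
print(satisfies_criterion(1e20, 10.0, 3.0))   # -> True
# Ten times shorter confinement at the same density and temperature fails:
print(satisfies_criterion(1e20, 10.0, 0.3))   # -> False
```

The two regimes below mirror the two approaches: magnetic fusion works at low density and long confinement times, inertial fusion at enormous density and very short times.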

The idea behind magnetic fusion is to use large toroidal devices called tokamaks, in which the plasma can be confined thanks to a large applied magnetic field. The international project ITER, a tokamak being built at Cadarache in the south of France, follows this approach.

The inertial fusion concept consists in using intense laser beams or particle beams to confine a small target containing the Deuterium and Tritium atoms. The Laser Mégajoule, which is being built at CEA near Bordeaux, will be used for experiments using this approach.

Nonlinear wave-wave interactions are primary mechanisms by which nonlinear fields evolve in time. Understanding the detailed interactions between nonlinear waves is an area of fundamental physics research in classical field theory, hydrodynamics and statistical physics. A large-amplitude coherent wave will tend to couple to the natural modes of the medium it is in and transfer energy to the internal degrees of freedom of that system. This is particularly so in the case of high-power lasers, which are monochromatic, coherent sources of high-intensity radiation. Just as in the other states of matter, a high-power laser beam in a plasma can give rise to stimulated Raman and Brillouin scattering (respectively SRS and SBS). These are three-wave parametric instabilities in which two small-amplitude daughter waves grow exponentially at the expense of the pump wave, once phase-matching conditions between the waves are satisfied and threshold power levels are exceeded.

The illumination of the target must be uniform enough to allow a symmetric implosion. In addition, parametric instabilities in the underdense coronal plasma must not reflect away or scatter a significant fraction of the incident light (via SRS or SBS), nor should they produce significant levels of hot electrons (via SRS), which can preheat the fuel and make its isentropic compression far less efficient. Understanding how these deleterious parametric processes function, what non-uniformities and imperfections can degrade their strength, and how they saturate and interdepend can all benefit the design of new laser and target configurations which would minimize their undesirable features in inertial confinement fusion. Clearly, the physics of parametric instabilities must be well understood in order to rationally avoid their perils in the varied plasma and illumination conditions which will be employed in the National Ignition Facility or LMJ lasers. Despite the thirty-year history of the field, much remains to be investigated.

Our work in modelling and numerical simulation of plasmas and particle beams can be applied to problems like laser-matter interaction, the study of parametric instabilities (Raman, Brillouin), the fast ignitor concept in the laser fusion research as well as for the transport of particle beams in accelerators. Another application is devoted to the development of Vlasov gyrokinetic codes in the framework of the magnetic fusion programme in collaboration with the Department of Research on Controlled Fusion at CEA Cadarache. Finally, we work in collaboration with the American Heavy Ion Fusion Virtual National Laboratory, regrouping teams from laboratories in Berkeley, Livermore and Princeton on the development of simulation tools for the evolution of particle beams in accelerators.

Kinetic models like the Vlasov equation can also be applied for the study of large nano-particles as approximate models when ab initio approaches are too costly.

In order to model and interpret experimental results obtained with large nano-particles, ab initio methods cannot be employed as they involve prohibitive computational times. A possible alternative resorts to kinetic methods, originally developed in both nuclear and plasma physics, in which the valence electrons are assimilated to an inhomogeneous electron plasma. The LPMIA (Nancy) has long experience of the theoretical and computational methods currently used for the solution of kinetic equations of the Vlasov and Wigner type, particularly in the field of plasma physics.

Using an Eulerian Vlasov code, we have investigated in detail the microscopic electron dynamics in the relevant phase space. Thanks to a numerical scheme recently developed by Filbet et al., the fermionic character of the electron distribution can be preserved at all times. This is a crucial feature that allowed us to obtain numerical results over long times, so that the electron thermalization in confined nano-structures could be studied.

The nano-particle was excited by imparting a small velocity shift to the electron distribution. In the small perturbation regime, we recover the results of linear theory, namely oscillations at the Mie frequency and Landau damping. For larger perturbations nonlinear effects were observed to modify the shape of the electron distribution.

Over longer times, electron thermalization is observed: as the oscillations are damped, the center-of-mass energy is entirely converted into thermal energy (kinetic energy around the Fermi surface). Note that this thermalization process takes place even in the absence of electron-electron collisions, as only the electric mean field is present.

Under the 'Fusion' large-scale initiative, we have continued our work on the development of the ADT Selalib (the Semi-Lagrangian Library), now finishing its second year. This library provides building blocks for the development of numerical simulations for the solution of the fundamental equation of plasma physics: the Vlasov equation. In this context we have continued to add new modules, improved interfaces, and implemented 'continuous integration' software development techniques to improve code robustness and portability. Furthermore, we continue to involve other researchers within France and abroad to aid in the further development of this software product.

One of the aims of the ADT is to provide numerical building blocks for the GYSELA code developed at CEA Cadarache in collaboration with the Calvi project-team. GYSELA is used by physicists for simulating the development of turbulence in magnetic fusion plasmas in particular in view of the ITER project.

The objective of the three-dimensional parallel software CM2 (Code Multiéchelle Multiphysique) is to implement a general solver for hyperbolic conservation laws; it is for instance able to solve the MHD model. CLAC is a C++ OpenCL/MPI-based library derived from algorithms and ideas developed in CM2. CLAC stands for “Compute Language Approximation of Conservation laws”.

It is clear now that a future supercomputer will be made of a collection of thousands of interconnected multicore processors. Globally it appears as a classical distributed memory MIMD machine. But at a lower level, each of the multicore processors is itself made of a shared memory MIMD unit (a few classical CPU cores) and a SIMD unit (a GPU). When designing new algorithms, it is important to adapt them for this architecture. Our philosophy will be to program our algorithms in such a way that they can be run efficiently on this kind of computers. Practically, we will use the MPI library for managing the high level parallelism, while the OpenCL library will efficiently operate the low level parallelism.

We have invested for several years now in scientific computing on GPUs, using the open standard OpenCL (Open Computing Language). With Anaïs Crestetto, who is preparing a PhD in the CALVI project, we were recently awarded a prize in the international AMD OpenCL Innovation Challenge thanks to this work. We have developed an OpenCL 2D Vlasov-Maxwell solver, coupling PIC and DG algorithms, which runs entirely on a GPU. OpenCL is a very interesting tool because it is an open standard, now available on almost all brands of multicore processors and GPUs. The same parallel program can run on a GPU or a multicore processor without modification.

CLAC is written in C++, which is almost mandatory because we use the OpenCL library. It also uses the MPI paradigm and is thus able to run on a cluster of GPUs. CLAC is also part of a collaboration with a Strasbourg SME, AxesSim, which develops software for electromagnetic simulations. Thomas Strub, who is employed at AxesSim under a CIFRE agreement, is doing his Ph.D. on the design and development of CLAC applied to electromagnetic problems.

Because of the envisaged applications of CLAC, which may be either academic or commercial, it is necessary to design a modular framework. The heart of the library is made of generic parallel algorithms for solving conservation laws. The parallelism can be both fine-grained (oriented towards GPUs and multicore processors) and coarse-grained (oriented towards GPU clusters). Separate modules manage the meshes and some specific applications. In this way, it is possible to isolate the parts that can be protected by trade secrecy.

In a work in progress, E. Frénod and M. Lutz carry out the derivation of the Geometrical Gyro-Kinetic Approximation, which was originally obtained by Littlejohn using a physical approach that was mathematically formal. The rigorous mathematical theory is built and explained in a form intended especially for analysts, applied mathematicians and computer scientists.

In collaboration with Fahd Karami (Université Cadi Ayyad, Morocco) and Bruno Pinçon (Université de Lorraine and project-team CORIDA), we conducted a theoretical and numerical study of the so-called “point effect” in plasma physics. The model (stationary Vlasov–Poisson system with external potential) corresponds to a fully ionised plasma considered on a time scale much smaller than that of the ions, but much larger than that of the electrons. It appears as the relevant nonlinear generalisation of the electrostatic Poisson equation. This may be a first step toward a quasi-equilibrium model valid on a larger time scale, where the equilibrium description of the electrons would be coupled to a kinetic or fluid model for the ions. This approximation is classical in plasma physics. We proved a general existence result for our model in a bounded domain.

During CEMRACS 2011, we started a project to test Two-Scale Asymptotic-Preserving Schemes on a simplified model. The model, a Vlasov-Poisson equation with a small parameter, two-dimensional in phase space, is used for the long-time simulation of a beam in a focusing channel. This work had already been done in the case where the solution is approximated by the two-scale limit. The first goal is to improve this approximation by going further, to the first-order one; this has been done. The second goal is to replace this approximation by an exact decomposition, using the micro-macro framework. This last approach will make it possible to treat the case of a not necessarily small parameter. In order to accomplish the first task, we have written a Particle-In-Cell code in SeLaLib.

This work is devoted to the numerical simulation of the Vlasov equation in the fluid limit using particles. To that purpose, we first perform a micro-macro decomposition, as in previous work where asymptotic-preserving schemes were derived in the fluid limit. There, a uniform grid was used to approximate both the micro and the macro part of the full distribution function. Here, we modify this approach by using a particle approximation for the kinetic (micro) part, the fluid (macro) part being still discretized by standard finite volume schemes. There are many advantages in doing so.

In these works, we are interested in the numerical solution of the collisionless kinetic or gyrokinetic equations of Vlasov type, needed for example for many problems in plasma physics. Different numerical methods are classically used; the most widespread is the Particle-In-Cell method, but Eulerian and semi-Lagrangian (SL) methods that use a grid of phase space are also very interesting for some applications. Rather than using a uniform mesh of phase space, as is mostly done, the structure of the solution, such as a large variation of the gradients on different parts of phase space or a strong anisotropy of the solution, can sometimes make it more interesting to use a more complex mesh. This is the case in particular for gyrokinetic simulations for magnetic fusion applications. We develop here a generalization of the semi-Lagrangian method to mapped meshes. Classical backward semi-Lagrangian methods (BSL), conservative semi-Lagrangian methods based on one-dimensional splitting, and forward semi-Lagrangian methods (FSL) have to be revisited in this case of mapped meshes. We consider here the problem of conserving exactly some equilibrium of the distribution function by using an adapted mapped mesh which fits the isolines of the Hamiltonian. This could be useful in particular for tokamak simulations where instabilities around some equilibrium are investigated. We also consider the problem of mass conservation. In the Cartesian framework, the FSL method automatically conserves the mass, as the advective and conservative forms are shown to be equivalent. This no longer holds in the general curvilinear case. Numerical results are given for some gyrokinetic simulations performed with the GYSELA code and show the benefit of using a mass-conservative scheme like the conservative version of the FSL scheme. An inaccurate description of the equilibrium can lead to spurious effects in gyrokinetic turbulence simulations.

Also, the Vlasov solver and the time integration schemes impact the conservation of physical quantities, especially in long-term simulations. The equilibrium and the Vlasov solver have to be tuned in order to preserve constant states (equilibrium) and to provide good conservation properties over time (mass to begin with). Several simple illustrative test cases are given to show the typical spurious effects that one can observe with poor settings. We explain why the forward semi-Lagrangian scheme brings some benefits. Some toroidal and cylindrical GYSELA runs that use FSL are shown.

We are developing finite-element codes for the Vlasov-Poisson system that would be able to capture the filamentation phenomenon. Filamentation is a mechanism that transfers the space fluctuations of the distribution function to high-frequency oscillations in the velocity direction. For stability purposes, most numerical schemes contain dissipation that may affect the precision of the finest oscillations that could be resolved. Eliasson constructed a non-reflecting and dissipative condition for the Fourier-transformed Vlasov-Poisson system. The condition enables the high velocity-frequency oscillations to leave the computational domain in a clean way.

We are currently developing a finite-element code based on this dissipative boundary condition. The code is part of the Selalib library. We also propose an approximation of the Eliasson method based on Bérenger's PML formalism. Contrary to the original boundary condition, which requires a Fourier transformation in space, this method is local and could thus be extended to higher-dimensional problems and more complex geometries.
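The filamentation mechanism itself is easy to exhibit on the exact free-streaming solution f(x,v,t) = f0(x − vt, v): a spatial mode cos(kx) develops, at fixed x, velocity oscillations of frequency kt that grow linearly in time until no velocity grid can resolve them. A small illustrative script (all parameter values are arbitrary choices of ours):

```python
import numpy as np

# Free streaming: f(x, v, t) = f0(x - v t, v). A spatial mode cos(k x)
# becomes, at fixed x, cos(k x - k t v): an oscillation in v whose
# frequency k t grows linearly in time (filamentation).
k, t = 2 * np.pi, 4.0
nv, vmax = 256, 4.0
v = np.linspace(-vmax, vmax, nv, endpoint=False)
f = np.cos(k * (0.0 - v * t))            # slice of the solution at x = 0

# locate the dominant velocity-frequency in the spectrum
spec = np.abs(np.fft.rfft(f))
freqs = 2 * np.pi * np.fft.rfftfreq(nv, d=2 * vmax / nv)
peak = freqs[np.argmax(spec)]            # should sit near k * t
```

Whatever the velocity resolution, the peak eventually drifts past the grid's Nyquist frequency, which is exactly when a dissipative or outflow treatment in velocity, such as the Eliasson condition, becomes necessary.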

In , we present two new codes devoted to the study of ion-temperature-gradient (ITG) driven plasma turbulence in cylindrical geometry, using a drift-kinetic multi-water-bag model for the ion dynamics. Both codes were developed to complement the Runge-Kutta semi-Lagrangian multi-water-bag code GMWB3D-SLC described in . The CYLGYR code is an eigenvalue solver performing linear stability analysis from given mean radial profiles. It features three resolution schemes and three parallel velocity response models (fluid, multi-water-bag, continuous Maxwellian). The QUALIMUWABA quasi-linear code is an initial-value code allowing the study of the influence of zonal flows on drift-wave dynamics. Cross-validation tests performed between the three codes show good agreement on both the temporal and spatial characteristics of the unstable modes in the linear growth phase.

In (see also ), we present an implementation of a Vlasov-Maxwell solver for multicore processors. The Vlasov equation describes the evolution of charged particles in an electromagnetic field, which is itself a solution of the Maxwell equations. We solve the Vlasov equation by a Particle-In-Cell (PIC) method, while the Maxwell system is computed by a Discontinuous Galerkin method. These methods are detailed, as well as the emission law for the particles and the implementation of the boundary conditions. We use the OpenCL framework, which allows our code to run on multicore processors or on recent Graphics Processing Units (GPUs). The key points of the implementation on this architecture are presented. We then study several numerical applications on two-dimensional test cases in Cartesian geometry. The speed-up between the computation on a CPU and on a graphics card is very high, especially for the Maxwell part.
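The PIC cycle underlying such a solver (charge deposition, field solve, field gather, particle push) can be illustrated by a minimal one-dimensional electrostatic sketch. It mirrors the structure of a PIC code but none of the OpenCL, Discontinuous Galerkin or emission-law specifics of the actual solver; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def pic_step(xp, vp, qp, nx, L, dt):
    """One cycle of a minimal 1D electrostatic PIC method (sketch):
    cloud-in-cell deposition, spectral Poisson solve on a periodic
    grid, field gather with the same weights, then a push
    (unit particle mass assumed)."""
    dx = L / nx
    # 1) deposit charge with cloud-in-cell (linear) weights
    jf = np.floor(xp / dx)
    j = jf.astype(int) % nx
    w = xp / dx - jf
    rho = np.zeros(nx)
    np.add.at(rho, j, qp * (1 - w) / dx)
    np.add.at(rho, (j + 1) % nx, qp * w / dx)
    rho -= rho.mean()                    # neutralizing background
    # 2) solve -phi'' = rho spectrally, then E = -phi'
    kk = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    kk[0] = 1.0                          # avoid division by zero
    phi_hat = np.fft.fft(rho) / kk**2
    phi_hat[0] = 0.0                     # zero-mean potential
    E = np.real(np.fft.ifft(-1j * kk * phi_hat))
    # 3) gather the field at the particles and push
    Ep = (1 - w) * E[j] + w * E[(j + 1) % nx]
    vp = vp + dt * qp * Ep
    xp = (xp + dt * vp) % L
    return xp, vp

# usage: two equal charges placed mirror-symmetrically repel each other
L, nx, dt = 1.0, 16, 0.05
xp = np.array([0.3, 0.7])
vp = np.zeros(2)
qp = np.array([1.0, 1.0])
xp, vp = pic_step(xp, vp, qp, nx, L, dt)
```

Using the same interpolation weights for deposition and gather, together with a symmetric field solver, eliminates the spurious self-force on each particle; it is this deposition step (a scatter with write conflicts) that is the delicate part of a GPU implementation.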

We have started a new software project called CLAC (for “Conservation Laws Approximation on many Cores”). This is a 3D Discontinuous Galerkin solver that runs on clusters of GPUs thanks to the OpenCL environment and the MPI library. CLAC is open source and developed in collaboration with AxesSim, an SME near Strasbourg. For the moment it is applied to the Maxwell equations, but we plan to apply it to the MHD equations and to mixed kinetic/fluid plasma models.
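The flux-based structure of such a conservation-law solver can be sketched in one dimension with the lowest-order (piecewise-constant) Discontinuous Galerkin scheme, which reduces to a first-order finite-volume method with upwind flux. CLAC itself is three-dimensional, higher-order and GPU-parallel, so this is only a structural illustration with made-up parameters.

```python
import numpy as np

def dg0_upwind_step(u, a, dx, dt):
    """One explicit Euler step for u_t + a u_x = 0 with a degree-0
    Discontinuous Galerkin discretization (equivalent to first-order
    finite volumes) and upwind numerical flux, on a periodic grid."""
    # numerical flux at the right interface of each cell
    F = a * (u if a > 0 else np.roll(u, -1))
    # conservative update: difference of interface fluxes
    return u - dt / dx * (F - np.roll(F, 1))

# usage: advect a step profile at CFL number 0.5
nx = 100
dx = 1.0 / nx
u0 = np.where(np.arange(nx) < nx // 2, 1.0, 0.0)
u1 = dg0_upwind_step(u0, 1.0, dx, 0.5 * dx)
```

Because each cell communicates only through interface fluxes with its neighbors, the update is local; this locality is precisely what makes Discontinuous Galerkin schemes attractive for GPU clusters, where each subdomain exchanges only interface data through MPI.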

One of the main challenges for future tokamak operation, such as ITER, is the large heat load on the divertor plates. The divertor surfaces are constantly bombarded with high-energy particles and may see their lifetime considerably reduced. The particle and energy fluxes are particularly intense during transient events known as edge-localised modes (ELMs). Our purpose here is to propose and investigate a kinetic model for ELMs.

In this contribution we propose a set of modified free-streaming equations in order to overcome the above drawbacks. More precisely, some hypotheses on the Maxwellian initial condition lead to a model that includes the self-consistent electric potential. Assuming quasineutrality and using energy conservation, we derive analytical formulae for the electron quantities. This augmented free-streaming model was benchmarked against the Vlasov-Poisson simulations reported in . The match is encouragingly good, thus justifying the applicability of the free-streaming approach.
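For reference, the unmodified free-streaming description solves, for each species, the field-free kinetic equation, whose solution is explicit (we sketch it here with generic notation, not necessarily that of the paper):

\[ \partial_t f + v\,\partial_x f = 0, \qquad f(x,v,t) = f_0(x - vt,\, v), \]

whereas the augmented model retains the electric-field term of the Vlasov equation,

\[ \partial_t f + v\,\partial_x f + \frac{q}{m}\,E(x,t)\,\partial_v f = 0, \qquad E = -\partial_x \phi, \]

with the potential \(\phi\) determined through the quasineutrality assumption and energy conservation rather than a Poisson solve, which is what keeps the model analytically tractable.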

Finally, from a computational point of view, transport in the SOL was studied by means of three different approaches: fluid, Vlasov and particle-in-cell (PIC). Despite kinetic effects due to fast electrons, which are not captured by the fluid code, the overall agreement between the codes was found to be quite satisfactory .

This work is performed in collaboration with Yves Peysson (DRFC, CEA Cadarache). Since September 2012 this work has been included in the ANR project CHROME.

The aim of this project is to develop a finite-element numerical method for the full-wave simulation of electromagnetic wave propagation in a plasma. Full-wave calculation of the lower hybrid (LH) wave propagation is a challenging issue because of the short wavelength with respect to the machine size. In continuation of the work carried out in cylindrical geometry, a full toroidal description for an arbitrary poloidal cross-section of the plasma has been developed.


With such a description, the usual limitations of conventional ray tracing, related to its underlying approximation, can be overcome.

This formulation provides a natural implementation for parallel processing, a particularly important aspect when simulations for plasmas of large size must be considered.

The domain considered is as close as possible to the cavity filled by the tokamak plasma. Toroidal coordinates are introduced. In our approach we use a Fourier decomposition in the angular coordinate to obtain the stationary Maxwell equations in a cross-section of the tokamak cavity.

A finite element method is proposed for the simulation of time-harmonic electromagnetic waves in a plasma, which is an anisotropic medium. The approach chosen here is sometimes referred to as *full-wave modeling* in the literature: the original Maxwell equations are used to obtain a second-order equation for the time-harmonic electric field. This is written in weak form using an augmented variational formulation (AVF), which takes the divergence constraint into account. The variational formulation is then discretized using modified Taylor-Hood (nodal) elements.
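Schematically, and with notation that may differ from that of the code, the second-order time-harmonic problem reads

\[ \nabla \times (\nabla \times \mathbf{E}) \;-\; \frac{\omega^2}{c^2}\,\underline{\underline{K}}\,\mathbf{E} \;=\; 0 \quad \text{in } \Omega, \]

where \(\underline{\underline{K}}\) is the anisotropic plasma dielectric tensor. The augmented formulation adds to the usual weak form a term of the type \( s\,\big(\nabla\!\cdot(\underline{\underline{K}}\mathbf{E}),\, \nabla\!\cdot(\underline{\underline{K}}\mathbf{F})\big) \) for a test field \(\mathbf{F}\), which weakly enforces the charge-free condition \( \nabla\!\cdot(\underline{\underline{K}}\mathbf{E}) = 0 \) and is what makes the nodal (Taylor-Hood) discretization viable.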

During 2012 we developed a domain decomposition method, and a new model of the plasma density was considered in the "FullWaveFEM" code. An analysis of the model (existence and uniqueness of the solution, equivalence of the domain decomposition formulation with the original one) was completed within the framework of Takashi Hattori's PhD thesis.

This work is performed in collaboration with José Herskovits Norman of UFRJ, Rio de Janeiro, Antonio André Novotny from the LNCC, Petropolis, both from Brazil and Alfredo Canelas from the University of the Republic, Montevideo, Uruguay.

The industrial technique of electromagnetic casting allows contactless heating, shaping and control of chemically aggressive hot melts. The main advantage over conventional crucible shape forming is that the liquid metal does not come into contact with the crucible wall, so there is no danger of contamination. This is very important in the preparation of very pure specimens in metallurgical experiments, as even small traces of impurities, such as carbon and sulphur, can affect the physical properties of the sample. Industrial applications include electromagnetic shaping of aluminium ingots using soft-contact confinement of the liquid metal, electromagnetic shaping of components of aeronautical engines made of superalloys (Ni, Ti, ...), and control of structure solidification.

Electromagnetic casting is based on the repulsive forces that an electromagnetic field produces on the surface of a mass of liquid metal. In the presence of an induced electromagnetic field, the liquid metal changes its shape until an equilibrium between the electromagnetic pressure and the surface tension is reached. The direct problem of electromagnetic casting consists in determining this equilibrium shape of the liquid metal. In general, the problem can be solved either by directly studying the equilibrium equation defined on the surface of the liquid metal, or by minimizing an appropriate energy functional. The main advantage of the latter method is that the resulting shapes are mechanically stable.
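In the classical modelling of this problem (sketched here with generic notation), the equilibrium on the free surface \(\Gamma\) of the liquid metal balances the magnetic pressure against surface tension:

\[ \frac{1}{2\mu_0}\,|\mathbf{B}|^2 \;+\; \sigma\,\mathcal{C} \;=\; p_0 \quad \text{on } \Gamma, \]

where \(\mathbf{B}\) is the magnetic field on the surface, \(\sigma\) the surface tension, \(\mathcal{C}\) the mean curvature of \(\Gamma\), and \(p_0\) a constant adjusted so that the volume of metal is conserved. The energy-minimization approach recovers this relation as the Euler-Lagrange condition of the total (magnetic plus surface) energy under the volume constraint.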

The inverse problem consists in determining the electric currents and the induced exterior field for which the liquid metal takes on a given desired shape. This is a very important problem that one needs to solve in order to define a process of electromagnetic liquid metal forming.

In a previous work we studied the inverse electromagnetic casting problem in the case where the inductors are made of single solid-core wires with a negligible cross-section area. In a second paper we considered the more realistic case where each inductor is a set of bundled insulated strands. In both cases the number of inductors was fixed in advance, see . This year we aimed to overcome this constraint and looked for inductor configurations with different topologies, with the purpose of obtaining better results. To handle this new situation we introduced a new formulation of the inverse problem using a shape functional based on the Kohn-Vogelius criterion. A topology optimization procedure is defined by means of topological derivatives, and a new method that simplifies the computations was considered, see and .
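The Kohn-Vogelius criterion mentioned above can be sketched as follows (generic notation, not necessarily that of the papers): for a trial configuration of inductors, two auxiliary fields are computed, one satisfying the Dirichlet data and one the Neumann data associated with the desired shape, and the shape functional measures their mismatch,

\[ J(\Omega) \;=\; \frac{1}{2}\int_{\Omega} |\nabla u_D - \nabla u_N|^2 \, dx, \]

which vanishes exactly when the trial configuration reproduces the target shape. The topological derivative of \(J\) then indicates where inserting a new inductor most decreases the functional, which is what allows the topology of the inductor set to change during the optimization.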

We have started a collaboration with the SME (Small and Medium Enterprise) AxesSim on the development of Maxwell solvers. AxesSim specializes in scientific software for aircraft electromagnetic compatibility. For the moment, one CIFRE thesis is supported by the DGA. Gary Cohen from Inria Rocquencourt is also involved in the project.

Takashi Hattori, Simon Labrunie and Jean-Rodolphe Roche participate in the ANR project “CHROME” (Heating, Reflectometry and Waves for Magnetized Plasma), grouping researchers from Université Paris 6 (B. Després, M. Campos Pinto and others), the Inria project-team POEMS (E. Bécache, C. Hazard and P. Joly) and Université de Lorraine (S. Heuraux). Simon Labrunie is the head of the Lorraine team.

The CHROME project seeks to develop advanced mathematical and numerical tools for the simulation of electromagnetic waves in strongly magnetized plasmas (e.g., tokamak plasmas) in the context of reflectometry (a technique for probing the plasma by analysing the propagation of electromagnetic waves) and heating.

GYPSI project (2010–2014), https://

Accepted ANR project “PEPPSI” in Programme Blanc SIMI 9 – Sciences de l'ingénierie (Edition 2012). Participants: Giovanni Manfredi (coordinator), Sever Hirstoaga.

Stéphanie Salmon is a major member of ANR Project "VIVABRAIN" (Modèles Numériques, 2012) from 2013 to 2016.

Michel Mehrenberger is the coordinator of the FR FCM project (CNRS Federation on Magnetic Confinement Fusion), within the Euratom-CEA association, entitled "Numerical Methods for GYSELA". Its goal is to help improve the numerical algorithms used by the GYSELA code developed at CEA Cadarache for the simulation of turbulence in magnetic fusion plasmas.

Jean Roche is the coordinator of the FR FCM project, within the Euratom-CEA association, entitled "Full wave modeling of lower hybrid current drive in tokamaks". The goal of this project is to develop a full-wave method to describe the dynamics of lower hybrid current drive in tokamaks.

E. Sonnendrücker: Max Planck Institut, Munich (Germany)

We will continue to collaborate with Eric Sonnendrücker on numerical and mathematical studies for plasma physics. We also collaborate on the SeLaLib project.

Michel Mehrenberger gave invited talks at

ICOPS 2012, Edinburgh (Scotland, UK), 8-12 July 2012,
"Conservative semi-Lagrangian schemes on mapped meshes", http://

4th Summer school on numerical modelling for fusion, 8-12 October 2012, IPP,
Garching near Munich "High Order Semi-Lagrangian Schemes for the Vlasov equation",
http://

Sever Hirstoaga gave an invited talk at The 9th AIMS Conference, in the Special Session 79 : “Numerical Methods based on Homogenization and on Two-Scale Convergence”, 1-5 July 2012, Orlando.

Eric Sonnendrücker gave an invited talk on “Numerical Algorithms for Gyrokinetic simulations” at the Workshop on Computational Challenges in Magnetized Plasma, IPAM, UCLA, April 16-20, 2012.

Jean Roche gave invited talks at

Engopt 2012, Rio de Janeiro, Brazil, July 2012, on “Interior Point Methods for Shape Optimization in Electromagnetic Casting”

10th WCCM, Sao Paulo, Brazil, 8-13 July 2012, on “Adaptivity in Shape Optimization with an exterior state equation”

Philippe Helluy is responsible for the second year of Master CSSI (*Calcul Scientifique et Sécurité Informatique*)
of the UFR Mathématique et Informatique, Université de Strasbourg.

Laurent Navoret is responsible for *Master Enseignement - parcours Agrégation* of the UFR Mathématique et
Informatique, Université de Strasbourg, since September 2012.

Nicolas Besse

Licence : Analyse, 90H, L2, Université de Lorraine Faculté des Sciences et Techniques, France

Licence : Analyse complexe, 16H, L3, Université de Lorraine Faculté des Sciences et Techniques, France

Licence : Analyse, 38H, L1, Université de Lorraine Faculté des Sciences et Techniques, France

Master : Modélisation mathématique et méthodes numériques pour les plasmas de fusion, 45H, M2-Fusion, Université de Lorraine, France

Philippe Helluy

Master: EDP hyperboliques, 30 HETD, M2, Université de Strasbourg, France

Master: Contrôle Optimal, 30 HETD, M2, Université de Strasbourg, France

Master: Calcul Scientifique, 30 HETD, M1, Université de Strasbourg, France

Master: Elementary numerical methods, 30 HETD, M1 physique, Université de Strasbourg, France

Master: Recherche Opérationnelle, 30 HETD, école d'ingénieurs ENSIIE, Strasbourg, France.

Doctorat : Modèles mathématiques et numériques de la transition de phase, 30 HETD, M2, Université de Strasbourg, France

Simon Labrunie

Licence : Mathématiques générales en DUT génie civil, 55h, L1, Université de Lorraine, France

Licence : Mathématiques générales en DUT génie civil, 55h, L2, Université de Lorraine, France

Michel Mehrenberger

Licence : Optimisation non linéaire, 54h, L3, Université de Strasbourg, France

Licence : Méthodes d’Analyse Numérique, 39h, L3, ENSIIE (école d’ingénieur, antenne de Strasbourg), France

Licence : Analyse Numérique, 72h, L2, Université de Strasbourg, France

Licence : Calcul Formel et Simulation Numérique, 18h, L2, Université de Strasbourg, France

Master : Mathematical methods for physics, 30h, M1, Université de Strasbourg, France

Master : Spectral Analysis, 30h, M1, Université de Strasbourg, France

Master: Méthodes numériques pour les EDP, TP, 20h, M1, Université de Strasbourg, France

Laurent Navoret

Licence : TD Technique d'Analyse Numérique, 36h eq. TD, L3, Université de Strasbourg, France

Licence : Introduction numérique aux E.D.P., 43.75h eq. TD, L3, Université de Strasbourg, France

Master : Modélisation : Option Calcul Scientifique, 72 eq. TD, M2, Université de Strasbourg, France

Jean R. Roche

Licence: Mathématiques, 162 h eq. TD, L2, ESSTIN, Université de Lorraine, France.

Master : Optimisation, 30 h eq. TD, M1, ESSTIN, Université de Lorraine, France.

HdR : Michel Mehrenberger, *Inégalités d'Ingham et schémas semi-lagrangiens
pour l'équation de Vlasov*, 5 October 2012, Coordinator: Eric Sonnendrücker

PhD in progress : Céline Caldini, *Collisions dans les modèles gyrocinétiques*, Advisor: Mihai Bostan

PhD in progress : Pierre Glanc, *Approximation numérique des équations de Vlasov
par des méthodes de "remapping" conservatifs*, Advisors:
Nicolas Crouseilles, Emmanuel Frénod, Philippe Helluy and Michel Mehrenberger

PhD in progress : Nhung Pham, *Méthodes fluides généralisées pour les plasmas*, Advisor: Philippe Helluy

PhD in Progress : Michel Massaro, *Résolution numérique de lois de conservation sur architectures multicores*,
Advisors: Philippe Helluy, Catherine Mongenet.

PhD in progress : Christophe Steiner, *Etudes de l'opérateur de gyromoyenne et de
son couplage avec les équations de Vlasov gyrocinétiques*, Advisors: Nicolas Crouseilles and Michel Mehrenberger

PhD in progress : Mathieu Lutz, *Etude théorique et numérique de l'approximation gyrocinétique*,
Advisors: Emmanuel Frénod and Eric Sonnendrücker

PhD in progress : Mohamed Ghattassi, *Analyse et Contrôle d'un Four*, Université de Lorraine,
Advisor: Jean Roche.

PhD in progress : Takashi Hattori, *Full wave modeling of lower hybrid current drive in tokamaks*,
Université de Lorraine, Advisors: Simon Labrunie and Jean Roche.

Simon Labrunie participated in the following Ph.D. defense committees:

Jean-Yves Moller, Ph.D. at Université de Lorraine, Title: *Eléments finis courbes et accélération
pour le transport de neutrons*, January 2012. Simon Labrunie was co-director together with Richard Sanchez
from CEA Saclay.

Sébastien Cambon, Ph.D. at INSA Toulouse. Title: *Méthodes d’éléments finis d’ordre élevé et d’équations
intégrales pour la résolution de problèmes de furtivité radar d’objets à symétrie de révolution*, July 2012.

October 2012 : Laurent Navoret gave a popularization talk entitled "La valse relaxante des particules", about Landau damping, for the "Fête des sciences 2012" in Strasbourg.

December 2012 : Philippe Helluy gave a talk entitled: "How to solve PDE on GPU" to high school students at IUFM d'Alsace.