Section: Application Domains

Co-design for scalable numerical algorithms in scientific applications

Participants : Pierre Brenner, Jean-Marie Couteyen, Mathieu Faverge, Xavier Lacoste, Guillaume Latu, Salli Moustafa, Pierre Ramet, Fabien Rozar, Jean Roman.

The research activities concerning the ITER challenge are carried out within the Inria Project Lab (IPL) C2S@Exa.

MHD instabilities: Edge Localized Modes

The numerical simulation tools designed for the ITER challenge aim at making significant progress in understanding active control methods for plasma edge MHD instabilities, namely Edge Localized Modes (ELMs), which represent a particular danger with respect to heat and particle loads on the Plasma Facing Components (PFC) in ITER. The project focuses in particular on the numerical modeling of ELM control methods such as Resonant Magnetic Perturbations (RMPs) and pellet ELM pacing, both foreseen in ITER. The goals of the project are to improve the understanding of the related physics and to propose new strategies to improve the effectiveness of ELM control techniques. The tool for nonlinear MHD modeling (the JOREK code) will be largely extended within the present project to include the corresponding new physical models, in conjunction with new developments in mathematics and computer science, in order to progress towards the urgently needed solutions for ITER.

The fully implicit time evolution scheme in the JOREK code leads to large sparse linear systems that have to be solved at every time step. The MHD model leads to very badly conditioned matrices. In principle, the PaStiX library can solve these large sparse problems using a direct method; however, for large 3D problems the CPU time of the direct solver becomes prohibitive. Iterative solution methods require a preconditioner adapted to the problem, and many of the commonly used preconditioners have been tested without finding a satisfactory solution. The research activities presented in Section 3.3 will contribute to designing new solution techniques best suited for this context.
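
As an illustration of the solver trade-off described above, the following sketch contrasts a sparse direct solve with a preconditioned iterative solve on a small model problem. It is only a toy analogue using SciPy: PaStiX plays the role of the direct solver in JOREK, and the ILU preconditioner merely stands in for the problem-specific preconditioners that are still being sought for these badly conditioned MHD systems.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Model problem standing in for one implicit time step: a 2D Laplacian.
    n = 200
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = sp.kronsum(T, T).tocsc()              # sparse system matrix
    b = np.ones(A.shape[0])                   # right-hand side

    # Direct solve (the role played by PaStiX in JOREK): robust, but the
    # factorization cost becomes prohibitive for large 3D problems.
    x_direct = spla.spsolve(A, b)

    # Preconditioned iterative solve: GMRES with an incomplete LU
    # factorization as preconditioner.  For the MHD systems the open
    # question is finding a preconditioner that remains effective despite
    # the very bad conditioning.
    ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
    M = spla.LinearOperator(A.shape, matvec=ilu.solve)
    x_iter, info = spla.gmres(A, b, M=M, maxiter=500)
    print("GMRES converged" if info == 0 else f"GMRES stopped, info={info}")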

Turbulence of plasma particles inside a tokamak

In the context of the ITER challenge, the GYSELA project aims at simulating the turbulence of plasma particles inside a tokamak. A better understanding of this phenomenon would help design a new kind of energy source based on nuclear fusion. Currently, GYSELA is parallelized with a hybrid MPI/OpenMP approach and can exploit the power of the largest current supercomputers. To simulate the plasma physics faithfully, GYSELA handles a huge amount of data, and memory consumption is a bottleneck on very large simulations (449 K cores). In this context, mastering the memory consumption of the code becomes critical to consolidate its scalability and to enable the implementation of new numerical and physical features that fully benefit from extreme scale architectures.

The scientific objectives of these research activities are, first, to design advanced generic tools to manage and to better predict and limit the memory consumption peak, in order to reduce the memory footprint of GYSELA, and second, to design a set of tools that analyze the performance and the topology of the targeted architecture to optimize the deployment of GYSELA runs. This will allow the design of new advanced numerical methods (for the gyroaverage operator and for the source and collision operators) and of efficient scalable parallel algorithms, in order to be able to deal with new physics in GYSELA. In particular, the objective is to tackle kinetic electron configurations for more realistic simulations.
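
As a rough, hypothetical illustration of what such memory-accounting tools can look like (this is not the GYSELA tooling, and the array names and sizes below are made up), a simple ledger of named allocations and releases already makes it possible to replay a sequence of computation phases offline and predict the memory peak before running at scale.

    # Minimal sketch of a memory-peak ledger (illustrative only): each named
    # allocation and release is recorded so that the peak memory consumption
    # of a sequence of phases can be predicted without running the full code.
    class MemoryLedger:
        def __init__(self):
            self.current = 0          # bytes currently allocated
            self.peak = 0             # highest value reached so far
            self.live = {}            # name -> size of live arrays

        def allocate(self, name, nbytes):
            self.live[name] = nbytes
            self.current += nbytes
            self.peak = max(self.peak, self.current)

        def release(self, name):
            self.current -= self.live.pop(name)

    # Replay of a hypothetical sequence of phases within one time step.
    ledger = MemoryLedger()
    ledger.allocate("distribution_function", 8 * 512**3)  # main data, made-up size
    ledger.allocate("transpose_buffer", 8 * 512**3)        # temporary copy
    ledger.release("transpose_buffer")
    ledger.allocate("diagnostics", 8 * 256**2)
    print(f"predicted peak = {ledger.peak / 2**30:.1f} GiB")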

SN Cartesian solver for nuclear core simulation

As part of its activity, EDF R&D is developing a new nuclear core simulation code named COCAGNE that relies on a Simplified PN (SPN) method to compute the neutron flux inside the core for eigenvalue calculations. In order to assess the accuracy of SPN results, a 3D Cartesian model of PWR nuclear cores has been designed and a reference neutron flux inside this core has been computed with a Monte Carlo transport code from Oak Ridge National Lab. This kind of 3D whole core probabilistic evaluation of the flux is computationally very demanding. An efficient deterministic approach is therefore required to reduce the computation effort dedicated to reference simulations.

In this collaboration, we work on the parallelization (for both shared and distributed memory) of the DOMINO code, a parallel 3D Cartesian SN solver specialized for PWR core reactivity computations, which is fully integrated into the COCAGNE system.
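
To give an idea of the kernel such a solver parallelizes, the sketch below implements a toy 1D slab-geometry discrete-ordinates (SN) sweep with diamond differencing and source iteration. It is only an illustration under simplifying assumptions (one energy group, isotropic scattering, vacuum boundaries); DOMINO performs the analogous sweeps over a 3D Cartesian mesh for many angles and energy groups.

    import numpy as np

    # Toy 1D S_N transport sweep (diamond difference) with source iteration.
    nx, dx = 100, 0.1                           # mesh: 100 cells of width 0.1
    sigma_t, sigma_s, q0 = 1.0, 0.5, 1.0        # cross sections, flat source
    mu, w = np.polynomial.legendre.leggauss(8)  # S_8 angular quadrature on [-1, 1]

    phi = np.zeros(nx)                          # scalar flux
    for _ in range(200):                        # source iteration
        emission = 0.5 * (sigma_s * phi + q0)   # isotropic emission per cell
        phi_new = np.zeros(nx)
        for m in range(len(mu)):
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            psi_in = 0.0                        # vacuum boundary condition
            for i in cells:
                a = 2.0 * abs(mu[m]) / dx
                psi_c = (emission[i] + a * psi_in) / (sigma_t + a)  # cell average
                psi_in = 2.0 * psi_c - psi_in                       # outgoing edge flux
                phi_new[i] += w[m] * psi_c
        diff = np.max(np.abs(phi_new - phi))
        phi = phi_new
        if diff < 1e-8:
            break
    print("max scalar flux:", phi.max())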

3D aerodynamics for unsteady problems with moving bodies

Airbus Defence and Space has been developing the FLUSEPA code for 20 years; it focuses on unsteady phenomena with changing topology, such as stage separation or rocket launches. The code is based on a finite volume formulation with temporal adaptive time integration and supports bodies in relative motion. The temporal adaptive integration classifies cells into several temporal levels, level zero containing the slowest cells and each level being advanced twice as fast as the previous one. This distribution can evolve during the computation, leading to load-balancing issues in a parallel context. Bodies in relative motion are managed through a CHIMERA-like technique which builds a composite mesh by merging multiple meshes: the meshes with the highest priorities cover the ones with lower priorities, and an intersection is computed at the boundaries of the covered meshes. Unlike the classical CHIMERA technique, no interpolation is performed, which allows a conservative flow integration.
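
As a concrete (and purely illustrative, not taken from FLUSEPA) reading of this temporal classification, the sketch below maps each cell's local stable time step to a power-of-two temporal level: level 0 uses the largest time step present in the mesh, and every deeper level performs twice as many sub-steps of half the size.

    import math

    def temporal_levels(local_dt, max_level=8):
        """Assign each cell a temporal level from its local stable time step."""
        dt0 = max(local_dt)                                # slowest cells define level 0
        levels = []
        for dt in local_dt:
            lvl = max(0, math.ceil(math.log2(dt0 / dt)))   # halve dt0 until it fits
            levels.append(min(lvl, max_level))
        return dt0, levels

    # Example: made-up local stable time steps (e.g. from a per-cell CFL limit).
    local_dt = [1.0e-3, 9.0e-4, 2.5e-4, 1.1e-4, 6.0e-5]
    dt0, levels = temporal_levels(local_dt)
    for dt, lvl in zip(local_dt, levels):
        print(f"dt={dt:.1e}  level={lvl}  sub-steps per global step={2**lvl}")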

The main objective of this research is to design a new scalable version of FLUSEPA, based on a task-based parallelization over a runtime system, in order to run very large 3D simulations (for example ARIANE 5 and 6 booster separation) efficiently on modern multicore parallel architectures.