Our daily life environment increasingly interacts with digital information. A significant amount of this information is of a geometric nature: it concerns the representation of our environment, the analysis and understanding of “real” phenomena, and the control of physical mechanisms or processes. The interaction between the physical and digital worlds goes both ways. Sensors produce digital data from measurements or observations of our environment, while digital models are used to “act” on the physical world. Objects that we use at home, at work or to travel, such as furniture, cars and planes, are nowadays produced by industrial processes based on digital representations of shapes. CAD-CAM (Computer Aided Design – Computer Aided Manufacturing) software is used to represent the geometry of these objects and to control the manufacturing processes that create them. The construction capabilities themselves are also expanding, with the development of 3D printers and the possibility of creating daily-life objects “at home” from digital models.

The impact of geometry is also important in the analysis and understanding of phenomena. The 3D conformation of a molecule explains its biological interaction with other molecules. The profile of a wing determines its aerodynamic behavior, while the shape of a bulbous bow can significantly decrease the wave resistance of a ship. Understanding such behavior or analyzing a physical phenomenon can nowadays be achieved for many problems by numerical simulation. The precise representation of the geometry and the link between the geometric models and the numerical computation tools are closely related to the quality of these simulations. This also plays an important role in optimization loops, where the numerical simulation results are used to improve the “performance” of a model.

Geometry deals with structured and efficient representations of information and with methods to process it. Its impact in animation, games and VAMR (Virtual, Augmented and Mixed Reality) is important. It also has a growing influence in e-trade, where a consumer can evaluate, test and buy a product from its digital description. Geometric data produced for instance by 3D scanners, together with the reconstructed models, are nowadays used to preserve works of cultural or industrial heritage.

Geometry is involved in many domains (manufacturing, simulation, communication, virtual worlds, ...), raising many challenging questions related to the representation of shapes, to the analysis of their properties and to computation with these models. The stakes are multiple: accuracy in numerical engineering, simulation and optimization; quality in design and manufacturing processes; and the capacity to model and analyze physical problems.

The accurate description of shapes is a long-standing problem in mathematics, with an important impact in many domains, inducing strong interactions between geometry and computation. Developing precise geometric modeling techniques is a critical issue in CAD-CAM. Constructing accurate models that can be exploited in geometric applications, from digital data produced by cameras, laser scanners, observations or simulations, is also a major issue in geometry processing. A main challenge is to construct models that can capture the geometry of complex shapes, using few parameters while remaining precise.

Our first objective is to develop methods able to describe accurately and efficiently objects or phenomena of a geometric nature, using algebraic representations.

The approach followed in CAGD to describe complex geometry is based on parametric representations called NURBS (Non Uniform Rational B-Splines). The models are constructed by trimming and gluing together high order patches of algebraic surfaces. These models are built from the so-called B-spline functions, which encode a piecewise algebraic function with a prescribed regularity at the knots. Although these models have many advantages and have become the standard for designing CAD models, they also have important drawbacks. Among them are the difficulty of locally refining a NURBS surface, and the topological rigidity of NURBS patches, which forces the use of many trimmed patches to design complex models, with the consequence that cracks appear at the seams. To overcome these difficulties, an active area of research looks for new blending functions for the representation of CAD models. Examples are the so-called T-splines, LR-spline blending functions, or hierarchical splines, which have been devised recently to perform local refinement efficiently. An important problem is to analyze the spline spaces associated to general subdivisions, which is of particular interest in higher order Finite Element Methods. Another challenge in geometric modeling is the efficient representation and/or reconstruction of complex objects, and the description of computational domains in numerical simulation. To construct models that can represent the geometry of complex shapes efficiently, we are interested in developing modeling methods based on alternative constructions such as skeleton-based representations. The change of representation, in particular between parametric and implicit representations, is of particular interest in geometric computations and in their applications in CAGD.
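As a minimal illustration of the B-spline machinery behind NURBS (a Python sketch for exposition, not the CAD data structures themselves; the knot vector and degree below are arbitrary choices), the Cox-de Boor recursion evaluates the basis functions and exhibits their partition-of-unity property:

```python
# Illustrative sketch: Cox-de Boor recursion for B-spline basis functions,
# the building blocks of NURBS representations.
def bspline_basis(i, p, t, knots):
    """Value of the i-th B-spline basis function of degree p at parameter t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, t, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# Cubic basis functions on an open (clamped) knot vector: inside the
# domain they are non-negative and sum to 1 (partition of unity).
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
t = 1.5
vals = [bspline_basis(i, 3, t, knots) for i in range(len(knots) - 3 - 1)]
print(sum(vals))  # partition of unity: sums to 1
```

Prescribed regularity at a knot is controlled by its multiplicity in the knot vector, which is what local refinement schemes such as hierarchical splines manipulate.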

We also plan to investigate adaptive hierarchical techniques, which can locally improve the approximation of a shape or a function. They shall be exploited to transform digital data produced by cameras, laser scanners, observations or simulations into accurate and structured algebraic models.

The precise and efficient representation of shapes also leads to the problem of extracting and exploiting characteristic properties of shapes, such as symmetry, which is very frequent in geometry. Reflecting the symmetry of the intended shape in the representation appears as a natural requirement for visual quality, but also as a possible source of sparsity of the representation. Recognizing, encoding and exploiting symmetry require new paradigms of representation and further algebraic developments. We address the algebraic foundations for the exploitation of symmetry in the context of nonlinear differential and polynomial equations, and intend to bring this expertise with symmetry to the geometric models and computations developed by Aromath.

In many problems, digital data are approximate and cannot simply be used as if they were exact. In the context of geometric modeling, polynomial equations appear naturally as a way to describe constraints between the unknown variables of a problem. An important challenge is to take the input error into account in order to develop robust methods for solving these algebraic constraints. Robustness means that a small perturbation of the input should produce a controlled variation of the output, that is, forward stability, when the input-output map is regular. In non-regular cases, robustness also means that the output is an exact solution, or the most coherent solution, of a problem with input data in a given neighborhood, that is, backward stability.
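A classical numerical example (a hedged sketch added for illustration, not taken from the text) makes the distinction concrete: the quintuple root of (x - 1)^5 is a non-regular input, for which computed roots have a large forward error while still being backward stable.

```python
import numpy as np

# A singular input: the quintuple root of (x - 1)^5 is infinitely
# sensitive to coefficient perturbations, so no forward stability can
# be expected, only backward stability.
coeffs = np.poly([1, 1, 1, 1, 1])        # expand (x - 1)^5 exactly
roots = np.roots(coeffs)                  # computed in floating point

forward_error = np.max(np.abs(roots - 1.0))
backward_residual = np.max(np.abs(np.polyval(coeffs, roots)))
print(forward_error)      # large (roughly eps**(1/5) amplification)
print(backward_residual)  # tiny: the roots solve a nearby problem exactly
```

The computed roots scatter on a small circle around 1, yet each of them is an almost exact root of a polynomial whose coefficients differ from the input only at rounding level.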

Our second long term objective is to develop methods to robustly and efficiently solve algebraic problems that occur in geometric modeling.

Robustness is a major issue in geometric modeling and algebraic computation. Classical methods in computer algebra, based on the paradigm of exact computation, cannot be applied directly in this context, as they are not designed for stability against input perturbations. New investigations are needed to develop methods which integrate this additional dimension of the problem. Several approaches are investigated to tackle these difficulties.

One approach relies on the linearization of algebraic problems, based on “elimination of variables”, that is, projection into a space of smaller dimension. Resultant theory provides a strong foundation for these methods, connecting the geometric properties of the solutions with explicit linear algebra on polynomial vector spaces, for families of polynomial systems (e.g., homogeneous, multi-homogeneous, sparse). Important progress has been made in the last two decades to extend this theory to new families of problems with specific geometric properties. Additional advances have been achieved more recently to exploit the syzygies between the input equations. This approach provides matrix-based representations, which are particularly powerful for approximate geometric computation on parametrized curves and surfaces. These methods are tuned to certain classes of problems, and an important issue is to detect and analyze degeneracies and to adapt the methods to such cases.
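The elimination idea can be illustrated with a tiny SymPy example (an illustrative sketch, not the matrix-based machinery described above): the resultant eliminates the parameter of a parametrized curve and returns its implicit equation.

```python
from sympy import symbols, resultant, factor

t, x, y = symbols('t x y')

# Implicitization by elimination: the cuspidal cubic x = t^2, y = t^3.
# The resultant in t of the two parametrization constraints eliminates
# t and yields the implicit equation of the curve.
f = x - t**2
g = y - t**3
implicit = resultant(f, g, t)
print(factor(implicit))   # the implicit equation, up to sign x**3 - y**2
```

Behind the call, a Sylvester matrix in t is built from f and g, and its determinant is exactly the kind of linearized object that resultant theory studies.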

A more adaptive approach involves linear algebra computations in a hierarchy of polynomial vector spaces. It produces a description of quotient algebra structures, from which the solutions of polynomial systems can be recovered. This family of methods includes Gröbner bases, which provide general tools for solving polynomial equations. Border bases are an alternative approach, offering numerically stable methods for solving polynomial equations with approximate coefficients. An important issue is to understand and control the numerical behavior of these methods as well as their complexity, and to exploit the structure of the input system.
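For illustration, a small SymPy computation (a sketch, unrelated to the border basis implementations referred to above) shows how a lexicographic Gröbner basis triangularizes a bivariate system so that its solutions can be read off:

```python
from sympy import symbols, groebner, solve

x, y = symbols('x y')

# The quotient-algebra viewpoint in miniature: a lexicographic Groebner
# basis of a circle/hyperbola system exposes a triangular form from
# which the solutions follow by back-substitution.
system = [x**2 + y**2 - 5, x*y - 2]
gb = groebner(system, x, y, order='lex')
print(list(gb))           # the last element is univariate in y
sols = solve(system, [x, y])
print(sols)               # (1, 2), (2, 1), (-1, -2), (-2, -1)
```

The univariate last element encodes the projection of the solution set on the y-axis, the same elimination phenomenon exploited by resultants.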

In order to compute “only” the (real) solutions of a polynomial system in a given domain, duality techniques can also be employed. They consist in analyzing and adding constraints on the space of linear forms which vanish on the polynomial equations. Combined with semidefinite programming techniques, they provide efficient methods to compute the real solutions of algebraic equations or to solve polynomial optimization problems. The main issues are the completeness of the approach, its scalability with the degree and dimension, and the certification of bounds.

Singular solutions of polynomial systems can be analyzed by computing differentials which vanish at these points. This leads to efficient deflation techniques, which transform a singular solution of a given problem into a regular solution of the transformed problem. These local methods need to be combined with more global root localization methods.
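A one-variable toy version of deflation (a hypothetical sketch, not the methods of the cited works) already shows the effect: Newton converges slowly to the double root of x^2, while Gauss-Newton on the system augmented with the derivative converges very fast.

```python
# x = 0 is a double root of f(x) = x^2: plain Newton converges only
# linearly there. Appending the derivative (a minimal deflation) makes
# the root regular for the augmented system F(x) = (f(x), f'(x)).
f = lambda x: x**2
df = lambda x: 2 * x

# Plain Newton: the error is only halved at every step.
x_plain = 0.1
for _ in range(10):
    x_plain -= f(x_plain) / df(x_plain)

# Deflated system solved by Gauss-Newton: step = -(J^T F) / (J^T J)
# with F = (f, f') and J = (f', f'').
x_defl = 0.1
for _ in range(3):
    F = (f(x_defl), df(x_defl))
    J = (df(x_defl), 2.0)
    x_defl -= (J[0] * F[0] + J[1] * F[1]) / (J[0] ** 2 + J[1] ** 2)

print(abs(x_plain))  # still around 1e-4 after 10 steps
print(abs(x_defl))   # far below 1e-12 after only 3 steps
```

In several variables the deflated system involves well-chosen differentials instead of the plain derivative, but the mechanism is the same: the singular root becomes a regular root of a larger system.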

Subdivision methods are another class of methods of interest for robust geometric computation. They are based on exclusion tests, which certify that no solution exists in a domain, and inclusion tests, which certify the uniqueness of a solution in a domain. They have shown their strength in addressing many algebraic problems, such as isolating the real roots of polynomial equations or computing the topology of algebraic curves and surfaces. The main issue in these approaches is to deal with singularities and degenerate solutions.
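The following toy solver (an illustrative sketch with naive interval arithmetic, not a production algorithm) makes the exclusion/inclusion mechanism concrete for a univariate polynomial:

```python
# Toy subdivision solver: an interval exclusion test certifies root-free
# boxes; a sign-change plus monotonicity inclusion test certifies boxes
# containing exactly one root.
def peval(coeffs, x):
    """Horner evaluation, coefficients from highest degree down."""
    r = 0.0
    for c in coeffs:
        r = r * x + c
    return r

def ieval(coeffs, lo, hi):
    """Interval Horner: an enclosure of p([lo, hi])."""
    a, b = 0.0, 0.0
    for c in coeffs:
        prods = (a * lo, a * hi, b * lo, b * hi)
        a, b = min(prods) + c, max(prods) + c
    return a, b

def isolate(p, dp, lo, hi, out):
    plo, phi = ieval(p, lo, hi)
    if plo > 0 or phi < 0:
        return                                  # exclusion: no root here
    dlo, dhi = ieval(dp, lo, hi)
    if peval(p, lo) * peval(p, hi) < 0 and (dlo > 0 or dhi < 0):
        out.append((lo, hi))                    # inclusion: exactly one root
        return
    if hi - lo < 1e-9:
        out.append((lo, hi))                    # unresolved tiny box
        return
    mid = 0.5 * (lo + hi)
    isolate(p, dp, lo, mid, out)
    isolate(p, dp, mid, hi, out)

p = [1.0, 0.0, -3.0, 1.0]    # x^3 - 3x + 1, three real roots
dp = [3.0, 0.0, -3.0]        # derivative
boxes = []
isolate(p, dp, -2.0, 2.0, boxes)
print(boxes)                  # three isolating intervals
```

The tiny-box fallback is exactly where singular or clustered roots show up in practice: near a double root neither test ever fires, which is why degenerate cases are the hard part of these methods.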

The main domain of applications that we consider for the methods we develop is Computer Aided Design and Manufacturing.

Computer-Aided Design (CAD) involves creating digital models defined by mathematical constructions, from geometric, functional or aesthetic considerations. Computer-Aided Manufacturing (CAM) uses the geometric design data to control the tools and processes which lead to the production of real objects from their numerical descriptions.

CAD-CAM systems provide tools for visualizing, understanding, manipulating, and editing virtual shapes. They are extensively used in many applications, including automotive, shipbuilding, aerospace industries, industrial and architectural design, prosthetics, and many more. They are also widely used to produce computer animation for special effects in movies, advertising and technical manuals, or for digital content creation. Their economic importance is enormous. Their importance in education is also growing, as they are increasingly used in schools and for educational purposes.

CAD-CAM has been a major driving force for research in geometric modeling, which has led to very large software systems, produced and sold by major companies, capable of assisting engineers in all the steps from design to manufacturing.

Nevertheless, many challenges still need to be addressed. Many problems remain open, related to the use of efficient shape representations and of geometric models specific to some application domains, such as architecture, naval engineering, mechanical construction and manufacturing. Important questions on the robustness and the certification of geometric computation are not yet answered. The complexity of the models used nowadays also calls for the development of new approaches. The manufacturing environment is also increasingly complex, with new types of machine tools including turning, 5-axis machining, wire EDM (Electrical Discharge Machining) and 3D printers. It cannot be properly used without computer assistance, which raises methodological and algorithmic questions. There is an increasing need to combine design and simulation, for analyzing the physical behavior of a model and for optimal design.

The field has changed deeply over the last decades, with the emergence of new geometric modeling tools built on dedicated packages, which mix different scientific areas to address specific applications. This provides new opportunities to apply geometric modeling methods arising from research activities.

A major bottleneck in the CAD-CAM developments is the lack of interoperability of modeling systems and simulation systems. This is strongly influenced by their development history, as they have been following different paths.

The geometric tools have evolved from supporting a limited number of tasks at separate stages in product development and manufacturing, to being essential in all phases from initial design through manufacturing.

Current Finite Element Analysis (FEA) technology was already well established 40 years ago, when CAD systems just started to appear, and its success stems from using approximations of both the geometry and the analysis model with low order finite elements (most often of degree one or two).

Historically, there has been no requirement of interoperability between CAD and numerical simulation based on Finite Element Analysis, which has led to incompatible mathematical representations in CAD and FEA. This incompatibility makes the interoperability of CAD/CAM and FEA very challenging. In the general case, this challenge is today addressed by expensive and time-consuming human intervention and software development.

Improving this interaction through adequate geometric and functional descriptions should boost the exchanges between numerical analysis and geometric modeling, with important implications in shape optimization. In particular, it could provide better feedback from numerical simulations to the geometric model in a design optimization loop incorporating iterative analysis steps.

The situation is evolving. In the past decade, a new paradigm has emerged that replaces traditional finite elements by B-spline basis elements of any polynomial degree, thus in principle enabling the exact representation of all shapes that can be modeled in CAD. It has been demonstrated that this so-called isogeometric analysis approach can be far more accurate than traditional FEA.
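A classical construction shows why rational B-splines give exact CAD geometry where polynomial finite elements cannot (a self-contained Python sketch; the control points and weights below are the standard quarter-circle data):

```python
import math

# A rational quadratic Bezier arc represents a quarter of the unit
# circle *exactly*, which no polynomial element of any degree can do.
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # control points
w = [1.0, math.sqrt(2) / 2, 1.0]              # weights

def circle_point(t):
    """Rational Bezier evaluation at parameter t in [0, 1]."""
    b = [(1 - t) ** 2, 2 * t * (1 - t), t ** 2]   # Bernstein basis
    num_x = sum(b[i] * w[i] * ctrl[i][0] for i in range(3))
    num_y = sum(b[i] * w[i] * ctrl[i][1] for i in range(3))
    den = sum(b[i] * w[i] for i in range(3))
    return num_x / den, num_y / den

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    px, py = circle_point(t)
    print(round(px * px + py * py, 12))   # always 1: the point lies on the circle
```

In isogeometric analysis the same rational basis carries both the exact geometry and the unknowns of the simulation, removing the geometry approximation error of classical FEA meshes.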

It opens new perspectives for the interoperability between geometric modeling and numerical simulation. The development of numerical methods of high order using a precise description of the shapes raises questions on piecewise polynomial elements, on the description of computational domains and of their interfaces, on the construction of good function spaces to approximate physical solutions. All these problems involve geometric considerations and are closely related to the theory of splines and to the geometric methods we are investigating. We plan to apply our work to the development of new interactions between geometric modeling and numerical solvers.

G+Smo (Geometry + Simulation Modules, pronounced "gismo") is an open-source C++ library that brings together mathematical tools for geometric design and numerical simulation. It implements the relatively new paradigm of isogeometric analysis, which suggests the use of a unified framework in the design and analysis pipeline. G+Smo is an object-oriented, cross-platform, templated C++ library that follows the generic programming principle, with a focus on both efficiency and ease of use. The library aims at providing access to high quality, open-source software to the growing isogeometric numerical simulation community and beyond, and at the seamless integration of Computer-Aided Design (CAD) and high order Finite Element Analysis (FEA).

The library and its documentation are available at https://gismo.github.io/

MomentTools is a Julia package for moment problems and polynomial optimization. It provides efficient tools to build convex relaxations of moment sequences and their dual Sum-of-Squares relaxations, to optimize vectors of moment sequences satisfying positivity or mass constraints, and to compute global minimizers of polynomial and moment optimization problems from moment sequences, polar ideals, and the approximate real radical. It also provides tools for computing minimum enclosing ellipsoids of basic semialgebraic sets. It connects to SDP solvers via the JuMP interface.

The package is available at https://gitlab.inria.fr/AlgebraicGeometricModeling/MomentTools.jl and its documentation at http://www-sop.inria.fr/members/Bernard.Mourrain/software/MomentTools/

TensorDec is a Julia package for the decomposition of tensors and polynomial-exponential series. It provides tools to compute rank or Waring decompositions of symmetric tensors (multivariate homogeneous polynomials) and of multilinear tensors, decompositions of multivariate series as sums of polynomial-exponential series, and decompositions of measures as weighted sums of Dirac measures from their moments, as well as tools to perform sparse interpolation.

It allows one to compute low rank tensor approximations of given tensors, using Riemannian optimization techniques with a well-chosen initial point. It also provides tools to compute the catalecticant or Hankel operators associated to tensors and their apolar ideals.
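A micro-example of the catalecticant construction (a hedged sketch for binary cubics only; the package itself handles the general multivariate case):

```python
import numpy as np

# For a binary cubic f = sum_i binom(3,i) a_i x^(3-i) y^i, the generic
# Waring rank is read off the rank of the 2x3 catalecticant (Hankel)
# matrix [[a0, a1, a2], [a1, a2, a3]].
def catalecticant_rank(a):
    cat = np.array([[a[0], a[1], a[2]],
                    [a[1], a[2], a[3]]])
    return np.linalg.matrix_rank(cat)

# f = x^3 + y^3 has coefficients (a0, a1, a2, a3) = (1, 0, 0, 1):
# rank 2, i.e. a sum of two cubes, while f = x^3 alone has rank 1.
print(catalecticant_rank([1.0, 0.0, 0.0, 1.0]))  # 2
print(catalecticant_rank([1.0, 0.0, 0.0, 0.0]))  # 1
```

The kernel of the catalecticant generates the apolar ideal, whose roots give the linear forms appearing in the Waring decomposition.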

The package is accessible at https://gitlab.inria.fr/AlgebraicGeometricModeling/TensorDec.jl and its documentation at http://www-sop.inria.fr/members/Bernard.Mourrain/software/TensorDec/

In 14, we analyse the representation of positive polynomials in terms of Sums of Squares. We provide a quantitative version of Putinar’s Positivstellensatz over a compact basic semialgebraic set S, with a new polynomial bound on the degree of the positivity certificates. This bound involves a Łojasiewicz exponent associated to the description of S. We show that if the gradients of the active constraints are linearly independent on S (Constraint Qualification condition), this Łojasiewicz exponent is equal to 1. We deduce the first general polynomial bound on the convergence rate of the optima in Lasserre’s Sum-of-Squares hierarchy to the global optimum of a polynomial function on S, and the first general bound on the Hausdorff distance between the cone of truncated (probability) measures supported on S and the cone of truncated pseudo-moment sequences, which are positive on the quadratic module of S.
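The certificates in question can be made concrete on a toy univariate example (an illustrative sketch, unrelated to the quantitative bounds of the paper): a polynomial is certified non-negative by exhibiting a positive semidefinite Gram matrix.

```python
import numpy as np
from sympy import symbols, expand, Matrix

x = symbols('x')

# A minimal Sum-of-Squares certificate: p = m^T G m with monomial
# vector m = (1, x) and a positive semidefinite Gram matrix G, so p
# is a sum of squares and hence non-negative everywhere.
p = x**2 + 2*x + 2
G = Matrix([[2, 1],
            [1, 1]])          # Gram matrix candidate
m = Matrix([1, x])            # monomial vector
print(expand((m.T * G * m)[0]))   # reproduces p
eigs = np.linalg.eigvalsh(np.array(G.tolist(), dtype=float))
print(eigs.min() > 0)             # True: G is positive definite
```

Finding such a G in general is a semidefinite feasibility problem; Putinar-type results bound the degree of the certificates needed when positivity only holds on a set S.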

In 21, we address the description of the tropicalization of families of rational varieties under parametrizations with prescribed support, via curve valuations. We recover and extend results by Sturmfels, Tevelev and Yu for generic coefficients, considering rational parametrizations with non-trivial denominator. The advantage of our point of view is that it can be generalized to deal with non-generic parametrizations. We provide a detailed analysis of the degree of the closed image, based on combinatorial conditions on the relative positions of the supports of the polynomials defining the parametrization. We obtain a new formula and finer bounds on the degree, when the supports of the polynomials are different. We also present a new formula and bounds for the order at the origin in case the closed image is a hypersurface.

In 27, given rational univariate polynomials f and g such that g is squarefree, i.e., g and its derivative are relatively prime, we show that f is non-negative on all the real roots of g if and only if f is a sum of squares of rational polynomials modulo g. We complete our study by exhibiting an algorithm that produces a certificate that a polynomial f is non-negative on the real roots of a non-zero polynomial g, when the above assumption is satisfied.

In 28, we provide a new method to certify that a nearby polynomial system has a singular isolated root, and we compute its multiplicity structure. More precisely, given a polynomial system f, we present a Newton iteration on an extended deflated system that locally converges, under regularity conditions, to a small deformation of f such that this deformed system has an exact singular root. The iteration simultaneously converges to the coordinates of the singular root and the coefficients of the so-called inverse system that describes the multiplicity structure at the root. We use an α-theory test to certify the quadratic convergence, and to give bounds on the size of the deformation and on the approximation error. The approach relies on an analysis of the punctual Hilbert scheme, for which we provide a new description. We show in particular that some of its strata can be rationally parametrized, and exploit these parametrizations in the certification. We show in numerical experiments how the approximate inverse system can be computed as a starting point of the Newton iterations, and the fast numerical convergence to the singular root with its multiplicity structure, certified by our criteria.

In 26, we propose a Newton-type method to solve numerically the eigenproblem of several diagonalizable matrices which pairwise commute. A classical result states that these matrices are simultaneously diagonalizable. From a suitable system of equations associated to this problem, we construct a sequence that converges quadratically towards the solution. This construction is not based on the resolution of a linear system, as is the case in the classical Newton method. Moreover, we provide a theoretical analysis of this construction and exhibit a condition to obtain quadratic convergence. We also propose numerical experiments which illustrate the theoretical results.
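The classical fact underlying the problem can be checked numerically in a few lines (a hedged numpy sketch that diagonalizes a generic linear combination; the paper's Newton-type iteration proceeds differently and avoids linear system solves):

```python
import numpy as np

# Diagonalizable matrices that pairwise commute share a common basis of
# eigenvectors: here A and B are built with the same eigenvector basis P,
# and diagonalizing a generic combination A + t*B recovers that basis.
rng = np.random.default_rng(0)
D1 = np.diag([1.0, 2.0, 3.0])
D2 = np.diag([5.0, -1.0, 2.0])
P = rng.normal(size=(3, 3))                 # common eigenvector basis
A = P @ D1 @ np.linalg.inv(P)
B = P @ D2 @ np.linalg.inv(P)
assert np.allclose(A @ B, B @ A)            # the matrices commute

_, V = np.linalg.eig(A + 0.37 * B)          # diagonalize a generic combination
Vinv = np.linalg.inv(V)
offdiag = lambda M: M - np.diag(np.diag(M))
print(np.max(np.abs(offdiag(Vinv @ A @ V))))  # ~0: A diagonalized by V
print(np.max(np.abs(offdiag(Vinv @ B @ V))))  # ~0: B diagonalized by V
```

The combination trick can be numerically fragile when eigenvalues of the combination nearly collide, which is one motivation for iterative refinement schemes with certified quadratic convergence.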

In 16 we study the equations of the elimination ideal associated with n+1 generic multihomogeneous polynomials defined over a product of projective spaces of dimension n. We first prove a duality property and then make this duality explicit by introducing multigraded Sylvester forms. These results provide a partial generalization of similar properties that are known in the setting of homogeneous polynomial systems defined over a single projective space. As an important consequence, we derive a new family of elimination matrices that can be used for solving zero-dimensional multiprojective polynomial systems by means of linear algebra methods.

Further work concerns tri-linear rational maps in dimension three, that is, rational maps defined by tri-linear polynomials.

We present algorithmic, complexity, and implementation results on the problem of sampling points from a spectrahedron, that is, the feasible region of a semidefinite program. Our main tool is geometric random walks. We analyze the arithmetic and bit complexity of certain primitive geometric operations that are based on the algebraic properties of spectrahedra and the polynomial eigenvalue problem. This study leads to the implementation of a broad collection of random walks for sampling from spectrahedra that experimentally show faster mixing times than methods currently employed either in theoretical studies or in applications, including the popular family of Hit-and-Run walks. The different random walks offer a variety of advantages, thus allowing us to efficiently sample from general probability distributions, for example the family of log-concave distributions which arise in numerous applications. We focus on two major applications of independent interest: (i) approximate the volume of a spectrahedron, and (ii) compute the expectation of functions coming from robust optimal control. We exploit efficient linear algebra algorithms and implementations to address the aforementioned computations in very high dimension. In particular, we provide a C++ open source implementation of our methods that scales efficiently, for the first time, up to dimension 200. We illustrate its efficiency on various data sets 19.
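A stripped-down Hit-and-Run sampler on a two-dimensional spectrahedron can be sketched as follows (an illustrative sketch: the matrices A1, A2 are toy data chosen here, and the boundary oracle is plain bisection on the smallest eigenvalue rather than the polynomial eigenvalue machinery of the paper):

```python
import numpy as np

# Toy spectrahedron S = { x in R^2 : I + x1*A1 + x2*A2 >= 0 } (PSD).
# With these A1, A2 the feasible region is the closed unit disk.
rng = np.random.default_rng(1)
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])

def inside(x):
    M = np.eye(2) + x[0] * A1 + x[1] * A2
    return np.linalg.eigvalsh(M)[0] >= 0.0    # smallest eigenvalue test

def boundary(x, d):
    """Largest step t with x + t*d still feasible, by bisection."""
    lo, hi = 0.0, 1.0
    while inside(x + hi * d):
        hi *= 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if inside(x + mid * d) else (lo, mid)
    return lo

x = np.zeros(2)                       # start at the center
samples = []
for _ in range(200):
    d = rng.normal(size=2)
    d /= np.linalg.norm(d)            # uniform random direction
    tmax = boundary(x, d)
    tmin = -boundary(x, -d)
    x = x + rng.uniform(tmin, tmax) * d   # uniform point on the chord
    samples.append(x)
print(all(inside(s) for s in samples))    # True: every sample is feasible
```

Replacing the bisection oracle by the exact intersection of a line with the boundary, computed from a (polynomial) eigenvalue problem, is what makes such walks scale to high dimension.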

Determining the number of solutions of a multi-homogeneous polynomial system is a fundamental problem in algebraic geometry. The multi-homogeneous Bézout (m-Bézout) number bounds from above the number of non-singular solutions of a multi-homogeneous system, but its computation is a #P-hard problem.

By definition, a rigid graph in d-dimensional Euclidean space (or on a sphere) has a finite number of embeddings up to rigid motions for a given set of edge length constraints. These embeddings are related to the real solutions of an algebraic system; naturally, the complex solutions of such systems extend the notion of rigidity to complex space. A major open problem has been to obtain tight upper bounds on the number of complex embeddings for a given number n of vertices, which obviously also bound the number of real embeddings. Moreover, in most known cases, the maximal numbers of complex and real embeddings coincide. For decades, only a trivial exponential bound was known on the number of embeddings; recently, matrix permanent bounds have led to a small improvement. This work improves upon the existing upper bounds on the number of embeddings by exploiting outdegree-constrained orientations on a graphical construction, where the proof iteratively eliminates vertices or vertex paths. New, smaller bounds are obtained for the most important cases of dimensions 2 and 3, and, in general, the exponent basis in the asymptotic behavior with respect to the number of vertices is improved by a constant factor over the permanent-based bound mentioned above. Besides being the first substantial improvement upon a long-standing upper bound, our method is essentially the first general approach relying on combinatorial arguments rather than algebraic root counts 15.

The Canny-Emiris formula (1991) gives the sparse resultant as a ratio between the determinant of a Sylvester-type matrix and a minor of it, by a subdivision algorithm. The most complete proof of the formula was given by D'Andrea et al. (2021) under general conditions on the underlying mixed subdivision. Before that proof, Canny and Pedersen had proposed (1992) a greedy algorithm which provides smaller matrices, in general. The goal of this paper is to give an explicit class of mixed subdivisions for the greedy approach such that the formula holds, and the dimensions of the matrices are reduced compared to the subdivision algorithm. We measure this reduction for the case when the Newton polytopes are zonotopes generated by n line segments (where n is the rank of the underlying lattice), and for the case of multihomogeneous systems. This ISSAC 2022 article comes with a Julia implementation of the treated cases. More recent work includes an approach based on tropical geometry for describing the relevant subdivisions 33.

In 34, we consider nonlocal, nonlinear partial differential equations to model the anisotropic dynamics of complex root sets of random polynomials under differentiation. These equations aim to generalise the recent PDE obtained by Stefan Steinerberger (2019) in the real case, and the PDE obtained by Sean O'Rourke and Stefan Steinerberger (2020) in the radial case, which amounts to working in 1D. These PDEs approximate the dynamics of the complex roots for random polynomials of sufficiently high degree n. The unit of the time t corresponds to n differentiations, so that a single differentiation corresponds to a time increment of 1/n. The general situation in 2D, in particular for complex roots of real polynomials, was not yet addressed. The purpose of this paper is to present a first attempt in that direction. We assume that the roots are distributed according to a regular distribution with a local homogeneity property (defined in the text), and that this property is maintained under differentiation. This allows us to derive a system of two coupled equations to model the motion. Our system could be interesting for other applications. The paper is illustrated with examples computed with the Maple system.

An interpolation problem is defined by a set of linear forms on the (multivariate) polynomial ring and values to be achieved by an interpolant. For Lagrange interpolation, the linear forms consist of evaluations at some nodes, while Hermite interpolation also considers the values of successive derivatives. Both are examples of ideal interpolation, in that the kernels of the linear forms intersect in an ideal. For an ideal interpolation problem with symmetry, we address in 30 the simultaneous computation of a symmetry adapted basis of the least interpolation space and the symmetry adapted H-basis of the ideal. Besides its manifest presence in the output, symmetry is exploited computationally at all stages of the algorithm. For an ideal invariant under a group action and defined by a Groebner basis, the algorithm allows one to obtain a symmetry adapted basis of the quotient and of the generators. We also note how it applies, surprisingly but straightforwardly, to compute fundamental invariants and equivariants of a reflection group.

For a finite group, we presented in 24 three algorithms to compute a generating set of invariants together with generating sets of basic equivariants, i.e., equivariants for the irreducible representations of the group. The main novelty resides in the exploitation of the orthogonal complement of the ideal generated by invariants; its symmetry adapted basis delivers the fundamental equivariants. Fundamental equivariants allow one to assemble symmetry adapted bases of polynomial spaces of higher degrees, and these are essential ingredients in exploiting and preserving symmetry in computations. They appear within algebraic computation and beyond, in physics, chemistry and engineering. Our first construction applies solely to reflection groups and consists in applying symmetry preserving interpolation, as developed by the same authors, along an orbit in general position. The fundamental invariants can be read off the H-basis of the ideal of the orbit, while the fundamental equivariants are obtained from a symmetry adapted basis of an invariant direct complement to this ideal in the polynomial ring. The second algorithm takes primary invariants as input, and the output provides not only the secondary invariants but also free bases for the modules of basic equivariants. These are constructed as the components of a symmetry adapted basis of the orthogonal complement, in the polynomial ring, to the ideal generated by the primary invariants. The third algorithm proceeds degree by degree, determining the fundamental invariants as forming an H-basis of the Hilbert ideal, i.e., the polynomial ideal generated by the invariants of positive degree. The fundamental equivariants are simultaneously computed degree by degree as the components of a symmetry adapted basis of the orthogonal complement of the Hilbert ideal.
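The basic projection onto invariants used throughout can be illustrated with the Reynolds operator for the symmetric group S3 acting by permutation of the variables (a minimal SymPy sketch, not the symmetry adapted H-basis algorithms of the papers):

```python
from itertools import permutations
from sympy import symbols, Rational, expand

x, y, z = symbols('x y z')

# The Reynolds operator averages a polynomial over the group action,
# projecting it onto the ring of invariants.
def reynolds(p, gens):
    perms = list(permutations(gens))
    total = sum(p.subs(dict(zip(gens, g)), simultaneous=True) for g in perms)
    return expand(Rational(1, len(perms)) * total)

print(reynolds(x**2, [x, y, z]))       # (x**2 + y**2 + z**2)/3
inv = reynolds(x**2 * y, [x, y, z])
# the image is invariant under any permutation of the variables
print(expand(inv.subs({x: y, y: x}, simultaneous=True)) == inv)  # True
```

Averaging treats all isotypic components at once; the symmetry adapted bases discussed above refine this by separating the contributions of each irreducible representation.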

In 29, a construction of a globally G1 family of Bézier surfaces, defined by smoothing masks approximating the well-known Catmull-Clark (CC) subdivision surface, is presented. The resulting surface is a collection of Bézier patches, which are bicubic C2 around regular vertices and biquintic G1 around extraordinary vertices (and C1 on their one-ring vertices). Each Bézier point is computed using a locally defined mask around the neighboring mesh vertices. To define G1 conditions, we assign quadratic gluing data around extraordinary vertices that depend solely on their valence, and we use degree five patches to satisfy these G1 constraints. We explore the space of possible solutions, considering several projections on the solution space leading to different explicit formulas for the masks. Certain control points are computed by means of degree elevation of the C0 scheme of Loop and Schaefer, while for others, explicit masks are deduced by providing closed-form solutions of the G1 conditions, expressed in terms of the masks. We come up with four different schemes and conduct curvature analysis on an extensive benchmark in order to assess the quality of the resulting surfaces and identify the ones that lead to the best results, both visually and numerically. We demonstrate that the resulting surfaces converge quadratically to the CC limit when the mesh is subdivided.
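For context, the Bézier patches mentioned above are evaluated by the standard de Casteljau algorithm; the following sketch (generic tensor-product evaluation, not the smoothing masks of the paper) shows the representation being manipulated:

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve with control points ctrl (n x d) at parameter t
    by repeated linear interpolation."""
    pts = ctrl.copy()
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_patch(P, u, v):
    """Tensor-product evaluation of a Bezier patch.
    P has shape (m, n, 3): an m x n grid of 3D control points."""
    rows = np.array([de_casteljau(row, v) for row in P])
    return de_casteljau(rows, u)

# A 4x4 (bicubic) control grid over the unit square with bumpy heights.
knots = np.linspace(0, 1, 4)
P = np.array([[[ui, vj, np.sin(3 * ui) * np.cos(3 * vj)]
               for vj in knots] for ui in knots])

# A Bezier patch interpolates its corner control points.
assert np.allclose(bezier_patch(P, 0, 0), P[0, 0])
assert np.allclose(bezier_patch(P, 1, 1), P[-1, -1])
```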

Randomized dimensionality reduction has been recognized as one of the cornerstones in handling high-dimensional data, originating in various foundational works such as the celebrated Johnson-Lindenstrauss Lemma. More specifically, nearest neighbor-preserving embeddings exist for the L2 (Euclidean) and L1 (Manhattan) metrics, as well as for doubling subsets of L2; doubling dimension is today the most effective way of capturing intrinsic dimensionality and input structure in various applications. These randomized embeddings bound the distortion only for distances between the query point and a point set. Motivated by the foundational character of fast Approximate Nearest Neighbor search in L1, this paper settles an important missing case, namely that of doubling subsets of L1. In particular, we introduce a randomized dimensionality reduction by means of a near neighbor-preserving embedding, which is related to the decision-with-witness problem. The input set is represented by a carefully chosen covering point set; in a second step, the algorithm randomly projects the latter. To obtain the covering point sets, we leverage either approximate r-nets or randomly shifted grids, with different trade-offs between preprocessing time and target dimension. We exploit Cauchy random variables and derive a concentration bound of independent interest. Our algorithms are rather simple and should therefore be useful in practice 22.
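The 1-stable (Cauchy) projections mentioned above can be illustrated in a few lines; this is a generic median-estimator sketch of Cauchy random projections for L1, not the paper's near neighbor-preserving embedding:

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_sketch(points, k, rng):
    """Project points with a Cauchy (1-stable) random matrix.
    By 1-stability, each coordinate of the difference of two sketches is
    distributed as ||x - y||_1 times a standard Cauchy variable."""
    d = points.shape[1]
    P = rng.standard_cauchy(size=(d, k))
    return points @ P

def estimate_l1(sx, sy):
    """Median-of-absolute-values estimator; the median is used because the
    Cauchy distribution has no mean, and the median of |Cauchy| is 1."""
    return np.median(np.abs(sx - sy))

x = rng.random(50)
y = rng.random(50)
true_d = np.abs(x - y).sum()
S = cauchy_sketch(np.vstack([x, y]), 4000, rng)
est = estimate_l1(S[0], S[1])
# est concentrates around true_d as the sketch size k grows
```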

Flux-aligned mesh generation plays an important role in the magnetohydrodynamic (MHD) simulation of Tokamak plasmas. In 31, we present an existence theory for flux-aligned meshes, obtained by adapting generalized Morse theory to the situation of Tokamak simulation. A high-order algorithm is developed to validate the theory by generating flux-aligned quad meshes with the same topologies as the typical flux contours in JOREK for the Tokamak configuration MAST.

The emergence of RGB-D cameras and the development of pose estimation algorithms offer opportunities in biomechanics. However, some challenges still remain when using them for gait analysis, including noise, which leads to misidentification of gait events and to inaccuracy. Therefore, in 23 we present a novel kinematic-geometric model for spatio-temporal gait analysis, based on the ankles' trajectory in the frontal plane and distance-to-camera data (depth). Our approach consists of three main steps: identification of the gait pattern and modeling via parameterized curves, development of a fitting algorithm, and computation of locomotive indices. The proposed fitting algorithm applies to both ankles' depth data simultaneously, by minimizing geometric and biomechanical error functions through numerical optimization. For validation, 15 subjects were asked to walk inside the walkway of the OptoGait, while the OptoGait and an RGB-D camera (Microsoft Azure Kinect) were both recording. Then, the spatio-temporal parameters of both feet were computed using the OptoGait and the proposed model. Validation results show that the proposed model yields good to excellent absolute statistical agreement.
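As a generic illustration of the fitting step (with a hypothetical linear-plus-sinusoid depth model of known cadence, not the paper's parameterized gait curves), a least-squares fit to synthetic ankle-depth data looks as follows:

```python
import numpy as np

# Hypothetical stand-in for parameterized gait curves: model the depth
# signal of one ankle as a linear trend (walking toward the camera) plus a
# periodic component of known frequency, and fit it by linear least squares
# on the design matrix [1, t, sin(w t), cos(w t)].
rng = np.random.default_rng(3)
t = np.linspace(0, 5, 200)            # time samples (seconds)
w = 2 * np.pi * 0.9                   # assumed gait frequency in rad/s
depth = 4.0 - 0.5 * t + 0.05 * np.sin(w * t) + rng.normal(0, 0.005, t.size)

A = np.column_stack([np.ones_like(t), t, np.sin(w * t), np.cos(w * t)])
coef, *_ = np.linalg.lstsq(A, depth, rcond=None)

# The recovered offset and slope are close to the ground truth used above.
assert abs(coef[0] - 4.0) < 0.05
assert abs(coef[1] + 0.5) < 0.02
```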

Molecular dynamics simulation is a powerful technique for studying the structure and dynamics of biomolecules in atomic-level detail by sampling their various conformations in real time. Because of the long timescales that need to be sampled to study biomolecular processes, and the size and complexity of the corresponding data, relevant analyses of important biophysical phenomena are challenging. Clustering and Markov state models (MSMs) are efficient computational techniques that can be used to extract dominant conformational states and to connect them with kinetic information. In this work, we perform molecular dynamics simulations to investigate the free energy landscape of Angiotensin II (AngII) in order to unravel its bioactive conformations, using different clustering techniques and Markov state modeling. AngII is an octapeptide hormone which binds to the AT1 transmembrane receptor and plays a vital role in the regulation of blood pressure, the conservation of total blood volume, and salt homeostasis. To mimic the water–membrane interface as AngII approaches the AT1 receptor, and to compare our findings with available experimental results, the simulations were performed in water as well as in water–ethanol mixtures. Our results show that in the water–ethanol environment, AngII adopts more compact U-shaped (folded) conformations than in water, resembling its structure when bound to the AT1 receptor. For clustering the conformations, we validate the efficiency of an inverted-quantized k-means algorithm, a fast approximate clustering technique for web-scale data (millions of points into thousands or millions of clusters), against standard k-means on data from molecular dynamics trajectories, with reasonable trade-offs between time and accuracy. Finally, we extract MSMs using various clustering techniques for the generation of microstates and macrostates, and for the selection of the macrostate representatives 20.
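A minimal sketch of the MSM pipeline described above, using plain Lloyd k-means on a synthetic two-well trajectory (not the inverted-quantized variant, and not real MD data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1-D "trajectory" hopping between two metastable wells at -1 and +1.
traj = np.concatenate([rng.normal(-1, 0.1, 500), rng.normal(1, 0.1, 500),
                       rng.normal(-1, 0.1, 500)])

def kmeans_1d(x, k, iters=50):
    """Plain Lloyd k-means for 1-D data, initialized at data quantiles."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

k = 2
labels, centers = kmeans_1d(traj, k)

# Row-stochastic MSM transition matrix between microstates at lag time 1:
# count observed transitions, then normalize each row.
counts = np.zeros((k, k))
for a, b in zip(labels[:-1], labels[1:]):
    counts[a, b] += 1
T = counts / counts.sum(axis=1, keepdims=True)
assert np.allclose(T.sum(axis=1), 1.0)  # each row is a probability distribution
```

The metastability of the two wells shows up as strongly diagonal-dominant rows of T.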

Ioannis Emiris coordinates a research contract with the industrial partner ANSYS (Greece), in collaboration with Athena Research Center. MSc students P. Repouskos and T. Pappas, PhD candidate A. Chalkis and postdoc fellow I. Psarros are partially funded.

Electronic design automation (EDA) and the simulation of integrated circuits require robust geometric operations on thousands of electronic elements (capacitors, resistors, coils, etc.) represented by polyhedral objects in 2.5 dimensions, not necessarily convex. A special case concerns axis-aligned objects, but the real challenge is the general case. The project, extended into 2022, focuses on three axes: (1) efficient data structures and prototype implementations for storing the aforementioned polyhedral objects so that nearest neighbor queries are fast in the L-max metric, which is the primary focus of the contract, (2) random sampling of the free space among the objects, (3) data-driven algorithmic design for problems concerning data structures and their construction and initialization.
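The L-max (Chebyshev) metric at the heart of axis (1) is simple to state; below is a brute-force sketch of a nearest neighbor query on point sites (real implementations would use a spatial index over the polyhedral objects):

```python
import numpy as np

def linf_nn(points, q):
    """Brute-force nearest neighbor of query q among the rows of points,
    under the L-max (L-infinity / Chebyshev) metric."""
    d = np.max(np.abs(points - q), axis=1)  # L-infinity distance to each site
    i = int(np.argmin(d))
    return i, d[i]

pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.1, 0.0], [0.5, 0.5, 3.0]])
q = np.array([0.4, 0.4, 0.1])
i, dist = linf_nn(pts, q)
assert i == 0 and np.isclose(dist, 0.4)
```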

The project is expected to continue into 2023, along with a grant from the Greek Ministry of Development.

This is a CIFRE collaboration between Schlumberger Montpellier (A. Azzedine) and Inria Sophia Antipolis (B. Mourrain). The PhD candidate is A. Belhachmi. The objective of the work is the development of a new spline-based, high-quality geomodeler for reconstructing the stratigraphy of geological layers from the adaptive and efficient processing of large terrain information.

The aim of this project is to develop a mathematical framework for the integration of geometric modeling and simulation using spline-based finite elements with a high degree of smoothness. High-order methods are known to provide a robust and efficient methodology to tackle complex challenges in multi-physics simulations, shape optimization, and the analysis of large-scale datasets arising in data-driven engineering and design. However, the analysis and design of high-order methods is a daunting task requiring a concurrent effort from diverse fields such as applied algebraic geometry, approximation theory and splines, topological data analysis, and computational mathematics. Our strategic vision is to create a research team combining uniquely broad research expertise in these areas by establishing a link between the AROMATH team and Swansea University.

Nelly Villamizar and Beihui Yuan visited the Aromath team for a week in the frame of the project.

The research program focuses on the interaction between the computational side of geometric models, and their application-oriented side devoted to the design and analysis of efficient adaptive spline approximation schemes. The unified geometry processing framework promoted by the research project will enable a seamless integration of modern computational methods with flexible modeling and approximation schemes to provide accurate, efficient and robust numerical simulations. Towards this ambitious goal, the work plan addresses important challenges in the area of geometric modeling and processing, creating a connection with suitable machine learning applications.

Carlotta Giannelli and Sofia Imperatore visited the Aromath team for a week and for three months, respectively, in the frame of the project.

(POEMA project on cordis.europa.eu)

Non-linear optimization problems are present in many real-life applications and in scientific areas such as operations research, control engineering, physics, information processing, economics, and biology. However, efficient computational procedures that can provide a guaranteed global optimum are lacking for them. The project will develop new polynomial optimization methods, combining moment relaxation procedures with computational algebraic tools, to address this type of problem. Recent advances in mathematical programming have shown that polynomial optimization problems can be approximated by sequences of Semi-Definite Programming problems. This approach provides a powerful way to compute global solutions of non-linear optimization problems and to guarantee the quality of the computational results. On the other hand, advanced algebraic algorithms to compute all the solutions of polynomial systems, with efficient implementations for exact and approximate solutions, have been developed over the past twenty years.
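As a toy contrast with the moment relaxation approach, the purely algebraic route solves the critical-point equations exactly; a minimal univariate, unconstrained sketch (not the SDP hierarchy itself):

```python
import numpy as np

# Globally minimize the univariate polynomial p(x) = x^4 - 3x^2 + 1 by
# solving its critical-point equation p'(x) = 0 exactly via companion-matrix
# roots. Moment/SDP relaxations generalize this algebraic idea to the
# multivariate constrained setting.
p = np.array([1.0, 0.0, -3.0, 0.0, 1.0])  # coefficients, highest degree first
dp = np.polyder(p)                        # p'(x) = 4x^3 - 6x
crit = np.roots(dp)                       # all complex critical points
real_crit = crit[np.abs(crit.imag) < 1e-9].real
global_min = min(np.polyval(p, x) for x in real_crit)
assert np.isclose(global_min, -1.25)      # attained at x = +/- sqrt(3/2)
```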

The network combines the expertise of active European teams working in these two domains to address important challenges in polynomial optimization and to show the impact of this research on practical applications. The network will train a new squad of 15 young researchers to master high-level mathematics, algorithm design, scientific computation and software development, and to solve optimization problems for real-world applications. It will advance the research on algebraic methods for moment approaches, tackle mixed integer non-linear optimization problems and enhance the efficiency and robustness of moment relaxation methods. Specific applications of these approaches to optimization problems are related to smarter cities challenges, such as water distribution network management, energy flow in power systems, urban traffic management, as well as to oceanography and environmental monitoring and finance.

(GRAPES project on cordis.europa.eu)

GRAPES aims at considerably advancing the state of the art in Mathematics, Computer-Aided Design, and Machine Learning in order to promote game changing approaches for generating, optimising, and learning 3D shapes, along with a multisectoral training for young researchers. Recent advances in the above domains have solved numerous tasks concerning multimedia and 2D data. However, automation of 3D geometry processing and analysis lags severely behind, despite their importance in science, technology and everyday life, and the well-understood underlying mathematical principles. The CAD industry, although well established for more than 20 years, urgently requires advanced methods and tools for addressing new challenges.

The scientific goal of GRAPES is to bridge this gap based on a multidisciplinary consortium composed of leaders in their respective fields. Top-notch research is also instrumental in forming the new generation of European scientists and engineers. Their disciplines span the spectrum from Computational Mathematics, Numerical Analysis, and Algorithm Design, up to Geometric Modelling, Shape Optimisation, and Deep Learning. This allows the 15 PhD candidates to follow either a theoretical or an applied track and to gain knowledge from both research and innovation through a nexus of intersectoral secondments and Network-wide workshops.

Horizontally, our results lead to open-source prototype implementations, software integrated into commercial libraries, as well as open benchmark datasets. These are indispensable for dissemination and training, but also promote innovation and technology transfer. Innovation relies on the active participation of SMEs, either as beneficiaries hosting an ESR or as associate partners hosting secondments. Concrete applications include simulation and fabrication, hydrodynamics and marine design, manufacturing and 3D printing, retrieval and mining, reconstruction and visualisation, urban planning and autonomous driving.

GdR EFI and GDM: Evelyne Hubert is part of the Scientific Committee of the GdR Équations Fonctionnelles et Interactions and participates in the GdR Géométrie Différentielle et Mécanique (gdr-gdm.univ-lr.fr).

Mehran Hatamzadeh participated to European Researchers’ Night (ERN) on Friday 30 September 2022 from 18:00 to 23:00, at Campus Valrose – Nice, France.