The goal of the Mocqua team is to tackle challenges arising from the emergence of new and future computational models. The landscape of computational models has indeed changed drastically in recent years: the complexity of digital systems keeps growing, prompting the introduction of new paradigms, while new problems arise both from this larger scale (tolerance to faulty behaviors, asynchrony) and from present-day constraints (energy limitations). In parallel, new models based on physical considerations have appeared. There is thus a real need to accompany these changes, and we intend to investigate these new models and to address their intrinsic problems with computational and algorithmic methods.

While the bit remains undeniably the building block of computer architecture and software, it is fundamental for the development of new paradigms to investigate computations and programs working with inputs that cannot be reduced to finite strings of 0's and 1's. Our team focuses on a few instances of this phenomenon: programs working with qubits (quantum computing), programs working with functions as inputs (higher-order computation) and programs working with infinite precision (real numbers, infinite sequences, streams, coinductive data, ...).

In the Mocqua team, we address problems that can lie at the interface with physics, biology, or mathematics. We employ tools and methods originating from computer science, which we sometimes enrich through these interdisciplinary interactions.

Mocqua is structured around three models: quantum computing, higher-order computing and computing with infinite precision. The last term is arguably quite broad in scope; here it will mostly stand for dynamical systems. While quantum computing and higher-order computing are decidedly different, it turns out that similar techniques can be used to address their specific problems.

These three models are discussed more precisely in the following section.

The field of quantum computing is currently experiencing rapid growth, both on the experimental physics and hardware side and on the theoretical side, involving physics, mathematics and computer science. At the creation of Mocqua in 2018, only a handful of very primitive programmable quantum computer prototypes existed in the world. Today, by contrast, many academic and industrial groups are trying to build and operate them. These prototypes are still very small scale and very noisy, without any systematic error correction applied. Most of them differ fundamentally in their hardware substrate, and it is quite hard to predict which solution will take the lead when scaling up.

The landscape of quantum programming languages is also constantly evolving. As in compiler design, the foundation of quantum software therefore relies on an intermediate representation that is suitable for manipulation, easy to produce from software, and easy to encode into hardware. The languages of choice for this are the historical and ubiquitous quantum circuit model and the more recent, flexible and powerful ZX-calculus.

Regardless of the model that eventually takes the lead, the hurdles in scaling up quantum computers from a few noisy qubits to many almost noiseless qubits remain similar. Understanding how to use near- and mid-term devices, how to implement quantum error correction and fault-tolerant operations, and how to compile for and program large-scale quantum computers all stand at the heart of the challenge.

While programs often operate on natural numbers or finite structures such as graphs or finite strings, they can also take functions as inputs. In that case, the program is said to perform higher-order computations, or to compute a higher-order functional. Functional programming and object-oriented programming are important paradigms allowing higher-order computations.

While computability and complexity theories are well developed for first-order
programs, difficulties arise when dealing with higher-order
programs. There are many non-equivalent ways of presenting inputs to
such programs: an input function can be presented as a black box,
encoded in an infinite binary sequence, or sometimes by a finite
description. Comparing those representations, both from complexity and computability perspectives, is an important
problem. A particularly useful application of higher-order
computations is to compute with infinite objects that can be
represented by functions or symbolic sequences. The theory is well understood in many cases (to be precise, when these objects live in a topological space with a countable basis), but is not well understood in other interesting cases, for instance when the inputs are second-order functionals (of type (N → N) → N).
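As a minimal illustration of black-box access (a toy Python sketch, not tied to any formalism used by the team; the function names are ours), a higher-order program can only learn about its input function by querying it pointwise:

```python
def bounded_max(f, n):
    """A second-order functional: given f : N -> N presented as a black box,
    return the maximum of f on {0, ..., n-1}.
    The only access to f is through pointwise queries."""
    return max(f(i) for i in range(n))

def counting(f):
    """Wrap f so that the number of black-box queries becomes observable."""
    calls = {"n": 0}
    def g(x):
        calls["n"] += 1
        return f(x)
    return g, calls

g, calls = counting(lambda x: x * x)
result = bounded_max(g, 10)   # 81, after exactly 10 queries
```

Presenting the same input as an infinite binary sequence or as a finite description would give the program strictly different information, which is precisely why these representations are not interchangeable.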

The most natural example of a computation with infinite precision is
the evolution of a dynamical system.
The underlying space might be continuous, such as the Euclidean space R^n, or discrete, such as the space of configurations of a cellular automaton.

From the point of view of computation, the main point of interest is the link between the long-term behavior of a system and its initial configuration. There are two questions here: (a) predicting the behavior, and (b) designing dynamical systems with some prescribed behavior. We mainly examine the first one through the angle of reachability and, more generally, control theory for hybrid systems.

The model of cellular automata is of particular interest. This computational model is relevant for simulating complex global phenomena which emerge from local interactions between simple components. It is widely used in various natural sciences (physics, biology, etc.) and in computer science, as it is an appropriate model to reason about errors that occur in systems with a great number of components.
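As a toy illustration of the model (plain Python, unrelated to the team's own simulation tools), one synchronous step of an elementary cellular automaton, where each cell updates from purely local information:

```python
def eca_step(rule, config):
    """One synchronous step of an elementary (radius-1, binary) cellular
    automaton with periodic boundary conditions.
    `rule` is the Wolfram rule number (0-255)."""
    n = len(config)
    # table[k] is the new state for the neighbourhood encoded by k = 4l + 2c + r
    table = [(rule >> k) & 1 for k in range(8)]
    return [table[(config[(i - 1) % n] << 2) | (config[i] << 1) | config[(i + 1) % n]]
            for i in range(n)]

# Rule 110 applied to a small periodic configuration with a single live cell.
c = [0, 0, 0, 1, 0, 0, 0]
c = eca_step(110, c)   # [0, 0, 1, 1, 0, 0, 0]
```

Even this minimal setting already exhibits the key feature mentioned above: global behavior emerging solely from local interactions.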

The simulation of a physical dynamical system on a computer is made difficult by various aspects. First, the parameters of the dynamical systems are seldom exactly known. Secondly, the simulation is usually not exact: real numbers are usually represented by floating-point numbers, and simulations of cellular automata only simulate the behavior of finite or periodic configurations.
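The inexactness of floating-point simulation is easy to witness; for instance, in Python:

```python
# Floating-point simulation is not exact arithmetic:
# 0.1, 0.2 and 0.3 are all binary approximations of the decimal values.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Exact rational arithmetic avoids this particular issue,
# at a cost in speed and generality.
from fractions import Fraction
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True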

Quantum computing is currently the most promising technology to extend Moore's law, whose end is expected to be reached soon as lithography struggles to further reduce transistor size. Thanks to promising algorithmic and complexity-theoretic results on its computational power, quantum computing will represent a decisive competitive advantage for those who master it.

Quantum computing is also a major security issue, since it would allow breaking today's asymmetric cryptography. Hence, mastering quantum computing is also of the highest importance for national security. Small-scale quantum computers already exist, and recent scientific and technical advances suggest that the construction of the first practical quantum computers will be possible in the coming years.

As a result, the major US players in the IT industry have embarked on a dramatic race, mobilizing huge resources: IBM, Microsoft, Google and Intel have each invested heavily and are devoting significant budgets to attracting and hiring the best scientists on the planet. Several states have launched ambitious national programs, including the United Kingdom, the Netherlands, Canada, China, Australia, Singapore, and more recently the European Union, with the 10-year FET Flagship program in Quantum Engineering. The French government also recently announced its Plan Quantique, a 1.8-billion-euro initiative to develop quantum technologies.

An important pillar of the Plan Quantique concerns the development of large-scale quantum computers. This will require progress all across the quantum stack.

The Mocqua team contributes to the computer science approach to quantum computing, with expertise ranging all across the quantum stack, from quantum software to fault tolerance and quantum error correction. We aim at a better understanding of the power and limitations of the quantum computer, and therefore of its impact on society. We also contribute to easing the development of the quantum computer by filling gaps across the quantum stack, from programming languages to compilation and intermediate representations for fault-tolerant implementations on hardware.

The idea of considering functions as first-class citizens and allowing programs to take functions as inputs has been present since the very beginning of theoretical computer science, through Church's λ-calculus.

One of the central problems is to design programming languages that capture most of, if not all, the possible ways of computing with functions as inputs. There is no Church thesis in higher-order computing and many ways of taking a function as input can be considered: allowing parallel or only sequential computations, querying the input as a black-box or via an interactive dialog, and so on.

The Kleene-Kreisel computable functionals are arguably the broadest class of higher-order continuous functionals that could be computed by a machine. However, their complexity is such that no current programming language can capture all of them. A better understanding of this class of functionals is therefore fundamental to identifying the features that a programming language should implement to make the full power of higher-order computation expressible.

Higher-order computing provides a model for computations involving real numbers and other mathematical objects that cannot be finitely represented. Indeed, such infinite objects can be encoded as functions or streams of bits, which can then be given as inputs to a higher-order program. This method raises many questions, such as the impact of the encoding on the solvability and complexity of problems, and its relationship with the mathematical structures underlying the spaces of objects, such as a topology or a partial order.
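For instance (a standard construction from computable analysis, sketched here in Python; the function names are ours), a real number can be encoded as a function that returns, for each precision n, a rational within 2^-n of it, and arithmetic then becomes higher-order computation on such encodings:

```python
from fractions import Fraction

# A real x is encoded as a function approx(n) returning a rational q
# with |x - q| <= 2**-n (a "fast Cauchy sequence" representation).

def real_add(x, y):
    """Addition as a higher-order program on encoded reals:
    querying each argument at precision n+1 yields precision n for the sum,
    since 2**-(n+1) + 2**-(n+1) = 2**-n."""
    return lambda n: x(n + 1) + y(n + 1)

third = lambda n: Fraction(1, 3)   # exact rationals are valid at every precision
sixth = lambda n: Fraction(1, 6)
half = real_add(third, sixth)
# half(10) is within 2**-10 of 1/2 (here it is exactly 1/2).
```

The choice of encoding matters: a different representation of the same reals can change which operations are computable and at what cost, which is exactly the kind of question raised above.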

We aim at developing various tools to simulate and analyse the dynamics of spatially-extended discrete dynamical systems such as cellular automata. The emphasis of our approach is on the evaluation of the robustness of the models under study, that is, their capacity to resist various perturbations.

In the framework of pure computational questions, various examples of such systems have already been proposed for solving complex problems with a simple bio-inspired approach (e.g. the decentralized gathering problem 51). We are now working on their transposition to various real-world situations, for example when one needs to understand the behaviour of large-scale networks of connected components such as wireless sensor networks. In this direction, a first work has been presented on how to achieve a decentralized diagnosis of networks made of simple interacting components, and the results are rather encouraging 52. Nevertheless, several points remain to be studied in order to complete this model for its integration into a real network.

We have also tackled the evaluation of the robustness of a swarming model proposed by A. Deutsch to mimic the self-organization process observed in various natural systems (birds, fish, bacteria, etc.) 4. We now wish to develop our simulation tools so as to apply them to various biological phenomena in which many agents are involved.

We are also currently extending the range of applications of these techniques to the field of economics. We have started a collaboration with Massimo Amato, a professor of economics at Bocconi University in Milan. Our aim is to propose a decentralized view of a business-to-business market and fully decentralized, agent-oriented models of such markets. Several banks and large businesses have already expressed their interest in such modeling approaches.

The main footprint of the research activities of the team is due to attendance of scientific events. We give preference to participation by videoconference or to travel by train for events in Europe.

Given our research topics, their environmental impact is modest. However, we have cooperated in the recent past with EDF through a CIFRE PhD on quantum algorithms for optimisation problems with applications to fleet electric-vehicle charging.

Quantum software is crucial in the development of the quantum computer. In the Mocqua team, we contribute to the development of the quantum stack with several complementary results, from high level programming languages, to models of quantum computation, through quantum circuits and error correcting codes.

Quantum programming languages are essential for developing new algorithms and for actually using the quantum computer. We describe two contributions in this section, based on the introduction of two languages. Qimaera is dedicated to particular classes of quantum algorithms, including variational quantum algorithms, a promising family of quantum algorithms for short-term applications. The second contribution is the FOQ language which, in the implicit computational complexity framework, captures quantum polynomial time: roughly speaking, a program in its PFOQ fragment is guaranteed by construction to run in polynomial time on a quantum computer.

Variational quantum programming in Idris.
Variational Quantum Algorithms 58, 55, 49 are hybrid
classical-quantum algorithms where classical and quantum computation work in
tandem to solve computational problems. These algorithms create interesting
challenges for the design of suitable programming languages, because they have
to be able to accommodate both classical and quantum programming primitives
simultaneously.
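The hybrid structure can be sketched in a few lines of plain Python (a toy single-qubit example of ours, unrelated to Qimaera's Idris implementation): the "quantum" part evaluates an expectation value for given parameters, and a classical loop searches over the parameters.

```python
import math

def expectation_z(theta):
    """Quantum part (simulated classically here): prepare
    Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1> and return <Z>."""
    a0 = math.cos(theta / 2)
    a1 = math.sin(theta / 2)
    return a0 * a0 - a1 * a1           # equals cos(theta)

def minimize(energy, steps=1000):
    """Classical part: a crude grid search over the variational parameter."""
    return min((energy(2 * math.pi * k / steps), 2 * math.pi * k / steps)
               for k in range(steps))   # (minimal energy, optimal theta)

e, theta = minimize(expectation_z)      # minimum of cos(theta): -1, at theta = pi
```

Real variational algorithms replace the grid search with a proper optimizer and the simulated expectation with runs on quantum hardware, but the classical-quantum interleaving is the same, and it is this interleaving that the language design must accommodate.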

As part of this research project, we develop a set of libraries for the Idris 2 programming language that enable the programmer to implement (variational) quantum algorithms where the full power of the elegant Idris language works in synchrony with quantum programming primitives that we introduce. The two key ingredients of Idris that make this possible are (1) dependent types which allow us to implement unitary (i.e. reversible and controllable) quantum operations; and (2) linearity which allows us to enforce fine-grained control over the execution of quantum operations that ensures compliance with the laws of quantum mechanics. We demonstrate that our libraries, named Qimaera, are suitable for variational quantum programming by providing implementations of the two most prominent variational quantum algorithms – QAOA 49 and VQE 58. To the best of our knowledge, this is the first implementation of these algorithms that has been achieved in a type-safe framework.

The results of this work were presented in 2023 at the ESOP conference 19. The software is open-source, available under the MIT license. These results were obtained during the (bachelor) internship of Liliane-Joy Dandy in our team; she was awarded the "Research Internship Prize" at École Polytechnique for her work.

Characterization of quantum polynomial time.
In 23, we introduce a first-order quantum programming language, named FOQ, whose terminating programs are reversible. We restrict FOQ to a strict and tractable subset of terminating programs with bounded width, called PFOQ for polynomial FOQ, which provides a first programming-language-based characterization of the quantum complexity class FBQP. FBQP is the functional extension of BQP, Bounded-error Quantum Polynomial time, the class of decision problems that can be solved by a polynomial-time quantum Turing machine with probability greater than 2/3.

The quantum circuit model is the most standard model of quantum computing. Quantum circuits are ubiquitous, serving both as a low-level language and, perhaps surprisingly, as a higher-level language used to describe certain quantum algorithms. We have introduced the first complete equational theory for reasoning about quantum circuits; we have also introduced new techniques for quantum circuit optimisation.

Completeness for Quantum Circuits.
With the current advances in quantum technologies and quantum software, it is essential to develop formalisms for transforming and reasoning about quantum circuits. This is crucial for optimizing their size or depth, adapting code to architectural constraints, making it fault-tolerant, or verifying the equivalence of two circuits.

To achieve these goals, quantum circuits can be equipped with equational theories that enable the transformation of circuits using rules that are preferably simple and intuitive. These rules allow the replacement of a circuit fragment with an equivalent circuit. An equational theory is considered complete when, for any pair of circuits representing the same quantum evolution, there is a way to transform one into the other using only the rules of the equational theory.
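As a toy analogue of such rule-based transformation (a Python sketch of ours, vastly simpler than the complete equational theory itself), one can rewrite single-qubit gate sequences with a few sound rules:

```python
# A few sound rewrite rules on single-qubit gate words:
# each rule replaces a circuit fragment by an equivalent one.
RULES = [
    (("H", "H"), ()),            # H is self-inverse
    (("X", "X"), ()),            # X is self-inverse
    (("S", "S"), ("Z",)),        # S squared is Z
    (("H", "X", "H"), ("Z",)),   # conjugation by H exchanges X and Z
]

def rewrite(circuit):
    """Apply the rules until no fragment matches (a fixed point)."""
    circuit = list(circuit)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in RULES:
            for i in range(len(circuit) - len(lhs) + 1):
                if tuple(circuit[i:i + len(lhs)]) == lhs:
                    circuit[i:i + len(lhs)] = list(rhs)
                    changed = True
                    break
            if changed:
                break
    return circuit

print(rewrite(["H", "H", "S", "S", "X"]))   # ['Z', 'X']
```

This rule set is sound but very far from complete: completeness, in the sense defined above, demands that any two circuits implementing the same evolution be connected by the rules, which is precisely what makes the result described below difficult.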

We have introduced the first complete equational theory for quantum circuits 18, solving a problem that had been open for more than 30 years. Indeed, the only fragments previously equipped with a complete equational theory were non-universal and efficiently classically simulatable 59, 43, 44, 53. This result was presented at LICS'23. We obtained completeness for quantum circuits through a non-trivial completion procedure based on the LOv-calculus 47, for which we had recently introduced a complete equational theory. This result was obtained in collaboration with the Quacs Inria team in Saclay and the start-up Quandela.

This year, we have also simplified and generalised this equational theory to quantum circuits involving ancillary qubits and quantum measurements 17; this result will be presented at CSL'24. Finally, we have recently introduced an equational theory for vanilla quantum circuits (i.e. the standard model of quantum circuits without ancillary qubits) that we have proved to be minimal, i.e. each rule is necessary for completeness 36. One of the main and original contributions of this paper is the demonstration that a rule acting on an unbounded number of qubits is necessary for completeness.

Hadamard count minimization in Clifford+T circuits.
When compiling quantum circuits over the Clifford+T gate set, one usually tries to minimize the number of T gates, because T gates are more costly to implement fault-tolerantly.
This minimization problem is well understood when the circuit contains no Hadamard gates.
To handle the general case, one can optimize around Hadamard gates or use gadgetisation techniques.
In both cases, minimizing the number of Hadamard gates beforehand can help reduce the overall number of T gates.
We have introduced an algorithm that essentially optimally minimizes the number of Hadamard gates in a Clifford+T circuit.

This work was done in collaboration with Simon Martiel (Atos); it has been accepted in ACM Transactions on Quantum Computing 40 and was presented at the prestigious TQC'23 conference 24.

Quantum error correcting codes are crucial in the quest for a fault-tolerant large-scale quantum computer. We have contributed to the development of surface codes and introduced a new family of quantum codes, the quantum rotor codes; finally, and perhaps more surprisingly, we have shown that minimalist error correcting codes can be used in near-term experiments to demonstrate a separation between classical and quantum computers.

Fault-tolerant Clifford gates on toric codes.
Quantum error correcting codes with good encoding rates promise to reduce the cost of fault-tolerant quantum computation, by reducing the number of physical qubits needed for a target level of protection and a given number of logical qubits.
The savings come at the price of more complex procedures for implementing gates on the logical qubits within the code.
One technique, for homological codes built from 2D manifolds, is to apply a Dehn twist to a handle of the surface, which implements a CNOT gate between the logical qubits of the handle.
For his PhD, Alexandre Guernut studied the generalization of Dehn twists to other 2D codes, namely color codes.
We discovered that, in practice, for small system sizes the color-code Dehn twists spread the noise too much.
We therefore switched gears: unfolding the color code into two copies of the toric code, we set out to design and evaluate the performance of a generating set of the Clifford group on toric codes with constant-depth implementations.
The numerical results are promising and a paper is in preparation.

Quantum rotor codes.
The work 41 is a collaboration in progress with Barbara Terhal (Delft University) and Alessandro Ciani (Forschungszentrum Jülich).
Quantum systems in real laboratories do not always consist of a set of qubits; they often have a richer Hilbert-space structure.
For instance, they are often infinite-dimensional, as are quantum oscillators or quantum rotors.
Exploiting the structure and knowledge of the full physical system when designing quantum error correcting codes is a promising way of reducing the overhead of error correction.
Quantum error correction with quantum oscillators has been well studied, either for encoding qubits within quantum oscillators (the field of bosonic codes) or for encoding oscillators in several oscillators, for which several no-go results have been proven.
Quantum rotors can be thought of as intermediate systems between qubits (finite) and quantum oscillators (infinite and continuous).
We are studying quantum error correcting codes for quantum rotors encoding either finite or infinite logical information.
The codes we defined for encoding finite systems have some similarity with so-called protected superconducting qubits, such as the 0-π qubit.

Robust Sparse IQP Sampling in Constant depth.
In collaboration with the Inria teams Quantic and Cosmiq, we studied how to make the sparse instantaneous quantum polynomial-time (IQP) sampling problem robust to noise and constant-depth, therefore making it more accessible to near-term experiments 26.
Sparse IQP sampling problems are demonstrably hard to sample from with a classical computer but straightforward for a quantum computer, and hence are candidates for a demonstration of quantum advantage.
The difficulty is that a convincing advantage requires scaling up the quantum circuit, which cannot be done in the presence of noise.
This work shows how to use the minimal amount of quantum error correction necessary to scale up while correcting errors.
This work has been accepted for a plenary talk at the QIP2024 conference.

There are various models of quantum computation. Whereas unitary evolutions are at the heart of the standard model of quantum computing, measurement-based quantum computing (MBQC) is an alternative model introduced more than 20 years ago, which consists in performing quantum measurements on a large entangled initial resource called a graph state. We have contributed to the development of MBQC, and also to a recent graph-state-based notion, that of pairable states.

Measurement-based quantum computing on qudit-graph states.
In measurement-based quantum computing (MBQC), computation is carried out by a sequence of measurements and corrections on an entangled state. Flow, and related concepts that we have contributed to developing over the past decades 56, 57, are powerful techniques for characterising the dependence of the corrections on previous measurement outcomes. We have introduced flow-based methods for MBQC with qudit graph states when the local dimension of the entangled state is an odd prime. Our main result is a proof that such a flow is a necessary and sufficient condition for a strong form of determinism. This work was done in collaboration with Aleks Kissinger (Oxford), Damian Markham (LIP6), Clément Meignant (LIP6) and Robert Booth, who did a joint PhD in the Mocqua team and LIP6 and who is now a postdoc in Edinburgh. This paper has been published in Journal of Physics A: Mathematical and Theoretical 13.

Small pairable states.
A pairable graph state is a resource state from which entangled pairs can be established between several pairs of its qubits, using only local operations and classical communication. We have investigated constructions of such states using a small number of qubits.

Our main results on Axis 2 concern computability over topological spaces, as well as a characterisation of polynomial time in object-oriented programming languages. Note that the results on the characterisation of quantum polynomial time, presented in the previous section, lie at the intersection of Axes 1 and 2 and could equally have been presented here.

We have studied the solvability of the following problem: given a set in a fixed space, produce a point in that set. This problem was studied in 46 for various spaces. It was proved that for a restricted class of spaces, this problem is solvable precisely when the space is in some sense complete. We have shown that this characterization fails for general spaces but holds under a natural modification of the problem, by allowing more sets as inputs. The results are published in 16.

We investigated the relationship between the computability of sets and their topological properties. More precisely, we are studying which sets have “computable type”, which is the property that any algorithm that semidecides the set can be converted into an algorithm that fully decides the set. We have shown that this property is equivalent to the fact that the set satisfies an invariant of low complexity, and such that no proper subset satisfies this invariant. For instance, the circle is a minimal set satisfying the invariant “having a hole”. This result unifies many previous results, implies new results and opens up a new research direction: finding invariants of low complexity. The result is published in 11.

In the study of computable type, one also wants to show that a set does not have computable type, by producing a copy of the set which is semicomputable but not computable. This task can be difficult, especially if the set is not explicitly given but defined in an implicit way. We have obtained a very general result which characterizes, using topological arguments, when two notions of computability are not equivalent. This gives a much simpler way of separating two computability notions, reducing it to a comparison between two topologies. The result is published in 10.

To a topological space is associated the problem of producing a presentation of this space. We have investigated the possible degrees of difficulty of this problem. This study requires the understanding of the algorithmic complexity of detecting various topological invariants, notably the ones coming from homology and which count the holes in the space. The results are published in 15.

In 22, we introduce a new noninterference policy to capture the class of functions computable in polynomial time in an object-oriented programming language. This policy makes a clear separation between the standard noninterference techniques for the control flow and the layering properties required to ensure that each "security" level preserves polynomial-time soundness, and it is thus very powerful as regards the class of programs it can capture. This new characterization is a proper extension of existing tractable characterizations of polynomial time based on safe recursion.

Regarding Axis 3 of the team, we have contributions on probabilistic cellular automata, on probabilistic and enumerative combinatorics, and on the analysis of graphs in the field of economics. The latter were developed in the context of the exploratory research action Murene.

The decentralised diagnosis problem consists in detecting a certain amount of defects in a distributed network. We continued our work with Régine Marchand (IECL, Université de Lorraine) and Irène Marcovici (now at Université de Rouen) 20. We tackled this problem in the context of two-dimensional cellular automata with three states: given a threshold of defects to detect, we want the alert state to coexist with the neutral state when the density of defects is below the threshold, and we want the alert state to invade the whole grid when this density is above the threshold. We presented two probabilistic rules to solve this problem. The first one is isotropic and is studied with numerical simulations. The second one is defined on Toom's neighbourhood and is examined from an analytical point of view. These solutions constitute a first step towards a broader study of the decentralised diagnosis problem on more general networks.
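Toom's neighbourhood can be made concrete with a small sketch (a deterministic toy in plain Python, without the three states or the probabilistic perturbations studied in the paper): each cell takes the majority of itself and its north and east neighbours.

```python
def toom_step(grid):
    """One step of Toom's North-East-Center majority rule on a 2D binary
    grid with periodic boundaries: each cell becomes the majority value
    among itself, its northern neighbour and its eastern neighbour."""
    n, m = len(grid), len(grid[0])
    return [[1 if grid[i][j] + grid[(i - 1) % n][j] + grid[i][(j + 1) % m] >= 2 else 0
             for j in range(m)]
            for i in range(n)]

# A single isolated defect is erased in one step.
g = [[0, 0, 0],
     [0, 1, 0],
     [0, 0, 0]]
g = toom_step(g)   # back to the all-zero grid
```

The asymmetry of this neighbourhood (north and east only) is what makes Toom-type rules robust to noise and analytically tractable, in contrast with isotropic rules that are typically studied by simulation.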

Baxter Tree-like tableaux.
The article 12 is the result of a collaboration of M. Bouvel, started at LaBRI in 2012 (when she was a member there), with J.-C. Aval, A. Boussicault, O. Guibert and M. Silimbani.
It was completed and submitted in 2021, then largely revised, and finally published in 2023.

Tree-like tableaux are objects in bijection with alternative or permutation tableaux, which are classical objects in combinatorics. Our work defines and studies a new subclass of tree-like tableaux enumerated by the famous Baxter numbers. We exhibit simple bijective links between these objects and three other combinatorial classes: (packed or mosaic) floorplans, twisted Baxter permutations and triples of non-intersecting lattice paths. From several (and unrelated) works, these last objects are already known to be enumerated by Baxter numbers, and our main contribution is to provide a unifying approach to bijections between Baxter objects, where Baxter tree-like tableaux play the key role.
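The Baxter numbers themselves have a classical closed-form summation, which provides a quick sanity check (standard formula, independent of the bijections above):

```python
from math import comb

def baxter(n):
    """Baxter numbers via the classical summation formula:
    B(n) = 2 / (n (n+1)^2) * sum_{k=1}^{n} C(n+1, k-1) C(n+1, k) C(n+1, k+1)."""
    if n == 0:
        return 1
    s = sum(comb(n + 1, k - 1) * comb(n + 1, k) * comb(n + 1, k + 1)
            for k in range(1, n + 1))
    return 2 * s // (n * (n + 1) ** 2)   # the division is always exact

print([baxter(n) for n in range(6)])   # [1, 1, 2, 6, 22, 92]
```

Each of the combinatorial classes mentioned above (Baxter tree-like tableaux, floorplans, twisted Baxter permutations, triples of non-intersecting lattice paths) is counted by this same sequence.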

Enumeration of pattern-avoiding inversion sequences.
The results presented here have been obtained by Benjamin Testart during the first year of his PhD thesis.
They are concerned with inversion sequences, which are integer sequences (e_1, ..., e_n) satisfying 0 ≤ e_i < i for every i.

In his 2022 preprint, Benjamin solves the final case by making use of a decomposition of inversion sequences avoiding the pattern 010 according to original parameters. The method is then expanded to solve the enumeration of inversion sequences avoiding several pairs of patterns containing 010, in most cases also solving the enumeration of some family of constrained words as an auxiliary problem.
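A brute-force enumeration makes the objects concrete (a sketch of ours; we use the standard conventions that an inversion sequence of size n is (e_1, ..., e_n) with 0 ≤ e_i < i, and that an occurrence of 010 is a subsequence e_i, e_j, e_k with i < j < k, e_i = e_k and e_j > e_i):

```python
from itertools import product

def inversion_sequences(n):
    """All inversion sequences of size n: 0 <= e_i < i (1-indexed)."""
    return product(*(range(i) for i in range(1, n + 1)))

def avoids_010(e):
    """No indices i < j < k with e[i] == e[k] and e[j] > e[i]."""
    n = len(e)
    return not any(e[i] == e[k] and e[j] > e[i]
                   for i in range(n)
                   for j in range(i + 1, n)
                   for k in range(j + 1, n))

counts = [sum(avoids_010(e) for e in inversion_sequences(n)) for n in range(1, 5)]
```

Such brute force only reaches small sizes, which is why the actual enumeration results rely on structural decompositions rather than exhaustive search.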

In a paper currently being written, Benjamin has in addition obtained all missing enumerations of inversion sequences avoiding a pair of patterns of size 3 (17 such families in total). To achieve this, he made clever use of the (established) method of generating trees in a few cases, and otherwise used several decompositions of inversion sequences that he introduced. This work will soon be submitted to a journal in combinatorics.

Convergence law for 231-avoiding permutations.
In earlier work with Michael Albert (University of Otago) and Valentin Féray (IECL, Université de Lorraine), we compared the expressibility of two logics on permutations, called TOOB (theory of one bijection, seeing permutations as a bijection) and TOTO (theory of two orders, seeing permutations as a pair of total orders).
In the paper 9 (recently accepted for publication in DMTCS, in the special issue following the conference Permutation Patterns 2023),
we focus on TOTO, and study a different problem.
Namely, we investigate the existence of 0/1 or convergence laws when the domain is restricted to families of permutations avoiding patterns, similarly to a classical approach in the study of graphs.

Specifically, we prove that the class of 231-avoiding permutations satisfies a convergence law (but not a 0/1 law): for any first-order sentence of TOTO, the probability that it holds in a uniform random 231-avoiding permutation of size n converges as n tends to infinity, though not necessarily to 0 or 1.

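For intuition about the class under study: 231-avoiding permutations are counted by the Catalan numbers, a standard fact (independent of the logic results above) that a brute-force check confirms for small sizes:

```python
from itertools import permutations

def avoids_231(p):
    """True iff p has no occurrence of 231, i.e. no indices i < j < k
    with p[k] < p[i] < p[j]."""
    n = len(p)
    return not any(p[k] < p[i] < p[j]
                   for i in range(n)
                   for j in range(i + 1, n)
                   for k in range(j + 1, n))

counts = [sum(avoids_231(p) for p in permutations(range(n))) for n in range(1, 6)]
print(counts)   # [1, 2, 5, 14, 42], the Catalan numbers
```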
Scaling limits of families of intersection graphs.
This is the latest result of a collaboration of M. Bouvel with Frédérique Bassino (LIPN, Université Paris Nord), Valentin Féray (IECL, Université de Lorraine), Lucas Gerin (CMAP, École Polytechnique) and Adeline Pierrot (LISN, Université Paris-Sud) which started over ten years ago.
The purpose of this collaboration is to establish limit shape results for combinatorial structures (like permutations or graphs) constrained by the avoidance of substructures, often using methods from analytic combinatorics (which is original in the landscape of the research on this topic).

In an almost finished paper, we obtain the scaling limits of random graphs drawn uniformly from three families of intersection graphs: permutation graphs, circle graphs, and unit interval graphs. The first two families typically generate dense graphs (with a quadratic number of edges), and in these cases we prove almost-sure convergence to an explicit deterministic graphon. Uniform unit interval graphs are not dense, and we prove convergence in the Gromov–Prokhorov sense after normalization of the distances.

Decomposition of order types, with applications to counting problems.
This topic, at the interface of combinatorics and discrete geometry, has emerged as the result of a collaboration between several teams in Nancy, and involves M. Bouvel, V. Feray (IECL, Université de Lorraine), X. Goaoc (Gamble) and F. Koechlin (post-doc with X. Goaoc until September 2023).
In this ongoing work, we introduce and study an original notion of decomposition of planar point sets (or rather of their chirotopes, also called order types) as trees decorated by smaller chirotopes.
This decomposition is based on the concept of mutually avoiding sets, and adapts in some sense the modular decomposition of graphs (or its cousin the substitution decomposition of permutations) in the world of chirotopes.
We prove that the associated tree always exists and is unique up to appropriate constraints. We also show how to efficiently compute the number of triangulations of a chirotope, starting from its tree and the (weighted) numbers of triangulations of its parts.

In collaboration with Massimo Amato and Lucio Gobbi (Bocconi University and University of Trento), we developed some economic and operational foundations of a new method of financing companies' financial obligations 50. In this new banking business model, a network funder sets an optimal combination of netting and financing. Given a network of companies and their respective invoices, and under the condition of a full settlement of the invoices, we applied a multilateral netting algorithm to the network, conceived as an oriented multigraph. Our problem, which is NP-complete, was to find a set of invoices maximising the amount of debt reduced, given a quantity of loanable funds.
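A minimal sketch of the netting idea (toy Python of ours, not the algorithm of the paper): on a network of invoices, multilateral netting lets each firm settle only its net position instead of every gross invoice.

```python
from collections import defaultdict

def net_positions(invoices):
    """invoices: list of (debtor, creditor, amount) edges of the
    oriented multigraph. Returns each firm's net balance
    (positive means net creditor)."""
    balance = defaultdict(int)
    for debtor, creditor, amount in invoices:
        balance[debtor] -= amount
        balance[creditor] += amount
    return dict(balance)

invoices = [("A", "B", 100), ("B", "C", 100), ("C", "A", 80)]
gross = sum(a for _, _, a in invoices)            # 280 to settle without netting
bal = net_positions(invoices)                     # {'A': -20, 'B': 0, 'C': 20}
settled = sum(v for v in bal.values() if v > 0)   # 20 after full multilateral netting
```

The hard, NP-complete part of the actual problem is not computing net positions but choosing which subset of invoices to net and finance under a budget of loanable funds.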

To consolidate our approach to the problem, we analysed data from a large economic network provided by an Italian invoice operator, over a one-year span 21. We compared different methods to detect structures or communities that could be helpful for debt-netting algorithms. Indeed, the structure of such networks is not currently well known. We give hints on how to sort and identify types of business-to-business invoice graphs and, in particular, how to identify relevant communities in such networks.

The team is currently supervising one CIFRE PhD in collaboration with an industrial partner (Atos/Eviden), started in 2021. Vivien Vandaele is working on "Optimisation du calcul quantique tolérant aux fautes par le ZX-Calculus" (optimisation of fault-tolerant quantum computing using the ZX-calculus) under the supervision of Simon Perdrix and Christophe Vuillot from the team, initially together with Simon Martiel (formerly of Atos, now at IBM), who was replaced by Cyril Allouche (Atos).