The goal of the Mocqua team is to tackle challenges arising from the emergence of new or future computational models. The landscape of computational models has indeed changed drastically in the last few years: the complexity of digital systems keeps growing, leading to the introduction of new paradigms, while new problems arise both from this larger scale (tolerance to faulty behaviors, asynchrony) and from present-day constraints (energy limitations). In parallel, new models based on physical considerations have appeared. There is thus a real need to accompany these changes, and we intend to investigate these new models and to address their intrinsic problems with computational and algorithmic methods.

While the bit remains undeniably the building block of computer architecture and software, it is fundamental for the development of new paradigms to investigate computations and programs working with inputs that cannot be reduced to finite strings of 0's and 1's. Our team will focus on a few instances of this phenomenon: programs working with qubits (quantum computing), programs working with functions as inputs (higher-order computation) and programs working in infinite precision (real numbers, infinite sequences, streams, coinductive data, ...).

While it can be argued that the quantum revolution has already happened in cryptography 47 or in optics 46, quantum computers are far from becoming a common commodity. This is despite the fact that many teams worldwide, both academic and industrial, are nowadays focusing much of their efforts on building quantum computers. Indeed the challenges ahead to reach practical, sizable and accurate quantum computers remain tremendous.

Today's quantum devices are small in scale and still very noisy, and are therefore called NISQ devices (for Noisy Intermediate Scale Quantum). They differ fundamentally in their hardware substrate, and it is quite hard to predict which solution will eventually be adopted. While some effort is underway to understand and exploit the potential of these NISQ devices, scaling up and implementing fault-tolerant and quantum error correction schemes will eventually become crucial for the potential of quantum computing to be realized.

As these devices are developed and scale up, the importance of software to operate them and of programming languages to program them will grow. The practical applications in sight will require tighter interactions within the quantum stack, which extends from hardware to algorithms. Given its recent emergence, the landscape of quantum programming languages is constantly evolving. As in compiler design, the foundation of quantum software therefore relies on an intermediate representation that is suitable for manipulation, easy to produce from software and easily encodable into hardware. A graphical language now firmly established for this role is the ZX-calculus.

Many research questions are now to be addressed. For instance, what are the correct formalism and approaches for quantum programming languages? How to develop practical, and useful algorithms? What role can graphical intermediate representations such as ZX-calculus play in interaction between compilers and hardware with different characteristics, like lattice surgery fault-tolerance or quantum optics? Which quantum error correcting codes and fault-tolerant schemes can make large scale quantum computing reachable?

While programs often operate on natural numbers or finite structures such as graphs or finite strings, they can also take functions as inputs. In that case, the program is said to perform higher-order computations, or to compute a higher-order functional. Functional programming or object-oriented programming are important paradigms allowing higher-order computations.
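As a minimal illustration (a Python sketch of ours, not taken from the team's work), a second-order program interacts with its input function only by querying it, whatever representation that function has behind the scenes:

```python
# A second-order functional: it takes a first-order function f as input
# and can only interact with f by querying it at chosen points
# ("black-box" access).
def definite_sum(f, n):
    """Compute f(0) + f(1) + ... + f(n-1), querying f as a black box."""
    return sum(f(i) for i in range(n))

# Two different representations of the same input function give the
# same result:
as_lambda = lambda i: i * i
as_table = [0, 1, 4, 9, 16].__getitem__

print(definite_sum(as_lambda, 5))  # 30
print(definite_sum(as_table, 5))   # 30
```

The functional cannot tell a closed-form function from a lookup table, which is exactly the representation question raised below.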

While the theory of computation is well developed for first-order programs, difficulties arise when dealing with higher-order programs. There are many non-equivalent ways of presenting inputs to such programs: an input function can be presented as a black-box, encoded in an infinite binary sequence, or sometimes by a finite description. Comparing those representations is an important problem. A particularly useful application of higher-order computations is to compute with infinite objects that can be represented by functions or symbolic sequences. The theory works well in many cases (to be precise, when these objects live in a topological space with a countable basis 59), but is not well understood in other interesting cases. For instance, when the inputs are the second-order functionals (of type

The most natural example of a computation with infinite precision is
the simulation of a dynamical system.
The underlying space might be

From the point of view of computation, the main point of interest is the link between the long-term behavior of a system and its initial configuration. There are two questions here: (a) predict the behavior, (b) design dynamical systems with some prescribed behavior. The first will be mainly examined through the angle of reachability and more generally control theory for hybrid systems.

The model of cellular automata will be of particular interest. This computational model is relevant for simulating complex global phenomena which emerge from simple interactions between simple components. It is widely used in various natural sciences (physics, biology, etc.) and in computer science, as it is an appropriate model to reason about errors that occur in systems with a great number of components.

The simulation of a physical dynamical system on a computer is made difficult by various aspects. First, the parameters of the dynamical systems are seldom exactly known. Secondly, the simulation is usually not exact: real numbers are usually represented by floating-point numbers, and simulations of cellular automata only simulate the behavior of finite or periodic configurations. For some chaotic systems, this means that the simulation can be completely irrelevant.
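The loss of relevance for chaotic systems can be made concrete on the logistic map x ↦ 4x(1−x), a standard chaotic system (an illustrative sketch of ours, not one of the systems studied by the team): a double-precision orbit diverges from a high-precision reference within a few dozen steps.

```python
from decimal import Decimal, getcontext

# The logistic map x -> 4x(1-x) is chaotic: rounding errors roughly
# double at each step, so a floating-point simulation quickly diverges
# from the true orbit. We compare double precision against a
# 100-digit computation.
getcontext().prec = 100

x_float = 0.1                       # double precision
x_exact = Decimal(1) / Decimal(10)  # 100 significant digits

max_diff = 0.0
for step in range(60):
    x_float = 4.0 * x_float * (1.0 - x_float)
    x_exact = 4 * x_exact * (1 - x_exact)
    max_diff = max(max_diff, abs(x_float - float(x_exact)))

# After a few dozen iterations the two trajectories are completely
# decorrelated, even though both started from "0.1".
print(max_diff)
```

The 100-digit trajectory itself loses about one bit of accuracy per step, so it remains a valid reference only over this short horizon.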

Quantum Computing is currently the most promising technology to extend Moore's law, whose end is expected soon as engraving technologies struggle to further reduce transistor size. Thanks to promising algorithmic and complexity-theoretic results on its computational power, quantum computing will represent a decisive competitive advantage for those who control it.

Quantum Computing is also a major security issue, since it allows us to break today's asymmetric cryptography. Hence, mastering quantum computing is also of the highest importance for national security concerns. Small-scale quantum computers already exist and recent scientific and technical advances suggest that the construction of the first practical quantum computers will be possible in the coming years.

As a result, the major US players in the IT industry have embarked on a dramatic race, mobilizing huge resources: IBM, Microsoft, Google and Intel have each invested huge sums of money, and are devoting significant budgets to attract and hire the best scientists on the planet. Some states have launched ambitious national programs, including the United Kingdom, the Netherlands, Canada, China, Australia, Singapore, and very recently Europe, with the 10-year FET Flagship program in Quantum Engineering. The French government also recently announced its Plan Quantique – a 1.8 billion euros initiative to develop quantum technologies.

An important pillar of the Plan Quantique concerns the development of Large Scale Quantum computers. This will come with progress all across the quantum stack.

The Mocqua team contributes to the computer science approach to quantum computing, with expertise ranging across the quantum stack from quantum software to fault-tolerance and quantum error correction. We aim at a better understanding of the power and limitations of the quantum computer, and therefore of its impact on society. We also contribute to easing the development of quantum computers by filling gaps across the quantum stack, from programming languages to compilation and intermediate representations for fault-tolerant implementations on hardware.

The idea of considering functions as first-class citizens and allowing programs to take functions as inputs has emerged since the very beginning of theoretical computer science through Church's

One of the central problems is to design programming languages that capture most of, if not all, the possible ways of computing with functions as inputs. There is no Church thesis in higher-order computing and many ways of taking a function as input can be considered: allowing parallel or only sequential computations, querying the input as a black-box or via an interactive dialog, and so on.

The Kleene-Kreisel computable functionals are arguably the broadest class of higher-order continuous functionals that could be computed by a machine. However their complexity is such that no current programming language can capture all of them. Better understanding this class of functions is therefore fundamental in order to identify the features that a programming language should implement to make the full power of higher-order computation expressible in such a language.

We aim at developing various tools to simulate and analyse the dynamics of spatially-extended discrete dynamical systems such as cellular automata. The emphasis of our approach is on the evaluation of the robustness of the models under study, that is, their capacity to resist various perturbations.

In the framework of pure computational questions, various examples of such systems have already been proposed for solving complex problems with a simple bio-inspired approach (e.g. the decentralized gathering problem 52). We are now working on their transposition to various real-world situations, for example when one needs to understand the behaviour of large-scale networks of connected components such as wireless sensor networks. In this direction of research, a first work has been presented on how to achieve a decentralized diagnosis of networks made of simple interacting components, and the results are rather encouraging 54. Nevertheless, various points remain to be studied in order to complete this model for its integration in a real network.

We have also tackled the evaluation of the robustness of a swarming model proposed by A. Deutsch to mimic the self-organization process observed in various natural systems (birds, fish, bacteria, etc.) 2. We now wish to develop our simulation tools and apply them to various biological phenomena where many agents are involved.

We are also currently extending the range of applications of these techniques to the field of economics. We have started a collaboration with Massimo Amato, a professor of economics at Bocconi University in Milan. Our aim is to propose a decentralized view of a business-to-business market and totally decentralized, agent-oriented models of such markets. Various banks and large businesses have already expressed their interest in such modelling approaches.

The main environmental footprint of the research activities of the team is due to attendance of scientific events. We give preference to participation by videoconference or to travel by train for events in Europe.

Given our topics of research, their environmental impact is modest. However, we have cooperated in the last three years with EDF through a CIFRE PhD on quantum algorithms for optimisation problems with applications in electric vehicle fleet charging.

The paper “Complete and tractable machine-independent characterizations of second-order polytime” (FoSSaCS 2022) 24 received the EATCS best paper award at ETAPS 2022. It provides the first tractable (i.e., decidable in polynomial time), implicit (requiring no external knowledge of the complexity), and complete (capturing all functions) characterization of the class of Basic Feasible Functionals (BFF), which, by the Cook–Kapron theorem, is considered the mainstream notion of polynomial time for second-order programs. This solves a problem that had been open for 20 years.

A space with a structure has the fixed-point property if every structure-preserving function from the space to itself has a fixed-point. We investigate which spaces have the fixed-point property for computable multivalued functions. We prove in particular that among the countably-based topological spaces, they are exactly the omega-continuous domains, an important class coming from semantics of programming languages. We apply our results to identify the complexity of indexing a basis of a topological space. Our article has been accepted in the Annals of Pure and Applied Logic 17.

We investigated the relationship between the computability of sets and their topological properties. More precisely, we studied which sets have “computable type”, which is the property that any algorithm that semidecides the set can be converted into an algorithm that fully decides the set. We have obtained a topological characterization of this property for a large class of sets, namely the finite simplicial complexes. Our results give an easy way of determining whether a given set has computable type. In particular we have settled the question for famous sets from topology: the dunce hat and Bing's house. An article is currently under submission.

We have also obtained a partial characterization of this property using homology theory. Essentially, this property is related to the fact that every point belongs to a cycle, a notion provided by homology, extending the notion of cycle in a graph. We obtained a complete characterization of the finite simplicial complexes having computable type, the result was presented in ICALP 20.

We designed new solutions to the cellular automata parity problem 41. The model is an interacting particle system, that is, a particular kind of stochastic cellular automaton where cells are updated in pairs, randomly chosen at each time step. We analysed the convergence properties of two rules and showed that they possess the required properties to classify the parity of the initial configurations. We present a formal analysis of the classification time, as well as numerical simulations, establishing that the classification time scales quadratically with the number of cells.
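For intuition, here is a deliberately simplified interacting particle system that classifies parity (a toy annihilating-walk rule of ours, not the two rules analysed in the paper): particles hop and annihilate in pairs, so the parity of the particle count is invariant and determines the final configuration.

```python
import random

# Toy parity classifier: a randomly chosen particle (a 1-cell) hops to
# the right; when two particles meet they annihilate. Both moves preserve
# the parity of the particle count, so the system converges to 0 particles
# (even initial parity) or to exactly 1 particle (odd initial parity).
def classify_parity(config, rng, max_steps=100_000):
    cells = list(config)
    n = len(cells)
    for _ in range(max_steps):
        if sum(cells) <= 1:
            break
        i = rng.randrange(n)
        if cells[i] == 1:
            j = (i + 1) % n
            if cells[j] == 1:
                cells[i] = cells[j] = 0   # pair annihilation
            else:
                cells[i], cells[j] = 0, 1  # hop to the right
    return sum(cells)

print(classify_parity([1, 0, 1, 1, 0, 0, 1, 0], random.Random(0)))  # 4 ones -> 0
```

The quadratic classification time mentioned above matches the intuition here: the gap between two particles performs a random walk, whose hitting time scales with the square of the ring size.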

Our work on a bio-inspired mechanism for data clustering has finally been published 28. Our method uses amoebae which evolve according to cellular automata rules: they contain the data to be processed and emit reaction-diffusion waves at random times. The waves transmit the information across the lattice and cause other amoebae to react, by being attracted or repulsed. The local reactions produce small homogeneous groups which progressively merge and realize the clustering at a larger scale.

We continued our work with Régine Marchand (IECL, Université de Lorraine) and Irène Marcovici (IECL, Université de Lorraine) on the problem of detecting failures in a distributed network 53. The question that drives our research is to find out how we can detect that the failure rate has exceeded a given threshold without any central authority when some components progressively break down.

The special issue on complex systems following the SOLSTICE'19 conference was published. Entitled “Discrete Models of Complex Systems: recent trends and analytical challenges”, it contains 13 papers devoted to exploring different topics related to the theme of complex systems and non-classical models of computation (cellular automata, boolean networks, etc.) 15.

One of the main open questions in the field of symbolic dynamics is to
decide whether the conjugacy problem is computable. This is a problem on
shifts of finite type that can be reformulated in terms of matrices.
Two matrices

In 55 we show how to use results from category theory, in particular the concept of PROPs as used for instance in the ZX-calculus, to provide a purely categorical version of this question. This is more than a technical exercise: we show in the same paper that, rather surprisingly, many invariants of strong shift equivalence can be recovered from preexisting categories with bialgebras that are well known in mathematics. Whether the approach could lead to the computability of strong shift equivalence remains to be seen.

This paper was presented this year at the ACT conference (without proceedings).

In collaboration with Massimo Amato and Lucio Gobbi (Bocconi University and University of Trento), we developed some economic and operational foundations of a new method of financing companies’ financial obligations 51. In this new banking business model, a network funder sets an optimal combination of netting and financing. Given a network of companies and their respective invoices, and under the condition of a full settlement of the invoices, we applied a multilateral netting algorithm to the network, conceived as an oriented multigraph. Our problem, which is NP-complete, was to find a set of invoices which maximises the amount of debt reduced for a given quantity of loanable funds. After designing a policy which finds a trade-off for the funding, we tested our methods on an empirical dataset from an electronic invoicing operator covering more than 60,000 companies. The first results show that this method is economically significant and feasible; these methods are now being investigated in more detail in the MURENE project, within the PhD of Joannès Guichon.
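A minimal sketch of the netting idea (hypothetical data, and only the easy part of the problem; the paper's invoice-selection problem under a funding constraint is NP-complete): computing net positions on a small invoice multigraph shows how gross debt collapses to a much smaller net settlement.

```python
from collections import defaultdict

# Multilateral netting on a network of invoices, viewed as a directed
# multigraph (debtor -> creditor with an amount). After netting, each
# company only settles its net position, which can drastically reduce
# the gross debt to be financed.
def net_positions(invoices):
    balance = defaultdict(float)
    for debtor, creditor, amount in invoices:
        balance[debtor] -= amount
        balance[creditor] += amount
    return dict(balance)

invoices = [
    ("A", "B", 100.0),  # A owes B 100
    ("B", "C", 80.0),
    ("C", "A", 90.0),
]
gross = sum(a for _, _, a in invoices)
net = sum(max(b, 0.0) for b in net_positions(invoices).values())
print(gross, net)  # 270.0 20.0: gross debt vs net settlement
```

The hard optimisation studied in the paper starts where this sketch stops: choosing *which* invoices to include so that the debt reduction is maximal for a fixed amount of loanable funds.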

In collaboration with Pierre Guillon, Guillaume Theyssier and Kévin Perrot (respectively I2M, I2M, LIS, all Aix-Marseille University), we obtained very general complexity lower bounds on automata networks, a type of dynamical system that resembles generalized cellular automata. Any problem expressible in Monadic Second-Order logic that satisfies a mild technical condition is either trivial, NP-hard, or coNP-hard. In particular, it is not possible to design a polynomial-time-solvable question on automata networks (even deliberately). Recently we showed that, should the mild technical condition be negated, either the MSO problem is still computationally hard, or standard complexity assumptions do not hold.

In 16, we have provided a characterization of the class of Basic Feasible Functionals (BFF), the second-order counterpart of the class FP of first-order functions computable in polynomial time. Several characterizations had been suggested in the literature, but none of them presents a programming language with a type system guaranteeing this complexity bound. Our characterization of BFF is based on an imperative language with oracle calls, using a tier-based type system whose inference is decidable. BFF is exactly the class of second-order functionals in the simply-typed lambda-closure of the functions computed by typed and terminating programs.

The result of 16 has been improved in 24, where it is shown that:

To sum up, 24 provides the first implicit, tractable, and complete characterization, and hence solves a problem that had been open for 20 years. This paper received the EATCS best paper award at ETAPS 2022.

In 21, we introduce a new kind of expectation transformer for a mixed classical-quantum programming language. This semantic approach relies on a new notion of cost structure, which we introduce, and which can be seen as a specialization of the Kegelspitzen of Keimel and Plotkin. The resulting weakest-precondition analysis is both sound and adequate with respect to the operational semantics of the language. Using the induced expectation transformer, formal expected-cost and expected-value analyses of classical-quantum programs can be performed. The usefulness of this technique is illustrated by computing the expected cost of several well-known quantum algorithms and protocols, such as coin tossing, repeat until success, entangled state preparation, and quantum walks.
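As a purely classical illustration of one of the analysed patterns (our sketch, not the paper's semantics), the expected cost of a repeat-until-success loop with per-round success probability p is 1/p, which a Monte-Carlo estimate confirms:

```python
import random

# "Repeat until success" is a standard pattern in quantum protocols: a
# probabilistic subroutine is retried until it succeeds. If each round
# succeeds with probability p and costs 1, the expected total cost is
# 1/p -- the kind of quantity an expectation transformer computes
# symbolically rather than by simulation.
def expected_cost_mc(p, trials, rng):
    total = 0
    for _ in range(trials):
        rounds = 1
        while rng.random() >= p:
            rounds += 1
        total += rounds
    return total / trials

rng = random.Random(42)
p = 0.25
print(expected_cost_mc(p, 100_000, rng), "analytic:", 1 / p)
```

The weakest-precondition approach of the paper derives the analytic value directly from the program text, without running it.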

This is a joint project with Luca Ferrari (University of Florence, Italy) and his PhD student Lapo Cioni (who visited Loria for a month in the fall of 2021).

The bubblesort operator

In the article 14, we study preimages of permutations under the bubblesort operator
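As a sketch of the setting (the operator definition below is the usual single-pass bubble sort; the code is ours, not from the paper), preimages can be explored by brute force. For instance, a permutation has a preimage only if its last entry is its maximum, since one pass always bubbles the maximum to the end:

```python
from itertools import permutations

# The bubblesort operator B performs a single left-to-right pass of
# bubble sort: each element is swapped with its right neighbour when
# they are out of order.
def B(perm):
    p = list(perm)
    for i in range(len(p) - 1):
        if p[i] > p[i + 1]:
            p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def preimages(perm):
    """Brute-force the set B^{-1}(perm)."""
    return [q for q in permutations(range(1, len(perm) + 1)) if B(q) == perm]

print(B((3, 1, 2)))          # one pass sorts it: (1, 2, 3)
print(preimages((1, 2, 3)))  # 4 preimages
print(preimages((3, 1, 2)))  # []: the maximum is not last, so no preimage
```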

This paragraph concerns three papers published in 2022, which already appeared as preprints in RA of previous years. They are joint papers with Frédérique Bassino (LIPN, Université Paris Nord), Michael Drmota (TU Wien, Austria), Valentin Féray (IECL, Université de Lorraine), Lucas Gerin (CMAP, École Polytechnique), Mickaël Maazoun (Oxford's Department of Statistics, UK) and Adeline Pierrot (LISN, Université Paris-Sud).

Cographs form one of the simplest hereditary graph classes.
They can be defined as the graphs not containing the path with 4 vertices as an induced subgraph.
A characterization of cographs which is essential for us is the following:
cographs are the graphs which can be encoded by trees in a specific family, called cotrees.
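The P4-free characterization gives a direct, if inefficient, membership test (a brute-force Python sketch of ours): a graph is a cograph iff no four vertices induce a path.

```python
from itertools import combinations

# Cographs are exactly the P4-free graphs. Brute-force check: four
# vertices induce a P4 iff their induced subgraph has exactly 3 edges
# with degree sequence 1, 1, 2, 2 (which forces a path on 4 vertices).
def is_cograph(n, edges):
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    for quad in combinations(range(n), 4):
        sub = [(u, v) for u, v in combinations(quad, 2) if adj[u][v]]
        degrees = sorted(sum(u in e for e in sub) for u in quad)
        if len(sub) == 3 and degrees == [1, 1, 2, 2]:
            return False  # these four vertices induce a P4
    return True

print(is_cograph(4, [(0, 1), (1, 2), (2, 3)]))  # the path P4 itself -> False
print(is_cograph(4, [(0, 1), (2, 3)]))          # disjoint union of edges -> True
```

Real cograph recognition is linear-time via the cotree; this quartic check only makes the defining characterization concrete.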

In 12 and 11, we are interested in the asymptotic properties of uniform random cographs when the number of vertices tends to infinity. To achieve these results, we use a methodology inspired by our previous work on permutations, the latest paper of this series being 13.

In the first paper 12, we prove convergence towards a Brownian limiting object in the space of graphons, which we call the Brownian cographon.
We then show that the degree of a uniform random vertex in a uniform cograph with

In the second paper 11, we focus on independent sets (or equivalently, cliques) in uniform random cographs.
First, we prove that, with high probability as

In 11, we also prove permutation analogues of all the results mentioned above.

This is a joint project with Frédérique Bassino (LIPN, Université Paris Nord), Valentin Féray (IECL, Université de Lorraine), Lucas Gerin (CMAP, École Polytechnique) and Adeline Pierrot (LISN, Université Paris-Sud). With this group of authors, we have an established collaboration, started approximately ten years ago and still active today (see also the previous paragraph). The common theme of the research we do together is to establish limit shape results for constrained combinatorial structures, using methods from analytic combinatorics (which is original in the landscape of the research on this topic). More precisely, we study families of permutations or graphs defined by the avoidance of substructures, and we answer (formally) the (informally phrased) question: “if we choose uniformly at random an object of large size in the considered family, what does it look like?”.

In the paper 35, we consider the three following families of graphs: distance-hereditary graphs, 2-connected distance-hereditary graphs and 3-leaf power graphs (the latter two being subclasses of the first one). We prove that the scaling limit of uniform random graphs in each of these families, with respect to the Gromov–Prokhorov topology, is the famous Brownian Continuum Random Tree of Aldous. Although such results are quite expected for families of graphs that are “almost trees” (like the ones we consider), our approach to establish this result is original, relying on the split decomposition of graphs (from the graph algorithms and graph theory literature) and on analytic combinatorics.

In earlier work with Michael Albert and Valentin Féray, we compared the expressibility of two logics on permutations, called TOOB (theory of one bijection, seeing permutations as a bijection) and TOTO (theory of two orders, seeing permutations as a pair of total orders).
In the recent paper 32,
we focus on TOTO, and study a different problem.
Namely, we investigate the existence of 0/1 or convergence laws when the domain is restricted to families of permutations avoiding patterns, similarly to a classical approach in the study of graphs.

Specifically, we prove that the class of 231-avoiding permutations satisfies a convergence law (but not a 0/1 law). In other words, for any first-order sentence
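For concreteness (a sketch of ours, not from the paper): a permutation avoids 231 when no three positions carry values in the relative order 2-3-1, and it is classical that the 231-avoiding permutations are counted by the Catalan numbers.

```python
from itertools import combinations, permutations

# A permutation p contains the pattern 231 if some positions i < j < k
# carry values with p[k] < p[i] < p[j]. The 231-avoiding permutations
# of size n are counted by the Catalan numbers: 1, 2, 5, 14, 42, ...
def avoids_231(p):
    return not any(p[k] < p[i] < p[j]
                   for i, j, k in combinations(range(len(p)), 3))

counts = [sum(avoids_231(p) for p in permutations(range(1, n + 1)))
          for n in range(1, 6)]
print(counts)  # [1, 2, 5, 14, 42]
```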

The results presented here have been obtained by Benjamin Testart during his Master internship and the first few weeks of his PhD thesis.
They are concerned with inversion sequences, which are integer sequences

The paper 43 solves the final case by making use of a decomposition of inversion sequences avoiding the pattern 010. This decomposition must take into account the maximal value and the number of distinct values occurring in the inversion sequence. The method is then extended to solve the enumeration of inversion sequences avoiding the pairs of patterns
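A brute-force sketch of the objects involved (our code; we use the standard word-pattern convention, where an occurrence of 010 means positions i < j < k with e_i = e_k < e_j):

```python
from itertools import product
from math import factorial

# An inversion sequence of length n is a sequence (e_1, ..., e_n) of
# integers with 0 <= e_i < i; there are n! of them, one per permutation.
# It contains the word pattern 010 if some positions i < j < k satisfy
# e_i = e_k < e_j (same equalities and relative order as 0, 1, 0).
def contains_010(e):
    n = len(e)
    return any(e[i] == e[k] < e[j]
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))

for n in range(1, 7):
    seqs = list(product(*(range(i) for i in range(1, n + 1))))
    assert len(seqs) == factorial(n)  # sanity check: n! inversion sequences
    print(n, sum(not contains_010(e) for e in seqs))
```

Such exhaustive counts are how enumeration conjectures are checked before a decomposition like the one in the paper proves them.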

This year, we have contributed in several ways to the foundations and the applications of the ZX-calculus, a diagrammatic language for quantum computing.

One of the main contributions concerns the development of a framework for adding two ZX-diagrams. In the ZX-calculus, each diagram represents a matrix. The formalism makes it easy to compose diagrams sequentially or in parallel (which corresponds mathematically to matrix multiplication and tensor product), but there is no easy way to obtain, from two diagrams representing matrices

Addition might not appear at first to be an important operation when manipulating diagrams that represent quantum evolutions, but it arises naturally when trying to automatically differentiate diagrams: indeed, the differentiation of a product

This framework was developed by E. Jeandel, S. Perdrix and M. Veshchezerova and was presented this year at FSCD 2022 30.

Coherent control of quantum computation can be used to improve some quantum protocols and algorithms.

This year, we have extended the PBS-calculus 48, a graphical language for coherent control inspired by quantum optics. This language can be used to describe coherently controlled quantum computation. Our main contribution is the development of a procedure for optimising the resources required by a PBS-diagram to solve a particular task: we introduce an efficient procedure to minimise the number of oracle queries of a given diagram. We also consider the problem of minimising both the number of oracle queries and the size of the diagram. We show that this optimisation problem is NP-hard in general, but we introduce an efficient heuristic that produces optimal diagrams when at most one query to each oracle is required.

This result has been presented at MFCS'22 29.

We have started this year a fruitful collaboration with the start-up Quandela, which is working on the development of an optics-based quantum computer. The objective was the development of a graphical but formal language for describing quantum computing based on quantum optical devices. In collaboration with the Inria Quacs team, we have introduced the LOv-calculus, a graphical language for reasoning about linear optical quantum circuits with so-called vacuum-state auxiliary inputs. We equipped the language with an equational theory that we proved to be sound and complete: two LOv-circuits represent the same quantum process if and only if one can be transformed into the other with the rules of the LOv-calculus. We also give a confluent and terminating rewrite system that rewrites any polarisation-preserving LOv-circuit into a unique triangular normal form, inspired by the universal decomposition of Reck et al. (1994) for linear optical quantum circuits.

This paper has been presented at MFCS'22 22. Building on this result, we have more recently introduced the first complete equational theory for quantum circuits, solving a 30-year open problem for the most commonly used model in quantum computing. This last result is currently available as a preprint 39.

The Scalable ZX-calculus is a compact graphical language used to reason about linear maps between quantum states. These diagrams have multiple applications, but they frequently have to be constructed on a case-by-case basis. In this work we present a method to encode quantum programs implemented in a fragment of the linear dependently typed Proto-Quipper-D language as families of SZX-diagrams. We define a subset of translatable Proto-Quipper-D programs and show that our procedure is able to encode non-trivial algorithms as diagrams that grow linearly in the size of the program.

This paper has been presented at QPL'2022 27.

Variational Quantum Algorithms 58, 57, 50 are hybrid classical-quantum algorithms where classical and quantum computation work in tandem to solve computational problems. These algorithms create interesting challenges for the design of suitable programming languages, because they have to be able to accommodate both classical and quantum programming primitives simultaneously.

As part of this research project, we develop a set of libraries for the Idris 2 programming language that enable the programmer to implement (variational) quantum algorithms where the full power of the elegant Idris language works in synchrony with quantum programming primitives that we introduce. The two key ingredients of Idris that make this possible are (1) dependent types which allow us to implement unitary (i.e. reversible and controllable) quantum operations; and (2) linearity which allows us to enforce fine-grained control over the execution of quantum operations that ensures compliance with the laws of quantum mechanics. We demonstrate that our libraries, named Qimaera, are suitable for variational quantum programming by providing implementations of the two most prominent variational quantum algorithms – QAOA 50 and VQE 58. To the best of our knowledge, this is the first implementation of these algorithms that has been achieved in a type-safe framework.

The results of this work are described in a preprint 40, which has been accepted at the 2023 ESOP conference. The software is open-source, available under the MIT license. These results were obtained during the bachelor internship of Liliane-Joy Dandy in our team, and she was awarded the "Research Internship Prize" at École Polytechnique for her work.

The results of the previous research project show how variational quantum programming can be approached within an existing programming language. As part of this research project, we adopt a more theoretical and formal viewpoint: we consider a type system for variational quantum programming and show how to design a suitable mathematical semantics for it.

In particular, we consider a programming language that can manipulate both classical and quantum information. Our language is type-safe and designed for variational quantum programming. The classical subsystem of the language is the Probabilistic FixPoint Calculus (PFPC), which is a lambda calculus with mixed-variance recursive types, term recursion and probabilistic choice. The quantum subsystem is a first-order linear type system that can manipulate quantum information. The two subsystems are related by mixed classical/quantum terms that specify how classical probabilistic effects are induced by quantum measurements, and conversely, how classical (probabilistic) programs can influence the quantum dynamics. We also describe a sound and computationally adequate denotational semantics for the language. Classical probabilistic effects are interpreted using a recently-described commutative probabilistic monad on DCPO. Quantum effects and resources are interpreted in a category of von Neumann algebras that we show is enriched over (continuous) domains. This strong sense of enrichment allows us to develop novel semantic methods that we use to interpret the relationship between the quantum and classical probabilistic effects. By doing so we provide the first denotational analysis that relates models of classical probabilistic programming to models of quantum programming.

The results of this project are published in POPL'22 25.

This work is in collaboration with Simon Martiel (Atos). An important subset of quantum circuits is the group of Clifford circuits. It is efficiently simulable classically, yet only needs the addition of a single non-Clifford gate, usually the single-qubit T gate, to become universal. It is very common to target the universal gate set Clifford+T for a fault-tolerant implementation of quantum circuits. Being able to compile and optimize Clifford circuits, in particular their two-qubit gate count or depth, is a crucial task in the development of the quantum computing stack. In 38 we propose a framework for the compilation of Clifford isometries. This framework encompasses previous normal forms and permits the derivation of efficient synthesis algorithms. We demonstrate its use by benchmarking several synthesis algorithms derived from it, showing improvements in two-qubit count and depth for circuits taken from quantum chemistry experiments.

This work is in collaboration with Simon Martiel (Atos). When compiling quantum circuits to the Clifford+T gate set, one usually tries to minimize the number of T gates, because T gates are typically more costly to implement fault-tolerantly. This minimization problem is well understood when the circuit contains no Hadamard gates. To handle the general case, one can either optimize around Hadamard gates or use gadgetization techniques. In both cases, minimizing the number of Hadamard gates in the circuit beforehand can further reduce the overall number of T gates. Led by Vivien Vandaele for his PhD, we are currently writing a paper tackling this problem.
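A toy example of T-count reduction, for illustration only: since T² = S, two consecutive T gates on the same qubit merge into a single Clifford S gate. Real optimizers go much further, commuting phase gates past CNOTs and gadgetizing around Hadamard gates, but the peephole pass below shows the basic idea.

```python
# Toy peephole pass (illustrative, not the method of the paper):
# merge adjacent T gates on the same qubit into one S gate.
def merge_adjacent_t(circuit):
    out = []
    for gate, q in circuit:
        if gate == "T" and out and out[-1] == ("T", q):
            out[-1] = ("S", q)  # T followed by T on qubit q equals S
        else:
            out.append((gate, q))
    return out

circ = [("T", 0), ("T", 0), ("H", 0), ("T", 0)]
optimized = merge_adjacent_t(circ)  # T-count drops from 3 to 1
```

Note how the Hadamard gate blocks further merging here: the final T cannot be combined with the earlier ones, which is precisely why reducing the Hadamard count beforehand helps.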

Our work on quantum pin codes 19 has been published in IEEE Transactions on Information Theory. In this work we defined a purely combinatorial structure, quantum pin codes, generalizing quantum color codes, which are defined from certain tessellations of manifolds and which feature interesting properties for the implementation of transversal phase gates. This work was done in collaboration with Nikolas Breuckmann (University of Bristol).

Our work on quantum XYZ-product codes 18 has been published in Quantum. In this work we define a quantum code construction and analyze some of its properties, such as its dimension and bounds on its minimal distance. This work was done in collaboration with Anthony Leverrier (Inria Paris) and Simon Apers (IRIF, CNRS).
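A basic sanity check behind any stabilizer code construction (generic, not the XYZ product itself) is that the chosen generators pairwise commute. Writing Pauli strings as (x|z) bit vectors, two Paulis commute exactly when their symplectic inner product vanishes mod 2:

```python
# Generic commutation test for Pauli strings in (x|z) form.
# P1 and P2 commute iff x1.z2 + x2.z1 = 0 (mod 2).
def commute(p1, p2):
    (x1, z1), (x2, z2) = p1, p2
    return (sum(a * b for a, b in zip(x1, z2))
            + sum(a * b for a, b in zip(x2, z1))) % 2 == 0

XX = ([1, 1], [0, 0])  # X on qubits 0 and 1
ZZ = ([0, 0], [1, 1])  # Z on qubits 0 and 1
commute(XX, ZZ)  # True: a valid pair of stabilizer generators
```

Dimension and minimal distance then follow from the rank and the minimum weight of the logical operators, computations that are substantially harder than this check.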

Quantum error correcting codes with good encoding rates promise to reduce the cost of fault-tolerant quantum computation by reducing the number of physical qubits needed for a target level of protection and a target number of logical qubits. The savings come at the price of more complex procedures for implementing gates on the logical qubits within the code. One technique for homological codes from 2D manifolds is to apply a Dehn twist to a handle of the surface, which implements a CNOT between the logical qubits of the handle. Led by Alexandre Guernut for his PhD, we are studying the generalization of Dehn twists to other 2D codes, namely color codes.

This work is a collaboration in progress with Barbara Terhal (Delft University) and Alessandro Ciani (Forschungszentrum Jülich).
Quantum systems in real laboratories do not always consist of a set of qubits but are often systems with a richer Hilbert-space structure.
For instance, they are often infinite dimensional, as are quantum oscillators or quantum rotors.
Exploiting the structure and knowledge of the full physical system when designing quantum error correcting codes is a promising way of reducing the overhead of error correction.
Quantum error correction with quantum oscillators has been well studied, both for encoding qubits within quantum oscillators (the field of bosonic codes) and for encoding oscillators in several oscillators, for which several no-go results have been proven.
Quantum rotors can be thought of as intermediate systems between qubits (finite) and quantum oscillators (infinite and continuous).
We are studying quantum error correcting codes for quantum rotors, encoding either finite or infinite logical information.
The codes we defined for encoding finite systems have some similarity with so-called protected superconducting qubits such as the 0-

The team is supervising two CIFRE PhDs in collaboration with industry partners.

One is a partnership with EDF: Margarita Veshchezerova worked on “Quantum Computing for Combinatorial Optimisation” under the supervision of Emmanuel Jeandel and Simon Perdrix from the team, and Marc Porcheron at EDF. Margarita Veshchezerova defended her PhD thesis on December 16th 2022.

One is with ATOS: Vivien Vandaele is working on “Optimisation du calcul quantique tolérant aux fautes par le ZX-Calculus” under the supervision of Simon Perdrix and Christophe Vuillot from the team, and Simon Martiel at ATOS.

HPCQS project on cordis.europa.eu

In the last few years we have seen unprecedented advances in quantum information technologies. Quantum key distribution systems are already available commercially. In the near future we will see waves of new quantum devices, offering unparalleled benefits for security, communication, computation and sensing. A key question for the success of these technologies is their verification and validation.

Quantum technologies encounter an acute verification and validation problem. On the one hand, since classical computations cannot scale up to the computational power of quantum mechanics, verifying the correctness of a quantum-mediated computation is challenging. On the other hand, the underlying quantum structure resists classical certification analysis. Members of our consortium have shown, as a proof of principle, that one can bootstrap a small quantum device to test a larger one. The aim of VanQuTe is to adapt our generic techniques to the specific applications and constraints of the photonic systems being developed within our consortium. Our ultimate goal is to develop techniques to unambiguously verify the presence of a quantum advantage in near-future quantum technologies.

Despite its relatively small size, the French quantum computing research community has always been at the forefront of international research. It thus provides the foundations for an ambitious strategy aiming at:

Algorithmic aspects are key in the field of quantum computing, which is witnessing a tremendous intensification of research efforts worldwide. Indeed, in addition to determining the design and construction of hardware quantum processors, algorithms also constitute the interface through which users will solve their practical use cases, leading to potential economic gain. Building on the outstanding French position, our project aims at developing algorithmic techniques for both noisy quantum machines (NISQ) and fault-tolerant ones, so as to facilitate their practical implementation. To this end, a first Work Package (WP) is dedicated to algorithmic techniques, while a second one focuses on computational models and languages, so as to facilitate the programming of quantum machines and to optimize the code execution steps. Lastly, the third WP aims at developing simulation techniques for quantum computers.

Nazim Fatès is the head of the IFIP working group 1.5 on Cellular Automata and Discrete Complex Systems.

Romain Péchoux has served as an expert and referee for the HORIZON MSCA call.

Nazim Fatès was a member of a working group, led by the INRS Institute, which conducted a foresight exercise devoted to the theme “L’intelligence artificielle au service de la santé et sécurité au travail, enjeux et perspectives à l’horizon 2035” (artificial intelligence in the service of occupational health and safety: stakes and perspectives for 2035). The working group, composed of about twenty people from academia and industry, presented its work on November 18, 2022, at the Maison de la RATP in Paris, with 90 in-person participants and more than 400 participants online. A 226-page report was produced and a short synthesis document was distributed.