The goal of the Mocqua team is to tackle challenges arising from the emergence of new or future computational models. The landscape of computational models has indeed changed drastically in the last few years: the complexity of digital systems is continually growing, which leads to the introduction of new paradigms, while new problems arise from this larger scale (tolerance to faulty behaviors, asynchrony) and from the constraints of the present world (energy limitations). In parallel, new models based on physical considerations have appeared. There is thus a real need to support these changes, and we intend to investigate these new models and try to solve their intrinsic problems by computational and algorithmic methods.

While the bit remains undeniably the building block of computer architecture and software, it is fundamental for the development of new paradigms to investigate computations and programs working with inputs that cannot be reduced to finite strings of 0's and 1's. Our team will focus on a few instances of this phenomenon: programs working with qubits (quantum computing), programs working with functions as inputs (higher-order computation) and programs working in infinite precision (real numbers, infinite sequences, streams, coinductive data, ...).

While it can be argued that the quantum revolution has already happened in cryptography 47 or in optics 46, quantum computers are far from becoming a common commodity, with only a few teams around the world working on a practical implementation. In fact, one of the most commonly known examples of a quantum computer, the D-Wave 2X System, defies the usual definition of a computer: it is not general-purpose, and can only solve (approximately) a very specific hardwired problem.

Most current prototypes of quantum computers differ fundamentally in their hardware substrate, and it is quite hard to predict which solution will eventually be adopted. The landscape of quantum programming languages is also constantly evolving. As in compiler design, the foundation of quantum software therefore relies on an intermediate representation that is suitable for manipulation, easy to produce from software and easy to encode into hardware. The language of choice for this role is the ZX-calculus.

Regardless of the actual model that will be accepted by the industry, it is becoming clear that some of the hurdles in scaling up quantum computers from a few qubits to very large arrays will remain. For example, current implementations of quantum computers working on hundreds of qubits are indeed not able to form and maintain all possible forms of entanglement between qubits. This raises two questions. First, does this restriction reduce the computational power, and hence the supposed advantage of quantum computers over classical ones? Second, how can we ensure that a quantum program designed for an ideal quantum computer will run correctly on practical implementations? We will investigate these questions, in particular by providing static analysis methods for evaluating a priori how much entanglement a quantum program needs.

While programs often operate on natural numbers or finite structures such as graphs or finite strings, they can also take functions as input. In that case, the program is said to perform higher-order computations, or to compute a higher-order functional. Functional programming or object-oriented programming are important paradigms allowing higher-order computations.

While the theory of computation is well developed for first-order programs, difficulties arise when dealing with higher-order programs. There are many non-equivalent ways of presenting inputs to such programs: an input function can be presented as a black box, encoded as an infinite binary sequence, or sometimes given by a finite description. Comparing those representations is an important problem. A particularly useful application of higher-order computations is computing with infinite objects that can be represented by functions or symbolic sequences. The theory works well in many cases (to be precise, when these objects live in a topological space with a countable basis 56), but is not well understood in other interesting cases, for instance when the inputs are second-order functionals.
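To make the black-box presentation concrete, here is a minimal Python sketch (our own illustration, not code from the cited works) of a higher-order program: it receives a function as input and can only learn about it through finitely many queries.

```python
def bounded_max(f, n):
    """A higher-order functional: takes a function f as input,
    treated as a black box, and returns max of f(0), ..., f(n).
    Only finitely many queries to f are ever made."""
    return max(f(i) for i in range(n + 1))

# The same program works for any representation of f that supports calls:
m = bounded_max(lambda i: (i * 7) % 5, 10)
```

Whether such a black-box presentation is equivalent to, say, receiving a finite description of the input function is precisely the kind of comparison of representations discussed above.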

The most natural example of a computation with infinite precision is the simulation of a dynamical system, whose underlying space might be, for instance, the real line or a space of symbolic sequences.

From the point of view of computation, the main point of interest is the link between the long-term behavior of a system and its initial configuration. There are two questions here: (a) predicting the behavior, and (b) designing dynamical systems with a prescribed behavior. The first will be examined mainly through the angle of reachability and, more generally, control theory for hybrid systems.

The model of cellular automata will be of particular interest. This computational model is relevant for simulating complex global phenomena which emerge from simple interactions between simple components. It is widely used in various natural sciences (physics, biology, etc.) and in computer science, as it is an appropriate model to reason about errors that occur in systems with a great number of components.

The simulation of a physical dynamical system on a computer is made difficult by several aspects. First, the parameters of the dynamical system are seldom known exactly. Second, the simulation is usually not exact: real numbers are typically represented by floating-point numbers, and simulations of cellular automata can only track finite or periodic configurations. For some chaotic systems, this means that the simulation can be completely irrelevant.
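The last restriction can be made explicit with a short Python sketch (our illustration, not code from the team): simulating an elementary cellular automaton necessarily means working on a finite, periodic configuration.

```python
def eca_step(rule, config):
    """One synchronous step of an elementary cellular automaton
    (Wolfram rule number 0-255) on a finite periodic configuration."""
    n = len(config)
    def local(i):
        # Read the neighborhood (left, center, right) as a 3-bit number,
        # then look up the corresponding bit of the rule number.
        v = config[(i - 1) % n] * 4 + config[i] * 2 + config[(i + 1) % n]
        return (rule >> v) & 1
    return [local(i) for i in range(n)]
```

Only finite or periodic configurations are reachable this way; a genuinely infinite, aperiodic configuration is out of reach of the simulation, which is one source of the irrelevance mentioned above for chaotic systems.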

Quantum computing is currently the most promising technology to extend Moore's law, whose end is expected to be reached soon as engraving technologies struggle to further reduce transistor size. Thanks to the exponential computational power it promises, quantum computing would represent a decisive competitive advantage for those who control it.

Quantum computing is also a major security issue, since it would allow breaking today's asymmetric cryptography. Hence, mastering quantum computing is also of the highest importance for national security. Small-scale quantum computers already exist, and recent scientific and technical advances suggest that the construction of the first practical quantum computers will be possible in the coming years.

As a result, the major US players in the IT industry have embarked on a dramatic race, mobilizing huge resources: IBM, Microsoft, Google and Intel have each invested huge sums of money, and are devoting significant budgets to attract and hire the best scientists on the planet. Some states have launched ambitious national programs, including Great Britain, the Netherlands, Canada, China, Australia, Singapore, and very recently Europe, with the 10-year FET Flagship program in Quantum Engineering. The French government also recently announced its Plan Quantique – a 1.8 billion euro initiative to develop quantum technologies.

While a large part of these resources are going towards R&D in quantum hardware, there is still an important need and real opportunities for leadership in the field of quantum software.

The Mocqua team contributes to the computer science approach to quantum computing, also known as the quantum software approach. We aim at a better understanding of the power and limitations of quantum computers, and therefore of their impact on society. We also contribute to easing the development of quantum computers by bridging the gap between theoretical results on quantum algorithms and complexity and recent progress in quantum hardware.

The idea of considering functions as first-class citizens and allowing programs to take functions as inputs has been present since the very beginning of theoretical computer science, through Church's λ-calculus.

One of the central problems is to design programming languages that capture most of, if not all, the possible ways of computing with functions as inputs. There is no Church thesis in higher-order computing and many ways of taking a function as input can be considered: allowing parallel or only sequential computations, querying the input as a black-box or via an interactive dialog, and so on.

The Kleene-Kreisel computable functionals are arguably the broadest class of higher-order continuous functionals that could be computed by a machine. However, their complexity is such that no current programming language can capture all of them. A better understanding of this class of functionals is therefore fundamental in order to identify the features that a programming language should implement to express the full power of higher-order computation.

We aim at developing various tools to simulate and analyse the dynamics of spatially-extended discrete dynamical systems such as cellular automata. The emphasis of our approach is on the evaluation of the robustness of the models under study, that is, their capacity to resist various perturbations.

In the framework of pure computational questions, various examples of such systems have already been proposed for solving complex problems with a simple bio-inspired approach (e.g. the decentralized gathering problem 49). We are now working on transposing them to various real-world situations, for example when one needs to understand the behaviour of large-scale networks of connected components such as wireless sensor networks. In this direction, a first work presented how to achieve a decentralized diagnosis of networks made of simple interacting components, with rather encouraging results 5. Nevertheless, several points remain to be studied before this model can be integrated in a real network.

We have also tackled the question of evaluating the robustness of a swarming model proposed by A. Deutsch to mimic the self-organization process observed in various natural systems (birds, fish, bacteria, etc.) 2. We now wish to develop our simulation tools so as to apply them to various biological phenomena involving a great number of agents.

We are also currently extending the range of applications of these techniques to the field of economics. We have started a collaboration with Massimo Amato, a professor of economics at Bocconi University in Milan. Our aim is to examine how to propose a decentralized view of a business-to-business market and to propose agent-oriented, totally decentralized models of such markets. Various banks and large businesses have already expressed their interest in such modelling approaches.

Programs are able to perform computations on infinite objects such as streams of bits, which makes it possible to compute with infinite objects that can be represented by such streams, real numbers for instance. Such infinite computations are closely related to topology, because computable functions are necessarily continuous (a finite segment of the output depends only on a finite segment of the input). The class of topological spaces whose points can be faithfully represented by infinite streams is well understood.
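The continuity constraint can be illustrated with a small Python sketch (ours, for illustration): a stream transformer written with generators can only inspect a finite prefix of its input before producing each output bit.

```python
from itertools import cycle, islice

def xor_streams(a, b):
    """A computable map on infinite bit streams: the k-th output bit
    depends only on the k-th bits of the inputs, hence on a finite
    prefix -- the map is continuous for the usual topology on streams."""
    for x, y in zip(a, b):
        yield x ^ y

# First four bits of (010101...) XOR (111111...):
prefix = list(islice(xor_streams(cycle([0, 1]), cycle([1])), 4))
```

Any function on streams computed this way is continuous; conversely, a discontinuous function (e.g. one deciding whether a stream contains a 1) cannot be computed, since no finite prefix of the input ever suffices.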

We study the descriptive complexity of subsets of such spaces. The classical notion is topological complexity, which measures the simplest way to express a set in terms of open sets using boolean operators; we introduce the symbolic complexity of a set, which is the topological complexity of the corresponding set of streams. Symbolic complexity is the relevant notion for computations. While the two notions are known to coincide for simple (countably-based) spaces, we investigate what happens in general spaces.

In 24 we show that the equivalence is even computable in countably-based spaces, and that these are the only spaces where this is so. We investigate a larger class of spaces, called co-Polish spaces, and relate the difference between symbolic and topological complexity to a topological property of the space (the Fréchet-Urysohn property).

In 29 we investigate spaces of open sets of a space X and show very precisely how the relationship between symbolic and topological complexity depends on the compactness properties of X.

Several computability notions are available for subsets of the plane, or of other topological spaces. For instance, whether the famous Mandelbrot set is computable is an open problem, but from its definition it is easy to see that it is semicomputable (one can eventually detect that a point is outside, but not that it is inside).
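Semicomputability of the Mandelbrot set can be sketched in a few lines of Python (our illustration): the classical escape-time procedure can certify that a point lies outside the set, but no finite computation can certify membership.

```python
def certified_outside(c, max_iter=1000):
    """Semi-decision procedure for the complement of the Mandelbrot set:
    returns True when the orbit of 0 under z -> z*z + c leaves the disk
    of radius 2 (so c is provably outside the set), and None when still
    undecided after max_iter iterations. Certifying that c is *inside*
    the set would require infinitely many iterations."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True
    return None
```

For instance the procedure certifies that c = 1 is outside (its orbit 0, 1, 2, 5, ... escapes), while for c = 0 it remains forever undecided, exactly as semicomputability predicts.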

It turns out that the topological properties of a set have a dramatic impact on its algorithmic properties: it was proved for instance in 55, 51 that if a set is homeomorphic to an n-dimensional sphere, or more generally to an n-dimensional manifold, then it is computable if and only if it is semicomputable. We then say that the set has computable type.

The PhD project of Djamel Eddine Amir, started in October 2020 after an M2 internship, is to develop a general and systematic approach to this problem, with several questions to solve. The general goal is to identify which topological properties imply having computable type. We study how it relates to the algorithmic complexity of testing the property. We want to develop general and elegant tools to prove or disprove that a set has computable type.

Our first results are promising, as they already subsume the existing results and can give much insight into the problem. An article is in preparation.

Directly related to the theme exposed in Sec. 4.3, we continued to explore the problem of self-stabilisation, as introduced by Dijkstra in the 1970s, in the context of cellular automata 48. More precisely, we extended the scope of our previous results.

We presented a bio-inspired mechanism for data clustering 44. Our method uses amoebae which evolve according to cellular automata rules: they contain the data to be processed and emit reaction-diffusion waves at random times. The waves transmit the information across the lattice and cause other amoebae to react, by being attracted or repulsed. The local reactions produce small homogeneous groups which progressively merge and realise the clustering at a larger scale. Despite the simplicity of the local rules, interesting complex behaviour occurs, which makes the model robust to various changes of its settings. We evaluated this prototype on a simple task, the separation of two groups of integer values distributed according to Gaussian laws, and tested it on Fisher's famous Iris dataset.

We deepened the analysis of the problem of detecting failures in a distributed network 50. The question that drives our research is how to detect, without any central authority, that the failure rate has exceeded a given threshold when some components progressively break down. We started a collaboration on this topic with Régine Marchand (IECL, Université de Lorraine).

Our tutorial on the convergence properties of the 256 elementary cellular automata under fully asynchronous updating was published 20. We presented a panorama of the different qualitative behaviours that arise when only one cell is updated at each time step, synthesizing results previously scattered across different articles and giving a full analysis of the behaviour of finite systems with periodic boundary conditions. Our classification relies on the scaling properties of the average convergence time to a fixed point; the scaling laws that occur fall into one of the following classes: logarithmic, linear, quadratic, exponential and non-converging. The techniques for quantifying this behaviour rely mainly on Markov chain theory and martingales. Most behaviours can be studied analytically, but for many rules a formal characterisation of the convergence properties remains an open problem.
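The fully asynchronous updating mode studied in the tutorial can be sketched as follows in Python (our own illustration; the rule and configuration are arbitrary examples): at each time step a single, uniformly chosen cell is rewritten.

```python
import random

def async_step(rule, config, rng):
    """Fully asynchronous update of an elementary cellular automaton:
    one uniformly chosen cell is rewritten according to the rule
    (Wolfram numbering), with periodic boundary conditions.
    Mutates and returns config."""
    n = len(config)
    i = rng.randrange(n)
    v = config[(i - 1) % n] * 4 + config[i] * 2 + config[(i + 1) % n]
    config[i] = (rule >> v) & 1
    return config

# Under rule 204 (the identity rule) every configuration is a fixed point,
# so an asynchronous step never changes anything:
cfg = async_step(204, [0, 1, 1, 0], random.Random(0))
```

Iterating `async_step` and recording the number of steps until a fixed point is reached is the basic experiment behind the convergence-time scaling laws mentioned above.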

This year, we have contributed in several ways to the foundations and the applications of the ZX-calculus, a diagrammatic language for quantum computing.

Emmanuel Jeandel, Simon Perdrix, and Renaud Vilmart have published a journal paper 23 on the completeness of the ZX-calculus. It appears in a special issue devoted to LICS'18 and presents an extended version of the authors' two contributions at LICS'18.

Ross Duncan (Strathclyde University), Aleks Kissinger (University of Oxford), Simon Perdrix, and John van de Wetering (Radboud University Nijmegen) have introduced a ZX-based optimisation method for quantum circuits. This method strongly relies on the GFlow technique originally introduced by Browne, Kashefi, Mhalla and Perdrix in the context of measurement-based quantum computing. The method has been implemented by two of the authors (Kissinger and van de Wetering), leading to one of the currently best quantum circuit optimisation methods 52.

Emmanuel Jeandel and Titouan Carette have presented a complete characterization of all possible graphical languages for quantum computing. Surprisingly, all possible languages had already been found, in the form of the ZX-calculus, the ZW-calculus and the ZH-calculus. This work was published at ICALP 25.

Alexandre Clément and Simon Perdrix have introduced a new graphical language, the PBS-calculus, to represent and reason on quantum computations involving coherent control of quantum operations. Coherent control, and in particular indefinite causal order, is known to enable multiple computational and communication advantages over classically ordered models like quantum circuits. The PBS-calculus is inspired by quantum optics, in particular the polarising beam splitter. The language is equipped with an equational theory, proved to be sound and complete: two diagrams represent the same quantum evolution if and only if one can be transformed into the other using the rules of the PBS-calculus. The equational theory is also proved to be minimal. This article has been published at MFCS'20 26.

Analyzing pseudo-telepathy graph games, Anurag Anshu, Peter Høyer, Mehdi Mhalla, and Simon Perdrix proposed a way to build contextuality scenarios exhibiting the quantum advantage using graph states. A new tool, called multipartiteness width, is introduced to investigate which scenarios are hard to decompose and to show that there exist graphs generating scenarios with a linear multipartiteness width. These results have been published in the Journal of Computer and System Sciences 13.

Inductive datatypes in programming languages allow users to define useful data structures such as natural numbers, lists, trees, and others. However, in quantum programming, working with such datatypes is more complicated, because quantum computer science is usually concerned with finite-dimensional quantum structures, whereas inductive quantum datatypes are inherently infinite-dimensional.

In this work, we show how inductive datatypes may be used in quantum programming by describing a quantum programming language which has a formal syntax and a type-safe operational semantics. We also describe a sound mathematical model for the language and by doing so we provide the first detailed semantic treatment of user-defined inductive datatypes in quantum programming. Our semantics is entirely based on a physically natural model of von Neumann algebras, which are mathematical structures used by physicists to study quantum mechanics. Finally, we cement our results by showing our mathematical model is also computationally adequate in a strong sense.

This work was published in FoSSaCS 2020 30 and an extended version was submitted to the Theoretical Computer Science journal (preprint: 43).

Recursive datatypes in programming languages generalise inductive datatypes by allowing the programmer to construct more expressive types (e.g. streams) without imposing any restrictions on the admissible logical polarities of the underlying type expressions. As part of a larger program, we wish to understand how this can be achieved in quantum computation. However, before this question may be answered, it is useful to understand how such types behave in the setting of substructural type systems, of which quantum programming is an instance.

In this work, we describe a substructural type system with mixed linear and non-linear recursive types called LNL-FPC (the linear/non-linear fixpoint calculus). Just as in FPC, we show that LNL-FPC supports type-level recursion which in turn induces term-level recursion. We also provide sound and computationally adequate categorical models for LNL-FPC which describe the categorical structure of the substructural operations of Intuitionistic Linear Logic at all non-linear types, including the recursive ones. In order to do so, we describe a new technique for solving recursive domain equations. We also show that the requirements of our abstract model are reasonable by constructing a large class of concrete models that have found applications in classical programming, but also in emerging programming paradigms such as quantum programming and circuit description programming languages.

This work has been accepted for publication at the journal Logical Methods in Computer Science (subject to minor revisions which will be done soon) and a preprint is available at 41. This work is also an extended version of our paper that was published in ICFP'19 54.

Quantum algorithms and protocols are often described in terms of quantum circuits, which are diagrammatic representations of quantum primitives and operations. In this work, we show how a programming language can be used to construct such quantum circuits by describing a syntax, type-safe operational semantics (with recursion) and also by constructing a mathematical model for the language which is based on ideas from enriched category theory. This work is accepted for publication at the special issue on Outstanding Contributions to Logic (Volume for Samson Abramsky) and a preprint is available at 42. It is an extended version of our LICS'18 paper 53.

In this work we consider recursion for quantum and substructural programming languages and we describe a large class of mathematical models for these languages. We identify the fundamental categorical structures that are required to establish semantic properties such as soundness and adequacy. The results are presented at a large level of generality and we recover many known concrete models as special cases of our treatment. This work has been described in papers presented at ACT 2020 31 and CMCS 2020 32.

We adapt the tiering technique to an imperative programming language with oracles. This work is inspired by the work of Kapron and Steinberg and the restrictions they developed on oracle Turing machines in order to characterize the class of second-order polynomial-time computable functionals, called Basic Feasible Functionals (BFF) and known to be the second-order extension of the standard notion of (first-order) polynomial-time computable functions. We have provided the first tractable characterization of BFF using a tier-based type discipline. This work has been described in a paper presented at LICS 2020 27.

We adapt the parsimonious calculus introduced by Mazza and Terui to the case of functions over the reals. The parsimonious calculus is a programming language on streams (infinite lists) which uses a type discipline based on linear logic. Real numbers are encoded as infinite sequences of signed binary digits. We show that, under some restrictions, the calculus is sound and complete for functions computable in polynomial time over the reals in the sense of Ko; that is, the n-th digit of the output can be computed in time polynomial in n.
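The signed binary encoding can be made concrete with a short Python sketch (ours, for illustration): a real in [-1, 1] is an infinite stream of digits in {-1, 0, 1}, and a finite prefix pins the real down to within 2^-n.

```python
from fractions import Fraction

def prefix_value(digits):
    """Value of a finite prefix d1, d2, ... of a signed binary
    expansion, i.e. the sum of d_i * 2^(-i) with each d_i in {-1, 0, 1}.
    The full infinite stream represents a real in [-1, 1], known to
    precision 2^(-n) once n digits have been read."""
    return sum(Fraction(d, 2 ** (i + 1)) for i, d in enumerate(digits))
```

The encoding is redundant: for instance the prefixes [1, -1] and [0, 1] both denote 1/4. This redundancy is precisely what makes arithmetic on such streams computable, whereas the standard (non-redundant) binary expansion is not even closed under computable addition.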

Phylogenetic networks generalize phylogenetic trees, and have been introduced in order to describe evolution in the case of transfer of genetic material between coexisting species.
There are many classes of phylogenetic networks, which can all be modeled as families of graphs with labeled leaves.
In this work, we focus on rooted and unrooted level-k phylogenetic networks.

Among the topics at the interface of combinatorics and probability theory,
the study of non-uniform random permutations has recently received increased interest.
This is notably due to the definition of the permuton framework to describe limits of permutations,
and to the description of permuton limits of some families of constrained permutations.
Permutons are probability measures on the unit square with uniform marginals,
which naturally extend to the continuous setting the representation of permutations via
their diagrams, or equivalently their permutation matrices.

In this work, we consider constrained random permutations, taken uniformly inside proper substitution-closed classes and we study their limiting behavior in the sense of permutons. The limit depends on the generating series of the simple permutations in the class. Under a mild sufficient condition, the limit is an elementary one-parameter deformation of the limit of uniform separable permutations, which we previously identified as the Brownian separable permuton. This limiting object is therefore in some sense universal. We identify two other regimes with different limiting objects. The first one is degenerate; the second one is nontrivial and related to stable trees. These results are obtained thanks to a characterization of the convergence of random permutons through the convergence of their expected pattern densities. The limit of expected pattern densities is then computed by using the substitution tree encoding of permutations and performing singularity analysis on the tree series. This work has been published in 15.
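Pattern densities, whose expectations characterize permuton convergence, can be computed directly for small permutations. The following brute-force Python sketch (ours, exponential in the pattern size, for illustration only) makes the notion explicit.

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def standardise(vals):
    """Relative order (pattern) of a sequence of distinct values."""
    order = sorted(range(len(vals)), key=vals.__getitem__)
    std = [0] * len(vals)
    for rank, i in enumerate(order):
        std[i] = rank
    return tuple(std)

def pattern_density(pi, pattern):
    """Fraction of k-element position sets of pi (one-line notation,
    0-indexed) that induce the given pattern."""
    k, n = len(pattern), len(pi)
    target = standardise(pattern)
    hits = sum(1 for pos in combinations(range(n), k)
               if standardise([pi[i] for i in pos]) == target)
    return Fraction(hits, comb(n, k))
```

For example, the density of the pattern 21 in the permutation 312 is 2/3, since two of its three pairs of positions are inversions; convergence of a sequence of permutations to a permuton amounts to convergence of all such densities.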

Project acronym: ANR PRCE SoftQPro (ANR-17-CE25-0009)

Project title: Solutions logicielles pour l'optimisation des programmes et ressources quantiques.

Duration: Dec. 2017 - Dec. 2022

Coordinator: Simon Perdrix

Other partners: Atos-Bull, LRI, CEA-Saclay.

Participants: Simon Perdrix, Emmanuel Jeandel, Emmanuel Hainry, and Romain Péchoux

Abstract: Quantum computers can theoretically solve problems out of reach of classical computers. We aim at easing the crucial back-and-forth interactions between the theoretical approach to quantum computing and the technological efforts made to implement the quantum computer. Our software-based quantum program and resource optimisation (SoftQPro) project consists in developing high-level techniques based on static analysis, certification, transformations of quantum graphical languages, and optimisation techniques to obtain a compilation suite for quantum programming languages. We will target various computational model back-ends (e.g. QRAM, measurement-based quantum computation) as well as classical simulation. Classical simulation is central in the development of the quantum computer, on both ends: as a way to test quantum programs, but also as a way to test quantum computer prototypes. For this reason we aim at designing sophisticated simulation techniques on classical high-performance computers (HPC).

Project acronym: ANR PRCI VanQuTe (ANR-17-CE24-0035)

Project title: Validation of near-future quantum technologies.

Duration: Feb. 2018 - Jan. 2022

Coordinator: Damian Markham (Laboratoire d'informatique de Paris 6)

Other partners: NTU (Nanyang Technological University), SUTD (Singapore University of Technology and Design), NUS (National University of Singapore), LIP6 (Laboratoire d'informatique de Paris 6)

Participants: Simon Perdrix, Emmanuel Jeandel

Abstract: In the last few years we have seen unprecedented advances in quantum information technologies. Quantum key distribution systems are already commercially available. In the near future we will see waves of new quantum devices, offering unparalleled benefits for security, communication, computation and sensing. A key question for the success of these technologies is their verification and validation.

Quantum technologies encounter an acute verification and validation problem: on the one hand, since classical computations cannot scale up to the computational power of quantum mechanics, verifying the correctness of a quantum-mediated computation is challenging; on the other hand, the underlying quantum structure resists classical certification analysis. Members of our consortium have shown, as a proof of principle, that one can bootstrap a small quantum device to test a larger one. The aim of VanQuTe is to adapt our generic techniques to the specific applications and constraints of the photonic systems being developed within our consortium. Our ultimate goal is to develop techniques to unambiguously verify the presence of a quantum advantage in near-future quantum technologies.

Project acronym: PIA-GDN/Quantex (initially an ITEA3 project, finally funded by the Grands défis du Numérique / Programme d'investissements d'avenir).

Project title: Simulation/Emulation of Quantum Computation.

Duration: Feb. 2018 - Jan 2021.

Coordinator: Huy-Nam Nguyen (Atos Bull).

Other partners: Atos-Bull, LRI, CEA Grenoble.

Participants: Simon Perdrix (WP leader), Emmanuel Jeandel

Abstract: The lack of quantum computers leads to the development of a variety of software-based simulators to assist in the research and development of quantum algorithms. This proposal focuses on the development of a combined software-based and hardware-accelerated toolbox for quantum computation. A quantum computing stack including specification language, libraries and optimisation/execution tools will be built upon a well-defined mathematical framework mixing classical and quantum computation. Such an environment will be dedicated to support the expression of quantum algorithms for the purpose of investigation and verification.

Nazim Fatès co-supervised a project on writing the screenplays of short films about artificial intelligence, with eight first-year Master's students from the IECA (Institut Européen de Cinéma et d'Audiovisuel).

Nazim Fatès participated in a one-day colloquium dedicated to the theme of transhumanism and gave a talk entitled « Penser avec l'intelligence artificielle » ("Thinking with artificial intelligence") at the Lycée Saint-Sigisbert in Nancy, March 2020.

Nazim Fatès gave a three-hour lecture on the history of computing and artificial intelligence as part of a training programme for high-school teachers of the region (Diplôme Inter-Universitaire Enseigner l'Informatique au Lycée).