Information-processing devices that exploit the laws of quantum theory hold great potential for computation, communication and secrecy. However, the quantum devices available today are all affected by unwanted noise: the actual behavior of a device only approximately matches the model it was designed for. Such an unwanted deviation from the model can have devastating effects on information processing applications: in the context of quantum computation, for example, the accumulation of noise can render the outcome of the computation completely useless. This project aims to develop methods and algorithms to optimally reduce the undesirable effects of noise on quantum information processing tasks.

Our overarching objective is to develop mathematical techniques and algorithms to make full use of the potential of quantum technologies. Our research is organized into three research directions. The first axis aims to develop methods to characterize and certify the relevant quantum properties of currently available quantum information processing devices, including so-called noisy intermediate-scale quantum (NISQ) devices, as well as to explore their applications. The second axis is motivated by applications on a longer time scale; its objective is to develop general methods to correct the errors that occur in quantum devices and to reduce or eliminate their effect on computations. The third axis considers new quantum models and resources that promise to help in finding new applications of quantum technologies.

Recent years have seen a dramatic increase in both the size and quality of quantum computing architectures. They have now reached a point where they are very hard to simulate even with the best classical computers available. Nevertheless, significant challenges have to be overcome to scale current technologies and use them to solve practically relevant problems. The first challenge is to obtain accurate mathematical models of such quantum devices, including their inevitable imperfections. The second challenge is to understand the information processing abilities of such models. The objective of this research axis is to tackle these two challenges by designing efficient methods for the characterization and certification of quantum devices, exploring the limitations imposed by noise on computational power, and studying the applications of current quantum devices to optimization algorithms and to device-independent cryptography.

Obtaining an accurate mathematical characterization of the quantum systems prepared in the lab is a pressing question for quantum technologies. For this reason, there has been substantial progress on such statistical questions in the last few years. This includes answers to foundational questions such as the number of samples needed to characterize an unknown quantum state, improved methods for characterizing quantum devices, and, very recently, techniques that can efficiently predict multiple relevant properties of quantum systems. We plan to contribute to these lines of work by considering several questions, all aimed at a better characterization of quantum systems.

First, we will consider basic statistical questions related to testing relevant properties of quantum states. In particular, given a description of an ideal target state, we will study how efficiently one can test whether the state prepared in the lab is close to it; here, techniques from quantum property testing and state certification 49 are likely to play an important role.

Building on that, we will then develop tools to characterize the noise affecting quantum devices. As the number of parameters and samples required to characterize an arbitrary noisy process grows exponentially in the number of qubits 82, it is of paramount importance to devise protocols that find an effective ansatz for the underlying structure. The first step we will take in this direction is to devise scalable protocols that identify the correlation structure of the noise. By singling out the parts of the device on which the noise acts independently and those on which it is correlated, it is possible to substantially reduce the number of parameters required to effectively describe it, bringing it down to a tractable number. Although finding the conditional independence structure of a set of random variables to high precision is a difficult problem even classically, we will generalize to the quantum setting efficient classical techniques that employ convex relaxations 64 to obtain good approximate solutions.
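To make the idea of recovering a correlation structure concrete, the following sketch (our own illustration in Python, not the protocol to be developed; all noise parameters are invented) estimates the pairwise mutual information between single-qubit error records: correlated pairs stand out, and thresholding the estimates yields a coarse ansatz for the noise structure.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in bits) between two binary arrays."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
n = 20000
e0 = rng.random(n) < 0.1           # error record of qubit 0
e1 = e0 ^ (rng.random(n) < 0.05)   # qubit 1: strongly correlated with qubit 0
e2 = rng.random(n) < 0.1           # qubit 2: independent noise

mi01 = mutual_information(e0, e1)
mi02 = mutual_information(e0, e2)
# Thresholding the estimated mutual information separates correlated pairs
# from independent ones, identifying the blocks on which noise acts jointly.
```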

The next step will then be to devise protocols, inspired by machine learning techniques, that can exploit the knowledge of the underlying correlation structure to efficiently learn its parameters. This will be combined with randomized benchmarking techniques 74, 78, 72. Randomized benchmarking is known to be robust and experimentally friendly; however, current results either give very limited information or require stringent assumptions on the structure of the underlying noise. The goal of this part is therefore to overcome these two limitations, providing experimentalists with much-needed tools to efficiently characterize large noisy quantum devices.
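As a toy illustration of the kind of data randomized benchmarking produces (a sketch with made-up numbers, not the methods to be developed): the average survival probability after m random gates follows the decay A·p^m + B, and fitting this decay recovers the noise parameter p without full tomography.

```python
import numpy as np

# Synthetic randomized-benchmarking decay with invented parameters.
A, B, p_true = 0.5, 0.5, 0.98
ms = np.arange(1, 101)
survival = A * p_true**ms + B       # noiseless idealized data

# Recover p from a log-linear fit: log(survival - B) = m*log(p) + log(A).
slope, _ = np.polyfit(ms, np.log(survival - B), 1)
p_est = np.exp(slope)
# For a single qubit, (1 - p_est) / 2 then estimates the average error per gate.
```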

Such a line of research certainly also benefits from input from experimentalists, allowing us to test the algorithms on real quantum hardware. We therefore plan to work with the local experimental group led by Benjamin Huard to test these methods on the devices they build. Moreover, it is invaluable to learn from experimentalists about the limitations and challenges they face in the lab when characterizing their devices.

An important aspect in this direction that we will consider is the design of measurements that can probe the physical property of interest without disturbing the state by much. This is the so-called quantum non-demolition (QND) measurement, which is important when one wants to measure a continuous signal: the same system has to be measured repeatedly over time and, ideally, the outcomes of later measurements should depend solely on the quantity one intends to measure, and not on any disturbance caused by prior measurements. QND measurements have found uses in many areas, including quantum computing and, most prominently, proposals for gravitational-wave detectors with improved sensitivity. We view the problem through the lens of quantum information theory; from this perspective, the quantum system involved in the QND measurement can be seen as a quantum reference frame. What's more, there is a one-to-one relation between the imperfections of the reference frame and its ability to act as a system for QND measurements. In 53, 99, we gave a construction of a QND measurement whose error is a function of energy and dimension. Going forward, our objective is to determine whether this construction is optimal, to determine the optimal tradeoff between error, energy and dimension, and to assess the extent to which such constructions can lead to an advantage for quantum sensing.

In order to establish a quantum advantage for noisy quantum computers, it is important to study when noisy quantum computers can be simulated classically. Intuitively, it is clear that the noise present in a quantum device imposes a limit on the circuit depth we can implement before the device loses its usefulness compared to classical devices. To understand the potential of noisy quantum devices, it is crucial to develop tools that characterize when this happens for a given problem and noise model. In the context of optimization, such bounds were achieved in our work 73. In short, the results of 73 give stringent explicit bounds showing that sampling from the output of noisy quantum devices quickly becomes comparable to sampling from Gibbs states that are easy to simulate classically. This is showcased in Figure 1, where we plot the density of corrupted qubits at which the noisy quantum device loses its advantage over classical methods.

However, in their current version, our methods only allow for an analysis of the first moments. To extend the analysis and conclusions beyond optimization to other fields such as quantum machine learning, it is imperative to obtain results for higher moments and concentration inequalities for the outputs of noisy quantum circuits, that is, to quantify how much noise a quantum system can tolerate before it behaves like a state that can easily be sampled from classically. To achieve this goal, we intend to resort to, and further develop, methods from the emerging field of quantum optimal transport 87, 65. Optimal transport techniques are by now a well-established method for proving powerful concentration inequalities 92, and they are known to combine well with other areas of expertise of the group, such as entropic and semigroup methods.
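A minimal single-qubit calculation illustrates why noise drives circuits toward classically easy states (our own toy example, not the bounds of 73): under repeated depolarizing noise, the trace distance to the maximally mixed state contracts geometrically, so after roughly log(1/ε)/p layers the output is ε-close to a trivially samplable state.

```python
import numpy as np

def depolarize(rho, p):
    """One layer of qubit depolarizing noise with strength p."""
    return (1 - p) * rho + p * np.eye(2) / 2

rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure |0><0|
p = 0.1
dists = []
for _ in range(30):
    rho = depolarize(rho, p)
    # trace distance between rho and the maximally mixed state I/2
    dists.append(0.5 * np.abs(np.linalg.eigvalsh(rho - np.eye(2) / 2)).sum())
# dists[k] = 0.5 * (1 - p)**(k + 1): geometric convergence to I/2.
```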

Identifying good use cases for the noisy quantum devices expected to be available in the near future is one of the main challenges currently faced by the quantum computing community. One possible candidate for such an application is methods based on quantum Gibbs-state sampling. Quantum Gibbs states are at the core of powerful classical and quantum algorithms for optimization and machine learning based on mirror descent or the matrix multiplicative weights method 58, 55, 54. These iterative algorithms can be understood as a variation of simulated annealing, in which one starts with a (quantum) Gibbs state at infinite temperature and decreases the temperature to converge to the solution of an optimization problem. That is, we begin with a state that is supported everywhere on the state space and slowly zoom into regions that contain solutions to the problem of interest by tuning the Gibbs state. This intuitive picture conveys one feature of such methods: they are robust, especially in the first iterations, as we only need to ensure that we are zooming in the right direction. This robustness translates into them requiring only the preparation of states with relatively small precision to make progress.
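The annealing picture can be made concrete in a few lines (a classical numpy sketch with an arbitrary Hermitian matrix standing in for a Hamiltonian, illustrative only): as the inverse temperature β grows, the Gibbs state e^{-βA}/Z concentrates on the minimizer, and the expected objective value tr(Aρ_β) decreases monotonically toward the minimum eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                  # a Hermitian "objective" matrix
lam_min = np.linalg.eigvalsh(A)[0]

def gibbs_energy(A, beta):
    """Expected objective tr(A rho_beta) for the Gibbs state exp(-beta A)/Z."""
    w, _ = np.linalg.eigh(A)
    weights = np.exp(-beta * (w - w.min()))   # shift for numerical stability
    probs = weights / weights.sum()
    return float(probs @ w)

energies = [gibbs_energy(A, beta) for beta in (0.0, 1.0, 4.0, 16.0)]
# energies decreases from the average eigenvalue (infinite temperature)
# toward lam_min: "zooming in" on the solution region as beta grows.
```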

On the other hand, this picture also showcases the problem that noise poses for such methods: after a while, the noise makes it impossible to zoom in further, imposing fundamental barriers on how well we can characterize the region of solutions. It is thus expected that noisy quantum computers can offer useful advice on which direction to go, up to a resolution that naturally depends on the noise present in the device. We will therefore design hybrid quantum-classical algorithms that explicitly take this limitation into account: they will use the quantum computer only to identify a region of relatively small dimension that contains the solution.

At this stage, it is then possible to use powerful randomized linear algebra techniques to take advantage of the initial zooming in performed by the noisy quantum device. Techniques from randomized linear algebra offer significant speedups for basic operations under the promise that the matrices involved are supported on a low-dimensional space 97. Thus, after running the first iterations efficiently on the noisy quantum device and identifying a low-dimensional space that contains the solutions, a classical device takes over with this input and runs the later iterations much faster. Such a hybrid algorithm would lead to more efficient solvers for convex optimization problems. Although such problems can usually be solved in polynomial time, in practice it is still challenging to solve large-dimensional instances, impeding their more widespread use. A hybrid algorithm of this kind would increase the practicality of solving large-dimensional semidefinite programs, as the classical computer would only have to operate in the low-dimensional regime. It would also lead to provable speedups for quantum devices under noise, a goal that has so far remained elusive.
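A brief sketch of the kind of randomized linear algebra primitive we have in mind (the standard randomized range finder, with invented dimensions): once a matrix is known to be supported on a low-dimensional subspace, a small random sketch captures its range almost exactly, and all subsequent computations can be carried out in the small projected space.

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 500, 10
# A large matrix that is secretly of rank r.
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Randomized range finder: sketch with a thin Gaussian test matrix.
Omega = rng.standard_normal((n, r + 5))
Q, _ = np.linalg.qr(A @ Omega)      # orthonormal basis containing range(A)

err = np.linalg.norm(A - Q @ (Q.T @ A)) / np.linalg.norm(A)
# err is at machine-precision level: the (r+5)-dimensional matrix Q.T @ A
# can stand in for A in the later classical iterations.
```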

The main technical challenges that need to be overcome for the success of such an algorithm are threefold: first, carrying out a detailed analysis of the trade-offs governing when it becomes more efficient to transition the optimization from the noisy quantum device to the classical computer; second, developing improved quantum Gibbs samplers for noisy devices to prepare the required states; third, identifying practically relevant problems that offer a good window of opportunity for quantum speedups. The first and third challenges will profit from, and are connected to, the results of the previously discussed Goals 3.1.1 and 3.1.2. The second challenge, the development of better quantum Gibbs samplers, will certainly yield results that find applications in many other directions, as current proposals for Gibbs samplers require quantum circuits that are unlikely to be implementable in the near term. Indeed, efficient classical Gibbs samplers are the bread and butter of most Monte Carlo techniques, and it is to be expected that quantum Gibbs samplers will find similarly widespread application.

In the device-independent framework of quantum cryptography, protocols offer security by relying on minimal assumptions: they are secure even when the devices used within the protocol are completely untrusted or uncharacterized. The main idea behind many device-independent protocols, such as randomness expansion and quantum key distribution, is that there are certain correlations between multiple separate systems that cannot be reproduced by any classical model; observing such correlations therefore certifies genuinely quantum behaviour of the devices.

This question of certification is recurrent when assessing the behaviour of quantum devices (and particularly of noisy ones), as highlighted by the issues that Goals 3.1.1 and 3.1.2 address.
We plan to develop techniques to address the certification of quantum systems with minimal assumptions.
Our first objective is to build mathematical tools, in the continuity of the Entropy Accumulation Theorem 67, that allow us to make accurate statistical statements about large quantum systems. The second objective is to design computational methods 57 to certify, in a quantitative way, the relevant quantum properties that are consistent with the observed statistics.

In the context of device-independent cryptography, this will allow us to obtain protocols with improved noise tolerance and finite-length analysis, reaching the realm of what can be done with current quantum technologies. We believe, moreover, that these techniques will be applicable in the wider setting of certifying properties of quantum networks and quantum computing devices.

Noisy quantum devices are unlikely to reach the full potential of quantum computation unless some software mechanisms for correcting the errors are used. The aim of this research axis is to develop general methods to use physical quantum devices to perform logical quantum operations that are reliable even if the physical devices themselves are imperfect.

For this, we plan to build algorithmic methods to find error correction mechanisms that are tailored to a given noise model, and explore various approaches to fault-tolerant quantum computation going from Low-Density Parity-Check quantum codes to more recent methods using quantum reference frames.

Shannon's 1948 seminal theorem 89 modeled the problem of communication (or storage) over a given noisy channel and determined precisely its ultimate limit. Shannon's noisy coding theorem relates the maximum rate at which information can be transmitted reliably over a noisy channel W to an entropic quantity:

    C(W) = max_{p_X} I(X;Y),    (1)

where the right hand side is a maximization over distributions p_X on the input of the channel, and I(X;Y) denotes the mutual information between the input X and the output Y of the channel.
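For a concrete classical instance (a standard worked example, included here for illustration): for the binary symmetric channel that flips its input with probability p, the maximization over input distributions is achieved by the uniform input and gives the closed-form capacity 1 − h(p), where h is the binary entropy.

```python
import numpy as np

def h(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p*np.log2(p) - (1-p)*np.log2(1-p)

def mutual_info_bsc(px, p):
    """I(X;Y) for a BSC(p) with input distribution (1-px, px)."""
    py = px * (1 - p) + (1 - px) * p     # P(Y = 1)
    return h(py) - h(p)

p = 0.11
grid = np.linspace(0.0, 1.0, 1001)
C = max(mutual_info_bsc(px, p) for px in grid)
# The maximum is attained at the uniform input px = 0.5, where
# C = 1 - h(p), about half a bit per channel use for p = 0.11.
```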

Devices that make use of the laws of quantum theory are also affected by noise, in fact even more so. Determining the optimal method in order to communicate (or store information) reliably over a noisy quantum channel is thus of fundamental importance in order to exploit the full potential of a quantum computer, or more generally a quantum device.
However, despite the problem's importance and more than 40 years of efforts in quantum information theory 80, 96, it is fair to say that we do not have a quantum analogue of Shannon's theorem Eq. (1). Indeed, a formula analogous to Eq. (1) for quantum channels is known only in very special cases.
As an illustration, even for the simplest possible quantum channel, the qubit depolarizing channel, the asymptotic maximum rate of quantum communication is still unknown 66. The qubit depolarizing channel can be thought of as the quantum analogue of the classical channel that flips the input bit with some probability.
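To illustrate the state of affairs (a standard textbook computation, not a new result): for the depolarizing channel that applies a uniformly random Pauli error with total probability p, the hashing bound 1 − h(p) − p·log₂3 gives an achievable rate of quantum communication, yet the exact capacity at intermediate noise levels remains unknown.

```python
import numpy as np

def h(p):
    """Binary entropy in bits (for p strictly between 0 and 1)."""
    return -p*np.log2(p) - (1-p)*np.log2(1-p)

def hashing_bound(p):
    """Achievable quantum-communication rate (hashing bound) for the
    depolarizing channel applying X, Y or Z, each with probability p/3."""
    return 1 - h(p) - p * np.log2(3)

# Positive at low noise, the bound crosses zero near p ~ 0.19, while the
# true quantum capacity in that regime is still an open problem.
rates = {p: hashing_bound(p) for p in (0.05, 0.10, 0.19, 0.25)}
```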

The main difficulty in understanding the ability of a quantum channel to transmit information is the non-additivity of the quantum entropic quantities having the form of the right hand side of Eq. (1) 66, 81, 79, 90. In many cases this challenge is due to the quantum property of entanglement, and we believe that a new approach is needed to overcome this difficulty.

Faced with these difficulties, we propose a new framework for studying communication over noisy channels. Instead of trying to determine the optimal rate of communication asymptotically as the number of channel uses grows, we ask for an efficient algorithm that determines the maximum number of bits or qubits that can be sent reliably using a fixed number of uses of a given channel.

For the problem of classical communication over a classical channel, we have characterized this computational complexity precisely in our previous work 52 and this led to interesting connections between information theory and combinatorial optimization. The main objective here is to extend this approach to quantum channels, thereby designing algorithms that can find the best error correction schemes for a given noise model. These algorithms can naturally then be used on the noise models that are estimated using the methods developed in Axis 3.1.1.
In particular, we will focus on relevant noise models that appear in current devices. For this we plan to collaborate with Benjamin Huard in the physics lab of ENS Lyon, and the presence of Cyril Élouard in the team helps significantly in this regard. To start in this direction, Cyril has given talks within the group explaining the mathematics of superconducting qubits, and we are currently discussing specific dissipative models that can reasonably be implemented in hardware, with the aim of comparing the ability of such models to store quantum information reliably.

Having a coding strategy for a given noise model with good performance is not enough: for a strategy to be applicable, it is important to be able to implement the error correction operations efficiently.
An efficient decoding algorithm is not only important for establishing fast and reliable communication networks; it is also crucial for fault-tolerant computing. In fact, the basic idea of fault-tolerant computing schemes is to perform computations on data encoded in an error correcting code. To prevent the errors that occur during the computation from spreading, a decoding operation has to be applied regularly to correct them. For this reason, it is crucial for the decoding operation to be very fast, to prevent the accumulation of errors.
We focus here on an important class of quantum error correcting codes called Low-Density Parity-Check (LDPC) codes 59, 91, defined by two sparse binary parity-check matrices. Our objective is to design efficient decoding algorithms for quantum LDPC codes.
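As a tiny concrete example (the standard 7-qubit Steane code, included purely for illustration): a CSS quantum code is specified by two sparse binary parity-check matrices H_X and H_Z satisfying H_X·H_Zᵀ = 0 (mod 2), and decoding amounts to inferring the error from the measured syndrome.

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code; the Steane code
# uses it for both its X-type and Z-type checks.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])
HX = HZ = H

# CSS condition: the X-checks and Z-checks commute.
css_ok = (HX @ HZ.T % 2 == 0).all()

# A single Z error on qubit i triggers exactly the X-checks acting on i:
# the syndrome is the i-th column of HX, here the binary expansion of i+1.
error = np.zeros(7, dtype=int)
error[2] = 1                    # Z error on qubit index 2
syndrome = HX @ error % 2       # identifies the erroneous qubit
```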

Quantum LDPC codes are particularly well suited to achieving fault-tolerant quantum computation, because the sparsity of the parity-check matrices allows us to bound the error rate of the syndrome measurements. In fact, the leading candidate error correcting code for use in future quantum computers is currently the surface code, a special kind of LDPC code. Even though the surface code can be embedded on a surface with only nearest-neighbour interactions, it suffers from a very poor encoding rate, so using it for fault-tolerant constructions incurs a very large memory overhead. Our previous work 71 shows that in principle the memory overhead can be significantly reduced by using constant-rate LDPC codes based on expander graphs. The general idea of using constant-rate codes is illustrated in Figure 2. Our objective is to make fault-tolerant constructions with LDPC codes practical by finding fault-tolerant gadgets for such codes and using decoding algorithms with better performance.

As mentioned before, the currently leading approach to fault-tolerance uses surface codes. In contrast to the previous Goal 3.2.2, our objective here is to explore radically different approaches that could provide new avenues towards achieving fault-tolerance. In particular, we will look at one approach based on quantum polar codes and another based on quantum reference frames.

The class of quantum polar codes recently proposed in 68 is a promising candidate for fault-tolerant quantum computing. The construction relies on a channel combining and splitting procedure, in which a two-qubit gate randomly chosen from the Clifford group is used to combine two single-qubit channels. Applied recursively, this procedure synthesizes a set of so-called virtual channels from several instances of the quantum channel. As the code length goes to infinity, the virtual channels polarize, in the sense that they tend to become either noiseless or completely noisy. Interestingly, polar codes feature several extremely desirable properties: they protect a large number of logical qubits, and they have efficient decoding algorithms. In addition, logical Clifford operations can easily be performed using code-deformation-like techniques.
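The polarization phenomenon is easiest to see in the classical setting (a classical binary-erasure-channel sketch, not the quantum construction of 68): one level of combining turns two copies of a channel with erasure probability z into a worse channel (2z − z²) and a better one (z²); iterating drives almost all synthesized channels toward perfect or useless, while the average erasure probability is exactly preserved.

```python
import numpy as np

def polarize(z):
    """One level of polar combining for an array of erasure probabilities z:
    each channel spawns a worse copy (2z - z^2) and a better copy (z^2)."""
    return np.concatenate([2*z - z**2, z**2])

z = np.array([0.5])            # start from a BEC with erasure probability 0.5
history = []
for _ in range(10):
    z = polarize(z)
    history.append(z)

# Fraction of synthesized channels that are nearly perfect or nearly useless.
extreme4 = np.mean((history[3] < 0.1) | (history[3] > 0.9))
extreme10 = np.mean((history[9] < 0.1) | (history[9] > 0.9))
# The mean stays exactly 0.5 (a martingale), while the variance grows at
# every level: the channels split into the two extremes.
```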
However, there are a number of challenging issues to be addressed in the fault-tolerant computing context. First, quantum channel polarization needs to be investigated by taking into account the fact that Clifford gates used for channel-combining are faulty. Second, we need to construct a universal set of fault-tolerant gates, which can be tackled by using magic state distillation. For this approach, we plan to collaborate closely with Mehdi Mhalla (CNRS, LIG).

The second approach we consider here is based on a way of circumventing the famous Eastin-Knill theorem.
In the early days of quantum computing, one of the key ideas for building a quantum computer whose errors can be corrected was the notion of transversal logic gates. The idea was to devise a scheme in which all the gates needed for universal quantum computation could be applied on non-overlapping subspaces in such a way that all locally occurring errors were correctable. More specifically, the objective is to find an encoding

This scheme would allow errors in the implementation of the gates to be corrected before they have propagated through the computation and rendered its results useless. Unfortunately, transversality of all the gates needed for universal computation and local correctability within the blocks cannot be simultaneously satisfied for finite-dimensional codes. This was proven by Eastin and Knill in a landmark 2009 paper 69. Subsequently, workarounds have been found. For example, one of the current frontrunner approaches is to apply all but one of the gates needed for universal computation transversally, while the remaining gate is applied non-transversally using other, costly techniques.

In a series of two papers 98, 100, we have developed a new method for quantum error correction which is not based on this approach. In this technique, all of the gates in the set needed for universal computation are treated on an equal footing. More precisely, rather than circumventing the Eastin-Knill theorem by having one non-transversal gate, all gates from the universal set can be applied transversally, and local errors corrected, but at the price of an error in the decoding. As long as the error in the decoding is kept small, it will not disrupt the computation and is thus not significant from a practical point of view. To achieve this, the method uses quantum reference frames and randomness to encode the information about which gate was applied during the computation. As the quality of the reference frame increases, the error in the decoding approaches zero. The concept of a quantum reference frame was introduced in the field of quantum foundations in the context of sharing so-called “unspeakable information”, such as the relative orientation of two distant observers. While it has been used over the years in various problems in quantum information theory, its use in quantum error correction has yet to be fully explored.

While this work on the circumvention of the Eastin-Knill theorem has attracted a lot of attention and follow-up work by other research groups (see e.g. Refs. 84, 101, 93 and 83), it is not yet ready for primetime. The reason is that while the encoded states are readily fault-tolerant (due to the transversality of the gates), the current protocols for applying the encoding and decoding channels are not. This is due to the method by which the quantum reference frames are constructed. However, we believe that finding protocols for implementing the encoder and decoder in a fault-tolerant way is a surmountable challenge. We plan to use a recent construction of unitary

The predominant model of quantum computation is that of quantum circuits, and the previous two axes stay within this standard framework in their goals centered around designing and building quantum devices. In contrast to classical computation, however, in the quickly evolving landscape of quantum information there remains significant insight to be gained by studying alternative models of computation. Such models may, for example, be more tolerant to realistic types of noise, provide new insight into algorithms and applications, or be better able to exploit certain quantum resources. As concrete examples, both adiabatic and measurement-based quantum computing have been extensively studied, leading to a number of important insights that have been fed back more generally into quantum information research.

By considering a higher level of abstraction, this axis explores novel models of quantum information processing in order to identify new avenues for exploiting quantum effects and outperforming classical devices, even in the presence of noise. One of the primary avenues for this is the study of higher-order quantum operations, allowing an abstract understanding of what quantum transformations are possible in principle, and the use of new resources such as quantumly-controlled operations to implement such computations.

This axis thus explores more fundamental aspects of quantum information processing, as we believe these to be highly valuable in providing new insight into quantum computing and communication. We aim to use the new models and approaches we will study to provide new techniques to mitigate noise in quantum devices, to certify their behaviour more efficiently, and to develop algorithms or protocols providing better quantum advantages in applications of interest. This axis will thus provide important insight for the previous two axes, while making use of mathematical tools and approaches common to the themes of the project.

One of the intrinsic limitations of the standard quantum circuit model is that the structure of the circuit, and hence of the flow of information, is fixed prior to computation; quantum circuits do not allow for the possibility of a “quantum if-statement”.
In this research goal we study new models of quantum computation that, in contrast, have explicit quantum control structures.
These models, in particular, have the potential to provide new approaches to mitigating noise and can lead to stronger quantum advantages in certain applications.

To study quantum control structures we work within the framework of higher-order quantum operations 60, 86, which formalise the types of ways quantum circuits or channels can themselves be transformed within quantum theory.
This approach has developed rapidly in recent years 94 since it was first used to show that one can indeed formulate quantum computations in which the order of two quantum gates is superposed with the help of a quantum control system, a gadget known as the quantum switch 61 (see Fig. 3).
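The quantum switch can be explored directly in a few lines of numpy (a small numerical verification of a known effect from the literature; the matrix encoding below is our own illustrative choice): placing two completely depolarizing channels in a superposition of orders, controlled by a qubit in |+⟩, leaves residual information about the input in the output, whereas either fixed order destroys it entirely.

```python
import numpy as np

# Pauli matrices; K_i = sigma_i / 2 are Kraus operators of the completely
# depolarizing qubit channel.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
K = [P / 2 for P in (I2, X, Y, Z)]

ket0 = np.array([[1], [0]], dtype=complex)
plus = np.array([[1], [1]], dtype=complex) / np.sqrt(2)
rho = ket0 @ ket0.conj().T           # target input |0><0|
omega = plus @ plus.conj().T         # control in |+><+|

# Switch Kraus operators: W_ij = K_i K_j (x) |0><0|_c + K_j K_i (x) |1><1|_c.
P0 = ket0 @ ket0.conj().T
P1 = I2 - P0
out = np.zeros((4, 4), dtype=complex)
for Ki in K:
    for Kj in K:
        W = np.kron(Ki @ Kj, P0) + np.kron(Kj @ Ki, P1)
        out += W @ np.kron(rho, omega) @ W.conj().T

# Condition on measuring the control in |+>: the target retains a trace of
# the input, even though each channel alone is completely depolarizing.
Mplus = np.kron(I2, plus @ plus.conj().T)
p_plus = np.trace(Mplus @ out).real
cond = Mplus @ out @ Mplus.conj().T / p_plus
target_plus = cond.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
```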

The quantum switch and related computations have since been shown to provide new types of quantum advantages in several information-theoretical tasks 63, 48, 77, where they outperform even “standard” quantum circuits.
Moreover, its relevance for improving noise tolerance has recently come to light in a number of works showing how quantum control can be used to improve communication over noisy quantum channels 70, 62, 47.

This progress emphasises the potential benefits in studying such models of quantum information processing, and motivates a more systematic study of quantum control models in this context.
As a first step in this direction, we recently formalised a computational model strictly generalising quantum circuits, called quantum circuits with quantum control of causal order (QC-QC), which incorporates and generalises quantum control structures 95.
This model will serve as the base for a systematic study of the computational power of quantum computations exploiting quantum control, allowing us to understand the types of advantages this new resource of quantum control can provide.

With a better understanding of quantum computations with quantumly controlled operations, we will aim to develop algorithms for several problems where quantum control appears to be a promising resource.
In particular, we will look to use it to provide new algorithms for quantum metrology and parameter estimation, both key problems seen as near-to-mid-term applications of quantum information, that are more efficient than existing approaches and, above all, more robust in the noisy versions of these problems.
An important first step we are undertaking in this direction is to generalise existing advantages obtainable with quantum circuits with quantum control of causal order from problems in a noiseless regime – where the controlled operations are unitary – to a noisy regime, where the controlled operations are noisy quantum channels.

In order to obtain such results, the mathematical tools being studied and developed in the other research axes of the proposed team, most notably convex optimisation, will be of utmost importance (e.g., Goals 3.1.3, 3.1.4 and 3.2.1).
These research goals also build on existing collaborations on the quantum control of causal order with physicists at the Institut Néel in Grenoble (including on the development of QC-QCs 95), in order to transfer physical insight on quantum control towards new applications in information processing.
We likewise plan to collaborate with the CAPP team at LIG to study diagrammatic calculi to understand how these new types of computations can be composed and compiled, building on existing collaborations with Mehdi Mhalla on quantum control 47.

The quantum control of quantum operations has potential as a resource throughout quantum information processing: not just for quantum computation but, e.g., also for quantum communication 77.
As an example, it can be used to send messages through a quantum network in a superposition of different paths, amounting to a novel extension of quantum Shannon theory 62.
By doing so, it has recently been shown, in a simple proof-of-principle setting, that one can notably reduce the effect of noise on a message as it traverses a network 70, 47, an effect that has also been experimentally verified 88.
We will study this possibility further, looking at how it can be extended to practical network topologies and aim to show how it can be exploited to improve quantum communication protocols and lead to novel approaches for quantum cryptography.

One can also generalize the model of computation one step further. In causally indefinite models of computation such as QC-QCs, the relative order between gates is rendered indefinite through the use of quantum control systems.
Nonetheless, the computation itself still proceeds in the presence of a fixed, causal clock or external control.
We will seek to go one step further in the quantum-classical divide and allow for this external control to also be quantum and autonomous. This would require the addition of another quantum system implementing the quantum gates themselves. In the case of a fixed causal order, this autonomous device needs its own internal notion of time, hence it should also be an accurate quantum clock 99. Since it is quantum, this clock which controls the interactions can be prepared in a superposition of different time states, leading to new types of non-casually implemented gates and potentially novel applications.

Multipartite entanglement plays an important role in quantum protocols and in quantum games, and is likewise a key resource for measurement-based quantum computing. Nonetheless, our understanding of multipartite entanglement as a resource is much less developed than for the simpler, but important, case of bipartite entanglement. The objective of this task is to develop our understanding of multipartite entanglement, how it can contribute to reducing the effect of noise in communication and computation and, more generally, how it can improve coordination in multipartite scenarios.

In particular, we plan to consider communication problems over noisy classical networks and quantify the extent to which multipartite nonlocality can improve the transmission rates [85]. Focussing on relevant classical network communication scenarios, we will ask whether entanglement between some of the involved parties can significantly improve the rates.

In a related direction, we plan to study game-theoretic settings involving players with divergent interests and the advantage that can be achieved by using multipartite entangled states and, in particular, quantum graph states [76]. In collaboration with Mehdi Mhalla, we will aim to use such advantages to provide new approaches to certify multipartite entangled states, and in particular to self-test quantum graph states – important resources in certain quantum computational models – by certifying them solely from the correlations they produce [51, 50]. We plan to use progress towards Goal 3.1.4 to provide a finer analysis of the problem.
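The kind of quantum advantage underlying such certification can be illustrated in the simplest bipartite case (the CHSH game, not the graph-state protocols above): with a maximally entangled pair and suitable measurement angles, the CHSH correlator reaches Tsirelson's bound 2√2, beyond the classical bound of 2. A minimal sketch:

```python
import numpy as np

def observable(theta):
    """Spin observable cos(theta) Z + sin(theta) X, with eigenvalues +-1."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # maximally entangled |00> + |11>
rho = np.outer(phi, phi)

def corr(a, b):
    """Correlator <A(a) x B(b)> on the shared state; equals cos(a - b) here."""
    return np.trace(rho @ np.kron(observable(a), observable(b))).real

# Optimal angles: Alice at 0 and pi/2, Bob at +-pi/4.
A0, A1 = 0.0, np.pi / 2
B0, B1 = np.pi / 4, -np.pi / 4
chsh = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print(chsh)  # ~2.828 = 2*sqrt(2), above the classical bound of 2
```

Self-testing pushes this logic further: the value 2√2 is attained (up to local isometries) only by the maximally entangled state, so the correlations alone certify the state.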

Our work is of a theoretical nature but can have important applications for the development of quantum technologies in the near future, as explained in the research directions. This includes in particular:

We developed in [41] new protocols to learn quantum many-body states and phases from data. These exponentially improve the sample complexity required for this task compared with previous approaches, and enlarge the classes of systems to which such results apply.

In [35], we explore a comprehensive theoretical model for universal analogue quantum simulation, guided by fundamental principles initially proposed by Cirac and Zoller. We address the critical aspect of scalability, emphasizing in particular that the strength of interactions in a simulation should not grow with system size. The paper also contributes to Hamiltonian complexity theory, presenting a versatile framework for gadgets that may be of independent interest. Specifically, we establish that size-dependent scaling is unavoidable in Hamiltonian locality reduction. Furthermore, we introduce a novel method leveraging intermediate measurements and the quantum Zeno effect to bypass these limitations in locality reduction. This gadget framework potentially lays the groundwork for formalizing and resolving longstanding questions about gadgets.
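The quantum Zeno effect exploited by these gadgets can be illustrated on a single qubit (a minimal sketch, not the gadget construction of the paper): under Hamiltonian evolution that would fully rotate |0⟩ to |1⟩, frequent projective measurements freeze the state near |0⟩.

```python
import numpy as np

def evolve(state, t):
    """Evolve under H = sigma_x for time t: U = cos(t) I - i sin(t) X."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * X
    return U @ state

def survival_probability(n, total_time=np.pi / 2):
    """Start in |0>, measure in the {|0>,|1>} basis n times at equal
    intervals; return the probability of finding |0> every single time."""
    dt = total_time / n
    p = 1.0
    state = np.array([1, 0], dtype=complex)
    for _ in range(n):
        state = evolve(state, dt)
        p *= abs(state[0]) ** 2
        state = np.array([1, 0], dtype=complex)  # collapse back onto |0>
    return p

# n = 1: the state has fully rotated to |1>, so the survival probability is ~0.
# n = 100: frequent measurements freeze the evolution (Zeno), survival -> 1.
print(survival_probability(1), survival_probability(100))
```

Since the per-step survival probability is cos²(t/n), the total survival probability cos²ⁿ(t/n) → 1 as n grows, which is what the intermediate-measurement gadgets exploit.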

In [39] we focus on improving the precision of quantum metrology in scenarios where only a limited number of measurements are available. Traditional methods in quantum metrology, like the Cramér-Rao bound, can become uninformative with only a few samples. We propose a new approach, termed "probably approximately correct (PAC) metrology," which evaluates the quality of a metrological protocol based on the probability of obtaining an estimate within a certain accuracy given a fixed number of samples. This approach is significant for practical quantum technologies, as it provides a more relevant evaluation of quantum sensors and measurement techniques in real-world scenarios where the availability of large sample sizes is often limited.
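The PAC criterion can be made concrete in its simplest classical-estimation form (an illustration with hypothetical numbers, not the protocols of the paper): for a measurement with outcome probability p, one asks how often the empirical frequency from n shots lands within ε of p.

```python
from math import comb

def pac_success(n, p, eps):
    """Probability that the empirical frequency of n Bernoulli(p) samples
    lies within eps of p (exact binomial computation; the small slack
    guards against floating-point edge cases at the interval boundary)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if abs(k / n - p) <= eps + 1e-12)

# E.g. estimating p = cos^2(theta/2) for an unknown phase theta from
# projective measurements: how often is the estimate eps-accurate?
p, eps = 0.7, 0.05
for n in (20, 100, 500):
    print(n, round(pac_success(n, p, eps), 3))
```

This "probability of an ε-accurate estimate at fixed n" is exactly the figure of merit the PAC approach evaluates, in contrast to asymptotic Cramér-Rao-type bounds.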

In the context of Aadil Oufkir's thesis [22], we studied the sample complexity of several learning and testing problems for quantum channels, including learning Pauli channels [32], testing the identity of quantum channels [15], and learning general channels with incoherent measurements [16].

Complex quantum networks require sharing entangled states between several parties and performing complicated entangled measurements in order to distribute entanglement across the network. As such measurements are complicated to perform and high-quality entanglement is a rare resource, it is important to find applications for noisy or weakly entangled states and simple measurements. In [43], we considered the cryptographic communication primitive of secret sharing, and showed that quantum entanglement allows one to improve the success rate of the task. While the maximum advantage is obtained with maximally entangled states, we show how to obtain it with only product measurements, which can easily be implemented, whereas previous schemes required highly entangled measurements. Moreover, we find that advantages can be obtained even with very weakly entangled states that are "unsteerable". We collaborated with experimentalists at Lund University (Sweden) to implement our protocol.
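For readers unfamiliar with the underlying primitive, the classical baseline of n-out-of-n secret sharing is easily sketched (this is only the classical XOR scheme being enhanced, not the quantum protocol of the paper): each individual share is uniformly random and reveals nothing, while all shares together recover the secret.

```python
import secrets

def split(secret, n, nbits=128):
    """XOR-based n-out-of-n secret sharing: draw n-1 uniform shares, and set
    the last one so that the XOR of all n shares equals the secret."""
    shares = [secrets.randbits(nbits) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last ^= s
    return shares + [last]

def reconstruct(shares):
    """XOR all shares together to recover the secret."""
    out = 0
    for s in shares:
        out ^= s
    return out

shares = split(0xC0FFEE, 3)
print(hex(reconstruct(shares)))  # -> 0xc0ffee
```

The quantum advantage concerns the success rate of a noisy, game-like version of this task; the scheme above only fixes the ideal classical functionality being targeted.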

Building on our previous work showing how one can certify causal indefiniteness in a "semi-device-independent" manner, we showed in [27] how this certification can be made stronger and transformed into a fully device-independent certification technique, in which no assumptions are made about the devices being used, other than the network structure describing how certain devices are connected. This shows that some important processes, like the quantum switch, can be certified in a device-independent manner, despite this being believed impossible until recently.

In [5], we addressed the problem of coding for classical multiple-access channels (MACs) with the assistance of non-signaling correlations between parties. It is well known that non-signaling assistance does not change the capacity of classical point-to-point channels. However, it was recently observed that one can construct MACs from two-player non-local games while relating the winning probability of the game to the capacity of the MAC. By considering games for which entanglement (a special kind of non-signaling correlation) increases the winning probability (e.g., the Magic Square game), this shows that for some specific kinds of channels, entanglement between the senders can increase the capacity. Here, we show that the increase in capacity from non-signaling assistance goes beyond such special channels and applies even to a simple deterministic MAC: the binary adder channel. In particular, using non-signaling assistance, the sum-rate of the binary adder channel can be strictly increased.

We also undertook a similar study for the broadcast channel in [30].

In [24], we show that a quantum architecture with an error correction procedure limited to geometrically local operations incurs an overhead that grows with the system size, even if arbitrary error-free classical computation is allowed. In particular, we prove a lower bound on the overhead required to operate a quantum error-correcting code in 2D at a given logical error rate.
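The basic phenomenon of trading overhead for logical error rate can be illustrated with the simplest possible code (the repetition code with hypothetical numbers, not the 2D local architectures of the paper): below threshold, the logical error rate falls with the code distance, so hitting a lower target error rate costs more physical qubits per logical qubit.

```python
from math import comb

def logical_error(d, p):
    """Majority-vote repetition code of distance d under i.i.d. bit flips
    with probability p: a logical error occurs iff more than d//2 bits flip."""
    return sum(comb(d, k) * p**k * (1 - p)**(d - k)
               for k in range(d // 2 + 1, d + 1))

# Below threshold (p < 1/2) the logical error rate decays exponentially in d,
# so each extra decade of protection costs additional physical qubits.
p = 0.05
for d in (3, 7, 15):
    print(d, logical_error(d, p))
```

The cited result shows that, with geometrically local operations in 2D, this kind of overhead is not merely a feature of naive codes but is unavoidable.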

In [3], we introduce and study a fundamental trade-off which relates the amount by which noise reduces the accuracy of a quantum clock to the amount of information about the energy of the clock that leaks to the environment. Specifically, we consider an idealized scenario in which a party Alice prepares an initial pure state of the clock, allows the clock to evolve for a time that is not precisely known, and then transmits the clock through a noisy channel to a party Bob. Meanwhile, the environment (Eve) receives any information about the clock that is lost during transmission. We prove that Bob's loss of quantum Fisher information about the elapsed time is equal to Eve's gain of quantum Fisher information about a complementary energy parameter. We also prove a similar, but more general, trade-off that applies when Bob and Eve wish to estimate the values of parameters associated with two non-commuting observables. We derive the necessary and sufficient conditions for the accuracy of the clock to be unaffected by the noise, which form a subset of the Knill-Laflamme error-correction conditions. A state and its local time evolution direction, if they satisfy these conditions, are said to form a metrological code. We provide a scheme to construct metrological codes in the stabilizer formalism. We show that there are metrological codes that cannot be written as a quantum error-correcting code with similar distance in which the Hamiltonian acts as a logical operator, potentially offering new schemes for constructing states that do not lose any sensitivity upon application of a noisy channel. We discuss applications of the trade-off relation to sensing using a quantum many-body probe subject to erasure or amplitude-damping noise.
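The central quantity in this trade-off, the quantum Fisher information (QFI), takes a simple closed form for pure states under phase evolution: F = 4(⟨H²⟩ − ⟨H⟩²). A minimal sketch (the textbook formula, not the trade-off result itself) shows why superpositions of energy eigenstates make good clocks while eigenstates carry no timing information:

```python
import numpy as np

def qfi_pure(psi, H):
    """Quantum Fisher information of a pure state psi for the family
    exp(-i H t) |psi>: F = 4 (<H^2> - <H>^2)."""
    mean = np.vdot(psi, H @ psi).real
    mean_sq = np.vdot(psi, H @ (H @ psi)).real
    return 4 * (mean_sq - mean**2)

H = np.diag([0.0, 1.0])                       # two-level clock Hamiltonian
plus = np.array([1, 1]) / np.sqrt(2)          # energy superposition: a good clock
eigenstate = np.array([1.0, 0.0])             # energy eigenstate: no clock at all
print(qfi_pure(plus, H), qfi_pure(eigenstate, H))  # 1.0 and 0.0
```

The cited result tracks how a noisy channel redistributes exactly this kind of Fisher information between Bob (about time) and Eve (about energy).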

In [7] we analyse the canonical causally indefinite resource, the so-called quantum switch, from an energetic perspective. We show that a certain task – discriminating between different properties of noisy quantum operations – can be performed more efficiently (in terms of energetic requirements) than using a standard fixed-order protocol. We use this energetic approach to shed light on a recent debate about the status of proof-of-principle implementations of the quantum switch and simulations thereof. Our approach shows that (in the specific physical setting we consider) faithful implementations can be distinguished from naive "four-box" simulations, and perform better (for a fixed energy budget) than these simulations. This work opens a new path to explore the energetic advantages of causally indefinite protocols.

In [40], we study the possibility of using causally indefinite strategies to estimate parameters of channels in quantum metrology. Several recent works have claimed that the quantum switch can provide advantages in the metrological problem of estimating a parameter characterising a noisy quantum channel. However, we show that these works fail to properly compare their results against all possible standard metrological protocols, and that these advantages do not hold up to scrutiny. By developing convex optimisation techniques to compute the relevant metrological quantity, the "quantum Fisher information", we are able to optimise the performance of protocols using quantum control of causal order, and compare it to all protocols that have a fixed causal order. We find that, for certain classes of noisy channels, this resource can provide metrological advantages, but that it is necessary to consider protocols other than the quantum switch for this. A key ingredient for these advantages appears to be so-called dynamical quantum control, a property that we are studying in its own right in work in preparation.

In [19] we study the computational advantages of causal indefiniteness in query complexity problems using the framework of quantum supermaps. Using semi-definite programming approaches, we are able to calculate the exact query complexity of different types of computations for small input sizes (4-bit Boolean functions with 2 queries to the oracle): standard quantum circuits, circuits with quantum control of causal order, and more general causally indefinite supermaps. We find that, for certain functions, causally indefinite supermaps can provide an advantage in query complexity, uncovering a new computational advantage of causal indefiniteness that, in contrast to previously known advantages, is formulated in a more standard complexity-theoretic setting. However, we prove that the class of quantum circuits with quantum control of causal order is unable to improve upon standard quantum circuits in this query complexity setting. We are currently working to transform these results into asymptotic separations in complexity.
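As a baseline for what a query-complexity separation looks like (the textbook Deutsch algorithm, far simpler than the supermap setting of the paper): a quantum circuit decides whether f: {0,1} → {0,1} is constant with a single oracle query, whereas any classical algorithm needs two.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def deutsch(f):
    """Decide whether f is constant or balanced with ONE query to the phase
    oracle |x> -> (-1)^f(x) |x>, sandwiched between Hadamards."""
    psi = H @ np.array([1.0, 0.0])                        # prepare |+>
    psi = np.array([(-1) ** f(x) for x in (0, 1)]) * psi  # one oracle query
    psi = H @ psi
    # Constant f leaves |+> (up to sign), which H maps back to |0>;
    # balanced f produces |->, which H maps to |1>.
    return "constant" if abs(psi[0]) > 0.5 else "balanced"

print(deutsch(lambda x: 0), deutsch(lambda x: x))  # constant balanced
```

The supermap results above ask the analogous question for richer computational classes: whether relaxing the causal structure of queries can shave off further queries.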

Entanglement is one of the primary quantum resources behind many quantum advantages, but many applications focus on bipartite entanglement shared between only two parties, in part due to the simplicity of this setting. In [18] we studied multipartite entanglement in the context of non-collaborative game theory, to understand how it can be used as a resource in tasks involving conflicts of interest. We showed how quantum entanglement can lead to higher "social welfare" – a measure of the quality of a Nash equilibrium – than can be obtained with classical resources. Moreover, this setting allowed us to uncover surprising and nuanced differences between having direct access to quantum resources and indirect access via idealised black boxes. We used the technique of self-testing correlations in a novel way to show the separation between these types of quantum resources.

It is natural to expect a complete physical theory to have the ability to consistently model agents as physical systems of the theory. In [75], Frauchiger and Renner (FR) claim to show that when agents in quantum theory reason about each other's knowledge in a certain Wigner's friend scenario, they arrive at a logical contradiction. It was suggested, among other things, that this posed a problem for computation, as different agents would conclude different outcomes of a computation. In light of this, Renner often poses the challenge: provide a set of reasoning rules that can be used to program quantum computers that may act as agents, and that are (a) logically consistent, (b) generalisable to arbitrary Wigner's friend scenarios, (c) efficiently programmable and (d) consistent with the temporal order of the protocol. In [45] we develop a general framework in which we show that every logical Wigner's friend scenario (LWFS) can be mapped to a single temporally ordered quantum circuit, which allows agents in any LWFS to reason in a way that meets all four criteria of the challenge. Importantly, our framework achieves this general resolution without modifying classical logic, unitary quantum evolution or the Born rule, while allowing agents' perspectives to be fundamentally subjective. We analyse the FR protocol in detail, showing how the apparent paradox is resolved there. We show that apparent logical contradictions in any LWFS only arise when ignoring the choice of Heisenberg cut in scenarios where this choice matters, and that taking this dependence into account always resolves the apparent paradox. Our results establish that the universal applicability of quantum theory poses no threat to multi-agent logical reasoning, and we discuss the implications of these results for FR's claimed no-go theorem.

We developed in [28] a new framework which expresses constraints on the energy transfers between two or more arbitrary quantum systems. The constraints take the same form as the laws of thermodynamics, but extend to any quantum system, whatever its size or initial state. On the one hand, our framework identifies an effective temperature associated with the von Neumann entropy of a quantum state, and deduces from it the energy flows that are uncontrolled (i.e., the heat). On the other hand, we identify the general type of resources in quantum states that must be consumed to control energy flows (e.g., to reverse the direction of a heat flow as in a refrigerator, or to power an engine). The versatility of our framework allows us to explore limitations on the performance of quantum devices based on the systems used to implement them, as well as new directions for storing and recovering useful energy at the nanoscale.
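The two ingredients, the von Neumann entropy and a temperature assigned to a state through it, can be sketched numerically. The "effective temperature" computed below is only an entropy-matching illustration for a diagonal qubit state (the temperature of the Gibbs state with the same entropy); the framework of [28] defines this notion more carefully.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log rho], in nats."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

def effective_temperature(rho, H):
    """Scan for the temperature T at which the Gibbs state of H (assumed
    diagonal here) has the same entropy as rho. Illustrative only."""
    target = von_neumann_entropy(rho)
    best_T, best_gap = None, np.inf
    for T in np.linspace(0.01, 10, 2000):
        g = np.exp(-np.diag(H) / T)
        g /= g.sum()
        gap = abs(von_neumann_entropy(np.diag(g)) - target)
        if gap < best_gap:
            best_T, best_gap = T, gap
    return best_T

rho = np.diag([0.8, 0.2])       # a partially mixed qubit state
H = np.diag([0.0, 1.0])         # qubit Hamiltonian with unit gap
print(von_neumann_entropy(rho), effective_temperature(rho, H))
```

For this state the matching Gibbs populations are (0.8, 0.2), giving T = 1/ln 4 ≈ 0.72 in units of the gap; in the framework, such a temperature is the reference against which controlled and uncontrolled (heat) energy flows are separated.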

In [28] we also investigated direct proof-of-principle applications of the formalism to the smallest possible quantum machines, based on two interacting qubits. A collaboration with the group of Benjamin Huard (ENS Lyon), an experimentalist specialized in superconducting circuit setups, is ongoing to test these results experimentally. In a collaboration with the group of Karyn Le Hur (École Polytechnique) [1], we investigated another manifestation of our findings: the ability of the environment of a quantum system to store work rather than heat, providing a new potential mechanism for efficient energy transfers.