During the last century, the communications industry was devoted to improving human connectivity, leading to seamless worldwide coverage that copes with increasing data-rate demands and mobility requirements.
The Internet revolution drew on a robust and efficient multi-layer architecture ensuring end-to-end services. In a classical network architecture, the different protocol layers are compartmentalized and cannot easily interact. For instance, source coding is performed at the application layer while channel coding is performed at the physical (PHY) layer. This multi-layer architecture has blocked any attempt to exploit low-level cooperation mechanisms such as relaying, PHY-layer network coding or joint estimation.
During the last decade, a major shift, often referred to as the Internet of Things (IoT), was initiated toward a machine-to-machine (M2M) communication paradigm, which is in sharp contrast with classical centralized network architectures. The IoT enables machine-based services exploiting a massive quantity of data virtually spread over a complex, redundant and distributed architecture.

This new paradigm renders the aforementioned centralized network architecture obsolete.

This is the era of the Internet of Everything.
It is worth noting that these new architectures can be tackled from different perspectives, e.g. data management, protocol design, middleware, or algorithmic design. Our main objective in Maracas is to address this problem from a communication theory perspective. Our background in communication theory includes information theory, estimation theory, learning and signal processing. Our strategy relies on three fundamental and complementary research axes:

While our expertise is mostly related to the optimization of wireless networks from a communication perspective, the Maracas project aims to broaden our scope to Computing Networks, where a challenging issue is to jointly optimize architectures and applications, and to break the classical separation between networking and data processing.
This will drive us to change our initial positioning and to truly think in terms of information-centric networks.

To summarize, Computing Networks can be described as highly distributed and dynamic systems, where information streams consist of a huge number of transient data flows between a huge number of nodes (sensors, routers, actuators, etc.) with computing capabilities at the nodes. These Computing Networks are the invisible yet necessary skeleton of cloud- and fog-computing-based services.

Our research strategy is to describe these Computing Networks as complex large scale systems in an information theory framework, but in association with other tools, such as stochastic geometry, stochastic network calculus, game theory or machine learning.

Multi-user communication capability is a central feature, to be tackled in association with other concepts and under a large variety of constraints related to the data (storage, secrecy, etc.) or to the network (energy, self-healing, etc.).

The information theory literature, or more generally the communication theory literature, is rich in appealing techniques dedicated to efficient multi-user communications: e.g. physical-layer network coding, amplify-and-forward, full duplexing, coded caching at the edge, superposition coding. But despite their promising performance, none of these technologies plays a central role in current protocols. The reasons are twofold: i) these techniques are usually studied in an oversimplified theoretical framework which neglects many practical aspects (feedback, quantization, etc.) and cannot tackle large-scale networks, and ii) the proposed algorithms have a high complexity and are not compatible with the classical multi-layer network architecture.

Maracas addresses these questions, leveraging its extensive experience in wireless network design.

The aim of Maracas is to push, from theory to practice, a fully cross-layer design of Computing Networks.

As such, the Maracas project goes well beyond wireless networks. The Computing Networks paradigm applies to a wide variety of architectures including wired networks, smart grids, and nanotechnology-based networks. One Maracas research axis will be devoted to identifying new research topics or scenarios where our algorithms and mathematical models could be useful.

As presented in the first section, Computing Networks is a concept generalizing the study of multi-user systems from a communication perspective. This problem is partly addressed in the aforementioned references.
Optimizing Computing Networks relies on simultaneously exploiting multi-user communication capabilities, on the one hand, and storage and computing resources, on the other hand.
Such optimization needs to cope with various constraints such as energy efficiency or energy harvesting, delays, reliability or network load.

The notion of reliability (used in the MARACAS acronym) is central when considered in its most general sense: ultimately, the reliability of a Computing Network measures its capability to perform its intended role with a given confidence level. The figure represents the most important performance criteria to be considered to achieve reliable communications. These metrics fit with those considered in 5G and beyond technologies.

On the theoretical side, multi-user information theory is a keystone element. It is worth noting that classical information theory focuses on the power-bandwidth tradeoff, usually referred to as the Energy Efficiency-Spectral Efficiency (EE-SE) tradeoff (green arrow in the figure). However, the other constraints can be efficiently introduced by using a non-asymptotic formulation of the fundamental limits, in association with other tools devoted to the analysis of random processes (queuing theory, etc.).
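As an illustration of this non-asymptotic viewpoint, the standard normal approximation of the maximal coding rate over a real AWGN channel (capacity minus a dispersion backoff) can be computed in a few lines. This is a sketch of ours for illustration, not code from the team:

```python
from statistics import NormalDist
from math import log2, sqrt, e

def awgn_normal_approx(snr, n, eps):
    """Normal approximation of the maximal rate (bits/channel use)
    over n uses of a real AWGN channel at block error rate eps."""
    C = 0.5 * log2(1 + snr)                                       # Shannon capacity
    V = (snr * (snr + 2)) / (2 * (snr + 1) ** 2) * log2(e) ** 2   # channel dispersion
    q_inv = NormalDist().inv_cdf(1 - eps)                         # Q^{-1}(eps)
    return C - sqrt(V / n) * q_inv + log2(n) / (2 * n)

# The backoff from capacity shrinks as the blocklength grows:
for n in (100, 1000, 10000):
    print(n, round(awgn_normal_approx(snr=1.0, n=n, eps=1e-3), 3))
```

The printed rates approach the Shannon capacity (0.5 bit per channel use at 0 dB SNR) as the blocklength grows, making explicit how latency and reliability constraints temper the EE-SE tradeoff.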

Maracas aims at studying Computing Networks.

In particular, Maracas combines techniques from communication and information theory with statistical signal processing, control theory, and game theory. Wireless networks are the emblematic application for Maracas, but other scenarios appeal to us as well, such as molecular communications, smart grids or smart buildings.

Several teams at Inria are addressing computing networks, but working on this problem with an emphasis on communication aspects is unique within Inria.

The complexity of Computing Networks comes first from the high dimensionality of the problem: i) thousands of nodes, each with up to tens of tunable parameters, and ii) tens of variable objective functions to be minimized/maximized.

In addition, the necessary decentralization of the decision process, the non-stationary behavior of the network itself (mobility, ON/OFF switching) and of the data flows, and the necessary reduction of costly feedback and signaling (channel estimation, topology discovery, medium access policies, etc.) further increase the problem complexity.

The original positioning of Maracas lies in its capability to address three complementary challenges:

Our research is organized in 4 research axes:

Axis 1 - Fundamental Limits of Reliable Communication Systems:
Information theory is revisited to integrate reliability in the wide sense. The non-asymptotic theory, which has made progress recently and attracted a lot of interest in the information theory community, is a good starting point. But to address Computing Networks in the wide sense, it is necessary to go back to the foundations of communication theory and to derive new results, e.g. for non-Gaussian channels or for multi-constrained systems.

This also means revisiting the fundamental estimation-detection problem in a general multi-criteria, multi-user framework to derive tractable and meaningful bounds.

As mentioned in the introduction, Computing Networks also rely on a data-centric vision, where transmission, storage and processing are jointly optimized. The strategy of caching at the edge proposed for cellular networks shows the high potential of considering data and network properties simultaneously. Maracas intends to extend its skills on source coding aspects to tackle a data-oriented modeling of Computing Networks.

Axis 2 - Algorithms and protocols:
Our second objective is to elaborate new algorithms and protocols able to achieve or at least to approach the aforementioned fundamental limits.
While the exploration of fundamental limits is helpful to determine the most promising strategies (e.g. relaying, cooperation, interference alignment) to increase system performance, transforming these degrees of freedom into real protocols is a nontrivial issue.
One reason is that the complexity of multi-user communication strategies grows exponentially with the number of users, due to the need for coordination, feedback and signaling.
The general problem is a decentralized and dynamic multi-agent, multi-criteria optimization problem, and its general formulation is a non-linear, non-convex large-scale problem.

The conventional research direction aims at reducing the complexity by relaxing some constraints or by reducing the number of degrees of freedom. For instance, topology interference management is an appealing model used to reduce feedback needs in decentralized wireless networks, leading to original and efficient algorithms.

Axis 3 - Experimental validation :
With the rapid evolution of network technologies, and their increasing complexity, experimental validation is necessary for two reasons: to get data, and to validate new algorithms on real systems.

The Maracas activity leverages the FIT/CorteXlab platform and our strong partnerships with leading industrial players including Nokia Bell Labs, Orange Labs, Sigfox and Sequans. Beyond the platform itself, which offers a worldwide-unique and remotely accessible testbed, Maracas also develops original experiments exploiting reproducibility, remote accessibility, and deployment facilities to produce original results at the interface of academic and industrial research. FIT/CorteXlab uses the GNU Radio environment to evaluate new multi-user communication systems.

Our experimental work is developed in collaboration with other Inria teams, especially in the Rhone-Alpes centre, but also in the context of the future SILECS project, which will implement the convergence between the FIT and Grid'5000 infrastructures in France, in cooperation with European partners and infrastructures. SILECS is a unique framework which will allow us to test our algorithms and to generate data, as required to develop a data-centric approach for Computing Networks.

Last but not least, software radio technologies are leaving the confines of research laboratories and are now available on the mass market as cheap (a few euros) programmable equipment, making it possible to set up non-standard radio systems. The coexistence of home-made, non-official radio systems with legacy ones could jeopardize the deployment of the Internet of Things. Developing efficient algorithms able to detect, analyse and control spectrum usage is therefore an important issue. Our research on FIT/CorteXlab will contribute to this know-how.

Axis 4 - Other application fields :
Even if the wireless network context remains challenging and provides interesting problems, Maracas aims to broaden its exploratory playground from an application perspective. We are looking for new communication systems, or simply other multi-user decentralized systems, for which the theory developed in the context of wireless networks can be useful.
Basically, Maracas may address any problem where multiple agents try to optimize their joint behavior and where communication performance is critical (e.g. vehicular communications, multi-robot systems, cyber-physical systems).
Following this objective, we have already studied the problem of missing-data recovery in smart grids and the original paradigm of molecular communications.

Of course, the objective of this axis is not to address random topics but to exploit our scientific background on new problems, in collaboration with other academic teams or industry. This is a winning strategy to develop new partnerships, in collaboration with other Inria teams.

The fifth generation (5G) broadens the usage of cellular networks but requires new features, typically very high rates, high reliability, and ultra-low latency, for immersive applications, the tactile Internet, and M2M communications.

From the technical side, new elements such as millimeter waves, massive MIMO, and massive access are under evaluation. The initial 5G standard, validated in 2019, is ultimately not truly disruptive with respect to 4G, and the clear breakthrough is not there yet. The ideal network architecture for billions of devices in the general context of the Internet of Things is not well established, and the debate continues between several proposals such as NB-IoT, Sigfox, and LoRa. We are developing a deep understanding of these techniques, in collaboration with major actors (Orange Labs, Nokia Bell Labs, Sequans, Sigfox), and we want to be able to evaluate, compare and propose evolutions of these standards from an independent point of view.

This is why we are interested in developing partnerships with major industries, access providers but also with service providers to position our research in a joint optimization of the network infrastructure and the data services, from a theoretical perspective as well as from experimentation.

The energy footprint, and more generally the sustainability, of wireless cellular networks and wireless connectivity is open to question.

We develop our models and analyses with careful consideration of the energy footprint: sleeping modes, power adaptation, interference reduction, energy harvesting, and many other techniques can be optimized to reduce the energy impact of wireless connectivity. In a Computing Networks approach, considering transmission, storage and computation constraints simultaneously may help to drastically reduce the overall energy footprint.

Smart environments rely on the deployment of many sensors and actuators that create interactions between twinned virtual and real worlds. These smart environments (e.g. smart buildings) are for us an ideal playground to develop new models based on information theory and estimation theory to optimize the network architecture, placing storage, transmission and computation at the right place.

Our work can be seen as the invisible side of cloud/edge computing. In collaboration with other teams expert in distributed computing or middleware (typically at CITIlab, with the Dynamid team of Frédéric Le Mouel) and in the framework of the SPIE/ICS-INSA Lyon chair, we want to optimize the mechanisms associated with these technologies: in a multi-constrained approach, we want to design new distributed algorithms appropriate for large-scale smart environments.

From a broader perspective, we are interested in various applications where communication aspects play an important role in multi-agent systems that target the processing of large sets of data. Our contribution to the development of TousAntiCovid falls into this area.

During the first 6G wireless meeting, held in Lapland, Finland in March 2019, machine learning (ML) was clearly identified as one of the most promising breakthroughs for future 6G wireless systems, expected to be in use around 2030. The research community is fully leveraging the international ML wave. We strongly believe that the paradigm of wireless networks is moving toward a new era. Our view is supported by the fact that artificial intelligence (AI) in wireless communications is not new at all. The telecommunications industry has been seeking for 20 years to reduce the operational complexity of communication networks in order to simplify constraints and to reduce deployment costs. This obviously relies on data-driven techniques allowing the network to self-tune its own parameters. Over the successive 3GPP standard releases, more and more sophisticated network control has been introduced, supporting increasing flexibility and further self-optimization capabilities for radio resource management (RRM) as well as for network parameter optimization.

We target the following key elements:

Many communication mechanisms are based on acoustic or electromagnetic propagation; however, the general theory of communication is much more widely applicable. One recent proposal is molecular communication, where information is encoded in the type, quantity, or time of release of molecules. This perspective has interesting implications for the understanding of biochemical processes, and also for chemical-based communication where other signaling schemes are not easy to use (e.g., in mines). Our work in this area focuses on two aspects: (i) the fundamental limits of communication (i.e., how much data can be transmitted within a given period of time); and (ii) signal processing strategies that can be implemented by circuits built from chemical reaction-diffusion systems.
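As a minimal illustration of aspect (i), the impulse response of a free-diffusion molecular channel follows the 3D Green's function of Fick's law; the sketch below (our own toy example, with purely illustrative parameter values, not taken from the cited work) locates its peak, which sets a natural symbol-timing scale:

```python
from math import pi, exp

def concentration(Q, D, d, t):
    """Expected molecule concentration at distance d and time t after an
    impulsive release of Q molecules diffusing freely in 3D (Fick's law)."""
    return Q / (4 * pi * D * t) ** 1.5 * exp(-d * d / (4 * D * t))

# Illustrative values: Q molecules, D in um^2/s, d in um.
Q, D, d = 1e4, 100.0, 10.0
t_peak_theory = d * d / (6 * D)      # analytic peak time of the 3D impulse response
ts = [i * 1e-3 for i in range(1, 2001)]
t_peak_num = max(ts, key=lambda t: concentration(Q, D, d, t))
print(t_peak_theory, t_peak_num)     # numerical peak matches d^2 / (6 D)
```

The received pulse is both delayed and broadened with distance, which is exactly why timing- and quantity-based encodings face a fundamental rate-reliability tradeoff in this channel.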

A novel perspective introduced in our work is the incorporation of coexistence constraints. That is, we consider molecular communication in a crowded biochemical environment where communication should not impact the pre-existing behavior of the environment. This has led to new connections with communication subject to security constraints, as well as with the stability theory of stochastic chemical reaction-diffusion systems and of systems of partial differential equations which provide deterministic approximations.

Considering our research activities, most of our work is based on theory or simulations. We may be concerned with the following aspects:

Our research may impact the energy consumption of the digital world, even if the current debate on 5G is ill-posed. It is worth noting that the rebound effect associated with any technology should be considered carefully.

Typically, the design of earlier wireless protocols focused on high rates and high quality of service, with little consideration of energy and CO2 footprints.

In the future, we will contribute to better understanding large scale impact of new communication technologies, and to investigate how innovation can help reducing the energy footprint, and may help to build a greener world.

Dadja Anade, Hassan Kallam and Cyrille Morin successfully defended their PhDs.

We explored new side-road studies in axis 4.

The cooperation with Nokia Bell Labs has been strengthened with one more PhD student starting.

We achieved new results around molecular communications.

We evaluated different algorithmic solutions in multi-user communication scenarios in GNU Radio and proposed standard implementations on CorteXlab. Both contributions are discussed in the following sections.

FIT (Future Internet of Things) was a French Equipex (Équipement d'excellence) built to develop an experimental facility: a federated and competitive infrastructure with international visibility and a broad panel of customers. FIT is composed of four main parts: a Network Operations Center (FIT NOC), a set of IoT testbeds (FIT IoT-Lab), a set of wireless testbeds (FIT-Wireless), which includes the FIT/CorteXlab platform managed by the Maracas team, and finally a set of cloud testbeds (FIT-Cloud). In 2014 the construction of the room was completed and SDR nodes were installed: 42 industrial PCs (Aplus Nuvo-3000E/P), 22 NI radio boards (USRP) and 18 Nutaq boards (PicoSDR, 2x2 and 4x4) can now be programmed remotely over the Internet.

Although the FIT project development phase ended in 2019, CorteXlab has seen continued usage as well as further developments. FIT/CorteXlab has been used by both INSA and the European GNU Radio Days for lectures and tutorials. Several scientific measurement campaigns have taken place in the FIT/CorteXlab experimentation room and are currently being analyzed.

In spite of the global Covid pandemic, the years of 2020 and 2021 have seen several key developments in CorteXlab:

In the coming years, we will pursue the following objectives for CorteXlab:

We proposed the use of CorteXlab in the framework of the PEPR 5G, and we expect to collaborate in the future SNS-IA program of Horizon Europe. More specifically, CorteXlab can be used to explore machine-learning-based radio and resource management, and also to evaluate joint sensing and communication concepts for twinning worlds.

In 2021, as for all of us, many collaborative projects were delayed due to the pandemic, conferences were held remotely, and many experimental activities were postponed. For instance, the implementation of our two PHC European fundings, one with Serbia and the other with Austria, has been delayed.

Let us first give a global overview of our research results, along three main lines: fundamental results, multi-user network scenarios, and cross-road explorations.

Many applied problems in communication can be evaluated in the light of applied probability, which is at the root of information theory and estimation theory. Part of our work in axis 1 contributes to this area. In 2021 we obtained three important results.

Multi-user network scenarios constitute the natural playground for our research, for instance an isolated cell in the downlink, a LoRa random access topology, etc. We explore each scenario from three perspectives: fundamental limits (axis 1), efficient algorithms (axis 2) and experimental evaluation (axis 3). Before going deeper into these contributions, we summarize here the scenarios studied this year.

Point-to-point transmission: despite its long history and well-established results, the basic scenario of one transmitter and one receiver still presents challenges: typically, adapting the transmission to new propagation channels (e.g. THz, VLC) with non-classical properties, but also supporting new QoS requirements (URLLC), which arise especially in the context of machine-to-machine communications. Last but not least, revisiting the P2P communication problem in the light of machine learning may bring new breakthroughs in terms of encoder/decoder complexity and adaptability.

Our contributions this year concern, first, the evaluation of the capacity of a channel with additive multivariate α-stable noise. We also studied the second-order capacity of multi-delay constrained transmission of short packets (axis 1). Lastly, in collaboration with Jakob Hoydis and Fayçal Aoudia (Nokia), we elaborated deep learning strategies at the PHY layer (axis 2).

The objective of axis 4 is either to explore new setups not directly related to the wireless context, or to explore new techniques or models that could be useful for wireless. This year we continued to investigate:

Most fundamental results in information theory and communication theory rely on applied probability. In the context of multi-user networks with new QoS metrics such as ultra reliable and low latency communications (URLLC), evaluating fundamental limits and properties of decentralized networks is an open problem. To make progress in this field, it is necessary to better understand some fundamental properties in applied probability.

In this context, we made progress in 2021 in three directions:

The calculation of cumulative distribution functions (CDFs) of sums of random vectors is omnipresent in the realm of information theory. For instance, the joint decoding error probability in multi-user channels often boils down to the calculation of CDFs of random vectors. In the case of the memoryless Gaussian multiple access channel, under certain conditions on the channel inputs, the dependence testing bound corresponds to the CDF of a sum of independent and identically distributed (IID) random vectors. Unfortunately, the calculation of CDFs of random vectors requires elaborate numerical methods which often lead to unknown errors. From this perspective, approximations to these CDFs, e.g., Gaussian approximations and saddlepoint approximations, have gained remarkable popularity. In the case of Gaussian approximations, multi-dimensional Berry-Esseen-type theorems provide upper bounds on the approximation errors. These bounds are particularly precise around the mean. Alternatively, saddlepoint approximations are known to be more precise than Gaussian approximations far from the mean. Unfortunately, this claim is often justified only by numerical analysis, as formal upper bounds on the error induced by saddlepoint approximations are largely nonexistent. The PhD of Dadja Anade, co-supervised by Jean-Marie Gorce, Philippe Mary (INSA Rennes) and Samir Perlaza (Inria, NEO, Sophia Antipolis), contributed in this direction by introducing a real-valued function that approximates the CDF of a finite sum of real-valued IID random vectors. Both Gaussian and saddlepoint approximations are shown to be special cases of the proposed approximation, which is referred to as the exponentially tilted Gaussian approximation. The approximation error is upper bounded, and both upper and lower bounds on the CDF are obtained.
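For intuition only, the scalar baseline behind these results can be reproduced in a few lines. The sketch below (ours, not the thesis's construction) compares the empirical CDF of a sum of IID uniform variables with its Gaussian approximation, which is accurate near the mean, precisely the regime where Berry-Esseen-type bounds are sharp:

```python
import random
from math import erf, sqrt

random.seed(0)

def gaussian_cdf(x, mean, std):
    """CDF of a Gaussian with the given mean and standard deviation."""
    return 0.5 * (1 + erf((x - mean) / (std * sqrt(2))))

# Sum of n IID Uniform(0,1) variables: mean n/2, variance n/12.
n, trials = 30, 100000
sums = [sum(random.random() for _ in range(n)) for _ in range(trials)]

x = n / 2 + 1.0                  # evaluation point one unit above the mean
empirical = sum(s <= x for s in sums) / trials
gaussian = gaussian_cdf(x, n / 2, sqrt(n / 12))
print(round(empirical, 4), round(gaussian, 4))   # close to each other near the mean
```

Far in the tails the relative error of this Gaussian approximation degrades, which is what motivates the exponentially tilted variant studied in the thesis.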

In heavy-tailed data, such as data drawn from regularly varying models, extreme values can occur relatively often. As a consequence, in the context of hypothesis testing, extreme values can provide valuable information for identifying dependence between two data sets. In this work, the error exponent of a dependence test is studied when only processed data, recording whether or not the data exceeds a given value, is available. An asymptotic approximation of the error exponent is obtained, establishing a link with the upper tail dependence, which is a key quantity in extreme value theory. While the upper tail dependence has been well characterized for elliptically distributed models, much less is known in the non-elliptical setting. To this end, a family of non-elliptical distributions with regularly varying tails arising from shot noise is studied, and an analytical expression for the upper tail dependence is derived.

Asynchronous distributed algorithms are a popular way to reduce synchronization costs in large-scale optimization, and in particular for neural network training. However, for non-smooth and nonconvex objectives, few convergence guarantees exist beyond cases where closed-form proximal operator solutions are available. As training most popular deep neural networks corresponds to optimizing nonsmooth and nonconvex objectives, there is a pressing need for such convergence guarantees. In this work, we analyze for the first time the convergence of stochastic asynchronous optimization for this general class of objectives. In particular, we focus on stochastic subgradient methods allowing for block variable partitioning, where the shared model is asynchronously updated by concurrent processes. To this end, we use a probabilistic model which captures key features of real asynchronous scheduling between concurrent processes. Under this model, we establish convergence with probability one to an invariant set for stochastic subgradient methods with momentum. From a practical perspective, one issue with the family of algorithms that we consider is that they are not efficiently supported by machine learning frameworks, which mostly focus on distributed data-parallel strategies. To address this, we propose a new implementation strategy for shared-memory based training of deep neural networks for a partitioned but shared model in single- and multi-GPU settings. Based on this implementation, we achieve on average about 1.2x speed-up in comparison to state-of-the-art training methods for popular image classification tasks, without compromising accuracy .
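A toy sketch of the family of methods analyzed (ours, far simpler than the paper's asynchronous scheduling model): stochastic subgradient descent with momentum on a nonsmooth convex objective, with block-partitioned variables updated in turn, as concurrent processes would update a shared model:

```python
import random

random.seed(1)

# Nonsmooth objective f(x) = |x[0]| + |x[1]|, with the two coordinates
# treated as separate blocks (a stand-in for block-partitioned async updates).

def subgrad(xi):
    """A subgradient of |xi| (any value in [-1, 1] is valid at 0)."""
    return 1.0 if xi > 0 else (-1.0 if xi < 0 else 0.0)

x = [5.0, -3.0]                    # shared model, two blocks
v = [0.0, 0.0]                     # one momentum buffer per block
beta, lr = 0.8, 0.005
for k in range(2000):
    b = k % 2                      # alternate blocks, mimicking two workers
    g = subgrad(x[b]) + random.gauss(0, 0.1)   # noisy subgradient
    v[b] = beta * v[b] + g
    x[b] -= lr * v[b]
print([round(xi, 2) for xi in x])  # both blocks settle near the minimizer 0
```

The convergence result of the paper covers far more general objectives and genuinely asynchronous, probabilistic schedules; this sketch only shows the mechanics of momentum over partitioned blocks.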

A wide range of communication systems are corrupted by non-Gaussian noise, ranging from wireless to power line systems. In some cases, including interference in uncoordinated OFDM-based wireless networks, the noise is both impulsive and multivariate. At present, little is known about the information capacity and corresponding optimal input distributions. In this work, we derive upper and lower bounds on the information capacity by exploiting non-isotropic inputs. For the special case of sub-Gaussian α-stable noise models, a numerical study reveals that isotropic Gaussian inputs can remain a viable choice, although the performance depends heavily on the dependence structure of the noise.

A standard assumption in the design of ultra-reliable low-latency communication systems is that the duration between message arrivals is larger than the number of channel uses before the decoding deadline. Nevertheless, this assumption fails when messages arrive rapidly and reliability constraints require that the number of channel uses exceed the time between arrivals. In this work, we study channel coding in this setting by jointly encoding messages as they arrive while decoding the messages separately, allowing for heterogeneous decoding deadlines. For a scheme based on power sharing, we analyze the probability of error in the finite-blocklength regime. We show that significant performance improvements can be obtained for short packets by using our scheme instead of standard approaches based on time sharing.

Superposition coding (SC) has been known to be capacity-achieving for the Gaussian memoryless broadcast channel for more than 30 years. However, SC regained interest in the context of non-orthogonal multiple access (NOMA) in 5G. From an information theory point of view, SC is capacity-achieving in the Gaussian broadcast channel, even when the number of users tends to infinity. But using SC has two drawbacks: decoder complexity increases drastically with the number of simultaneous receivers, and the latency is unbounded since SC is optimal only in the asymptotic regime. To evaluate these effects quantitatively in terms of fundamental limits, we introduce a finite-time transmission constraint imposed at the base station and we evaluate the fundamental trade-offs between the maximal number of superposed users, the coding blocklength and the block error probability. The energy efficiency loss due to these constraints is evaluated analytically and by simulation. Orthogonal sharing appears to outperform SC under hard delay constraints (equivalent to short blocklengths) and in the low spectral efficiency regime (below one bit per channel use). These results are obtained by combining stochastic geometry and finite-blocklength information theory.
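The asymptotic advantage of SC that this finite-blocklength analysis tempers can be illustrated with the classical two-user Gaussian broadcast channel rate formulas (a standard textbook computation, not the paper's finite-blocklength analysis; power and noise values are illustrative):

```python
from math import log2

def sc_rates(P, alpha, N1, N2):
    """Superposition coding rates for a two-user Gaussian broadcast channel
    (N1 <= N2: user 1 is the strong user and cancels user 2's signal)."""
    R1 = 0.5 * log2(1 + alpha * P / N1)
    R2 = 0.5 * log2(1 + (1 - alpha) * P / (alpha * P + N2))
    return R1, R2

def time_sharing_rates(P, tau, N1, N2):
    """Orthogonal sharing: user 1 gets a fraction tau of the channel uses."""
    R1 = tau * 0.5 * log2(1 + P / N1)
    R2 = (1 - tau) * 0.5 * log2(1 + P / N2)
    return R1, R2

P, N1, N2 = 10.0, 1.0, 4.0
r1_sc, r2_sc = sc_rates(P, 0.3, N1, N2)
# Pick tau so time sharing gives user 1 the same rate, then compare user 2:
tau = r1_sc / (0.5 * log2(1 + P / N1))
_, r2_ts = time_sharing_rates(P, tau, N1, N2)
print(round(r2_sc, 3), round(r2_ts, 3))
```

At equal rate for the strong user, SC leaves the weak user a strictly higher rate than orthogonal sharing; the result summarized above shows that this ordering can reverse under hard delay constraints and in the low spectral efficiency regime.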

We analyse the multiplexing gain (MG) achievable over Wyner’s symmetric network with random user activity and random arrival of mixed-delay traffic. The mixed-delay traffic is composed of delay-tolerant traffic and delay-sensitive traffic where only the former can benefit from transmitter and receiver cooperation since the latter is subject to stringent decoding delays. The total number of cooperation rounds at transmitter and receiver sides is limited to D rounds. We derive inner and outer bounds on the MG region. In the limit as

Machine learning (ML) is starting to be widely used to enhance the performance of wireless transmissions. However, it is still unclear whether such methods are truly competitive with conventional methods in realistic scenarios and under practical constraints. These constraints include adaptability and complexity, but also signal processing constraints such as the PAPR (Peak-to-Average Power Ratio) or the ACLR (Adjacent Channel Leakage Ratio). Our studies are described below:

An attractive research direction for future communication systems is the design of new waveforms that can both support high throughputs and present advantageous signal characteristics. Although most modern systems use orthogonal frequency-division multiplexing (OFDM) for its efficient equalization, this waveform suffers from multiple limitations such as a high adjacent channel leakage ratio (ACLR) and a high peak-to-average power ratio (PAPR). In this work, we propose a learning-based method to design OFDM-based waveforms that satisfy selected constraints while maximizing an achievable information rate. To that aim, we model the transmitter and the receiver as convolutional neural networks (CNNs) that respectively implement a high-dimensional modulation scheme and perform the detection of the transmitted bits. This leads to an optimization problem that is solved using the augmented Lagrangian method. Evaluation results show that the end-to-end system is able to satisfy target PAPR and ACLR constraints and allows significant throughput gains compared to a tone reservation (TR) baseline. An additional advantage is that no dedicated pilots are needed.
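The PAPR metric that this waveform-learning work constrains is straightforward to compute. The sketch below (ours, stdlib-only, with a naive DFT standing in for an FFT) measures it for one random QPSK-modulated OFDM symbol:

```python
import cmath
import random
from math import log10

random.seed(2)

def idft(X):
    """Naive inverse DFT (a stdlib-only stand-in for an IFFT)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = [abs(v) ** 2 for v in x]
    return 10 * log10(max(p) / (sum(p) / len(p)))

# One OFDM symbol: 64 subcarriers carrying random QPSK constellation points.
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
X = [random.choice(qpsk) for _ in range(64)]
print(round(papr_db(idft(X)), 2))   # typically several dB above 0 dB
```

High PAPR forces the power amplifier to back off from its efficient operating point, which is why the learned waveforms above trade some rate to keep this quantity below a target.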

In addition to enabling accurate signal reconstruction on realistic channel models, MU-MIMO receive algorithms must allow for easy adaptation to a varying number of users without the need for retraining. In contrast to existing work, we propose an ML-enhanced MU-MIMO receiver that builds on top of a conventional linear minimum mean squared error (LMMSE) architecture. It preserves the interpretability and scalability of the LMMSE receiver, while improving its accuracy in two ways. First, convolutional neural networks (CNNs) are used to compute an approximation of the second-order statistics of the channel estimation error which are required for accurate equalization. Second, a CNN-based demapper jointly processes a large number of orthogonal frequency-division multiplexing (OFDM) symbols and subcarriers, which allows it to compute better log likelihood ratios (LLRs) by compensating for channel aging. The resulting architecture can be used in the up- and downlink and is trained in an end-to-end manner, removing the need for hard-to-get perfect channel state information (CSI) during the training phase. Simulation results demonstrate consistent performance improvements over the baseline which are especially pronounced in high mobility scenarios , .
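A minimal numpy sketch of the conventional LMMSE equalizer that this architecture builds on (a toy 8x4 uplink channel with hypothetical parameters, not the CNN-augmented version):

```python
import numpy as np

rng = np.random.default_rng(1)

def lmmse_equalize(H, y, noise_var):
    """Classical LMMSE estimate x_hat = (H^H H + sigma^2 I)^{-1} H^H y."""
    n_tx = H.shape[1]
    A = H.conj().T @ H + noise_var * np.eye(n_tx)
    return np.linalg.solve(A, H.conj().T @ y)

# Toy MU-MIMO uplink: 4 single-antenna users, 8 base-station antennas
H = (rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))) / np.sqrt(2)
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), size=4)
noise_var = 0.01
y = H @ x + np.sqrt(noise_var / 2) * (rng.normal(size=8) + 1j * rng.normal(size=8))
x_hat = lmmse_equalize(H, y, noise_var)
print(np.max(np.abs(x_hat - x)))  # small residual error at high SNR
```

In the paper's architecture, the CNNs refine the second-order statistics fed to this linear stage rather than replacing it, which is what preserves interpretability and scalability.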

The user clustering problem in an uplink MIMO Non-Orthogonal Multiple Access (NOMA) scheme is considered here. The receiver is assumed to operate in two sequential stages that employ Linear Minimum Mean Squared Error (LMMSE) receivers. At the first stage, the receiver is designed to recover the transmission from a cluster of selected users/nodes. The contribution of these users is then subtracted from the received signal and the remaining user transmissions are then linearly recovered. The determination of which users should be detected during the first stage is formulated as a deep-learning-based multiple classification problem. In order to guarantee that the selection is robust to fast fading, the input to the neural network is based on second-order channel statistics. Furthermore, the training process is simplified by using a large-system approximation of the resulting sum-rates. Simulation results indicate that the proposed deep-learning-based solution is able to achieve a significant rate advantage with respect to other lazy approaches, such as fixed or random cluster assignments.
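The two-stage detect-and-subtract structure can be sketched as follows on a BPSK toy example; the channel-norm cluster selection below is a hypothetical stand-in for the paper's learned classifier:

```python
import numpy as np

rng = np.random.default_rng(2)

def lmmse(H, y, nv):
    return np.linalg.solve(H.conj().T @ H + nv * np.eye(H.shape[1]), H.conj().T @ y)

# 4 BPSK users, 6 receive antennas (hypothetical toy dimensions)
H = (rng.normal(size=(6, 4)) + 1j * rng.normal(size=(6, 4))) / np.sqrt(2)
x = rng.choice([1.0, -1.0], size=4) + 0j
nv = 0.01
y = H @ x + np.sqrt(nv / 2) * (rng.normal(size=6) + 1j * rng.normal(size=6))

norms = np.linalg.norm(H, axis=0)
first = np.argsort(norms)[-2:]            # stage-1 cluster: two strongest users
rest = np.setdiff1d(np.arange(4), first)  # stage-2: remaining users

x1 = np.sign(lmmse(H, y, nv)[first].real)     # stage 1: joint LMMSE, hard decisions
y2 = y - H[:, first] @ x1                     # subtract stage-1 contribution
x2 = np.sign(lmmse(H[:, rest], y2, nv).real)  # stage 2: LMMSE on residual signal
print(np.array_equal(x1, x[first].real), np.array_equal(x2, x[rest].real))
```

The paper replaces the greedy norm-based selection with a neural classifier trained on second-order channel statistics, which is what makes the clustering robust to fast fading.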

The multiple access channel (MAC) is the theoretical basis of NOMA, a key technology for future cellular communications that is expected to increase the number of devices served with a limited spectrum and to facilitate low-latency communications. Yet the non-orthogonal operation leads to challenging interference conditions that require dedicated solutions, and this is particularly true for the constellations used to encode transmitted messages. The work developed in the last chapter of Cyrille Morin's PhD uses deep learning to learn constellations tailored to the two-user Gaussian MAC, allowing for better performance than previously designed constellations, or the traditional orthogonal approach, stepping beyond the time-division multiple access (TDMA) capacity region. In the process, it also showcases the importance of performance comparison between orthogonal and non-orthogonal approaches, and of studying the trade-offs between the two users, instead of focusing on aggregated metrics.
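The gap between the MAC sum capacity and plain TDMA can be illustrated numerically (symmetric hypothetical powers, TDMA under a per-user power constraint without power bursting):

```python
import numpy as np

def c(snr):
    """Gaussian capacity in bits per (real) channel use."""
    return 0.5 * np.log2(1 + snr)

P1, P2, N = 1.0, 1.0, 0.1
# Corner point of the MAC capacity region: decode user 2 first
# (treating user 1 as noise), then user 1 interference-free
r2 = c(P2 / (N + P1))
r1 = c(P1 / N)
sum_mac = r1 + r2                           # equals c((P1 + P2) / N)
# TDMA with equal time sharing and the same per-user powers
sum_tdma = 0.5 * c(P1 / N) + 0.5 * c(P2 / N)
print(round(sum_mac, 3), round(sum_tdma, 3))  # MAC sum rate ~2.196 > TDMA ~1.730
```

Learned constellations for the two-user MAC aim at operating near such corner points, which orthogonal signaling cannot reach without power bursting.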

A classical setup in NOMA massive access for IoT is that the nodes may request to transmit a packet at any time. A pure random access may lead to collisions or interference, which can be taken into account by developing robust detection, distributed optimization strategies or efficient distributed coding . Further, reducing signaling is of primary interest since, in IoT, the information to be sent may be small. In this general configuration we contributed to five studies, as described below. Note that this setup is also the reference scenario chosen to explore the use of quantum algorithms (see section )

In this work, we develop algorithms for sensor identification and channel estimation in narrowband communication systems, which is a necessary step in a standard NB-IoT protocol, for instance. In our model we integrate a fault probability, which depends on physical variables and may be statistically correlated. The first step is to introduce a statistical model relating observations at the access point to the channel, the activity of each sensor, and the probability that each machine is faulty. Based on our new model, we derive an identification and channel estimation algorithm by first developing a loopy belief propagation (LBP) algorithm for the model, and then applying generalized approximate message passing (GAMP) for the variables associated with the communication channel. A key feature of the algorithm is that it explicitly accounts for uncertainty and correlation in the probability that sensors are active, as opposed to existing approaches where the activity probability is fixed and sensor transmissions are uncorrelated. In addition, our model accounts for the impact of physical variables (such as temperature) on the probability of a fault. We model the probability of a fault conditioned on temperature observations at the access point via the beta distribution, a highly flexible family of models. As such, we call the algorithm β-HGAMP. Numerical results demonstrate that β-HGAMP outperforms existing algorithms based on GAMP and GS-HGAMP .
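An illustrative hierarchical prior in the spirit of the above (not the exact β-HGAMP model; the link between temperature and the beta parameters below is hypothetical), where each sensor's fault probability is beta-distributed with parameters driven by an observed temperature:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_faults(temps, a0=2.0, b0=8.0, slope=0.3):
    """Sample per-sensor fault indicators from a temperature-dependent beta prior."""
    # Hypothetical link function: hotter sensors are assumed more fault-prone
    a = a0 + slope * np.maximum(temps - 25.0, 0.0)
    p_fault = rng.beta(a, b0)                 # one beta draw per sensor
    faulty = rng.random(len(temps)) < p_fault
    return faulty, p_fault

temps = np.array([20.0, 25.0, 40.0, 60.0])
faulty, p = sample_faults(temps)
print(p.round(2))
```

An inference algorithm such as β-HGAMP then inverts this kind of generative model: it uses the temperature observations to sharpen the posterior on sensor activity while jointly estimating the channels.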

For a range of scenarios arising in sensor networks, control and edge computing, communication is event-triggered; that is, triggered in response to the environment of the communicating devices. A key feature of device activity in this setting is correlation, which is particularly relevant for the sensing of physical phenomena such as earthquakes or flooding. Such correlation introduces a new challenge in the design of resource allocation and scheduling for random access aiming to maximize the throughput or the expected sum-rate, since these objectives do not admit a closed-form expression. In this work, we develop stochastic resource optimization algorithms to design a random access scheme that provably converge with probability one to locally optimal solutions of the throughput and the sum-rate. A key feature of the stochastic optimization algorithm is that the number of parameters that need to be estimated grows at most linearly in the number of devices. We show via simulations that our algorithms outperform existing approaches in terms of the expected sum-rate .
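As a simplified illustration of the optimization target (deterministic gradient ascent on a slotted-ALOHA collision channel with independent activity, standing in for the paper's stochastic algorithm under correlated activity):

```python
import numpy as np

def throughput(p):
    """Expected throughput of slotted ALOHA: sum_i p_i * prod_{j!=i} (1 - p_j)."""
    q = 1.0 - p
    prod = np.prod(q)
    return np.sum(p * prod / q)

def grad(p):
    """Exact gradient of the throughput w.r.t. the transmit probabilities."""
    q = 1.0 - p
    prod = np.prod(q)
    g = np.empty_like(p)
    for i in range(len(p)):
        g[i] = prod / q[i] - sum(p[j] * prod / (q[j] * q[i])
                                 for j in range(len(p)) if j != i)
    return g

p = np.full(4, 0.5)                     # 4 devices, initial transmit probability 1/2
for _ in range(2000):
    p = np.clip(p + 0.01 * grad(p), 1e-3, 1 - 1e-3)
print(p.round(3), round(throughput(p), 3))  # converges to p = 1/4, throughput ~0.422
```

In the event-triggered setting the expectation has no closed form because of the activity correlation, so the paper replaces this exact gradient with stochastic estimates while keeping the parameter count linear in the number of devices.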

This scenario is not exactly grant-free access since a signaling channel exists. However, the protocol is not known in advance and we let the nodes learn it through a learning phase based on decentralized reinforcement learning. We thus propose a new framework, exploiting the multi-agent deep deterministic policy gradient (MADDPG) algorithm, to enable a base station (BS) and user equipment (UE) to come up with a medium access control (MAC) protocol in a multiple access scenario. In this framework, the BS and UEs are reinforcement learning (RL) agents that need to learn to cooperate in order to deliver data. The network nodes can exchange control messages to collaborate and deliver data across the network, but without any prior agreement on the meaning of the control messages. In such a framework, the agents have to learn not only the channel access policy, but also the signaling policy. The collaboration between agents is shown to be important, by comparing the proposed algorithm to ablated versions where either the communication between agents or the central critic is removed. The comparison with a contention-free baseline shows that our framework achieves a superior performance in terms of goodput and can effectively be used to learn a new protocol .

In axis 1, we studied the capacity of channels with additive impulsive noise. In axis 3, we conducted experiments showing evidence that this model may be relevant in random access networks. Here, we developed a new receiver adapted to the characteristics of such channels. We focus on an unsupervised estimation of an approximation of the log-likelihood ratio in the finite block length regime with unknown noise distribution. We first analyze which conditions lead to estimation failure, derive an analytical tool to assess the failure probability, and then propose to jointly use two mechanisms that significantly reduce the estimation errors. Our estimation is shown to be efficient and the proposed receiver exhibits near-optimal performance under various types of noisy environments, ranging from very impulsive to Gaussian. We also show that short LDPC codes do not achieve the performance bound predicted by the finite block length analysis. This advocates for further investigations in short block length channel coding research .
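The behavior of the exact LLR under impulsive noise can be illustrated with an ε-mixture noise model (hypothetical parameters): unlike the Gaussian LLR 2y/σ², which grows without bound in |y|, the mixture LLR saturates on large outliers, which is what a robust receiver must reproduce.

```python
import numpy as np

def gauss(y, var):
    return np.exp(-y * y / (2 * var)) / np.sqrt(2 * np.pi * var)

def llr_mixture(y, eps=0.1, v0=0.1, v1=10.0):
    """Exact BPSK LLR when the noise is (1-eps) N(0, v0) + eps N(0, v1)."""
    pdf = lambda z: (1 - eps) * gauss(z, v0) + eps * gauss(z, v1)
    return np.log(pdf(y - 1.0) / pdf(y + 1.0))

y = np.array([-2.0, -0.5, 0.5, 2.0, 8.0])
print(llr_mixture(y).round(2))
# Moderate observations (y = 0.5) are more reliable than large ones (y = 2 or 8),
# since a large amplitude is most likely an impulse rather than signal evidence.
```

The unsupervised receiver above estimates an approximation of this function directly from the received block, without knowing eps, v0 or v1.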

The work described in this section is complementary to the work described in section and gives the scientific rationale of the software developed in Maracas.

CorteXlab provides a unique worldwide testbed with free access to develop and test new radio waveforms and MAC protocols without any standards-related limitation; the hardware is the only limitation, since beyond I/Q signal processing everything is software. Our objective is to develop collaborations to position CorteXlab as an essential platform for reproducible research, especially with the growing interest in machine learning based approaches.

The contributions of the year are:

We developed a complete, dynamic and customizable GNU Radio physical (PHY) layer for a long range (LoRa) transceiver, usable with the FIT/CorteXlab radio testbed and derived from the original EPFL LoRa implementation. The resulting adaptation, through a standardized interface, allows end-users to easily connect an external medium access control (MAC)/upper layer to experiment with scenarios in a fully reproducible and isolated environment. It also provides several PHY layer key performance indicators and metrics such as the signal to noise ratio (SNR), the received signal energy, the bit error rate (BER) and others, that can be used to gauge the performance of the ongoing communications as well as to construct MAC layers able to use this information. Finally, the interface allows our plug&play PHY solution to be used with any existing or newly adapted MAC layer, without having to implement it in GNU Radio . A new software package (Cortexlab_LORA_PHY) has been built.

With the internship of Mohamad Gritli (supervised by M. Egan and D. Gesbert), we evaluated the transmission pattern in uncoordinated IoT networks on CorteXlab.

Despite the interest of CorteXlab, it is also very useful to run tests in real environments. We conduct such work in collaboration with Aalborg University and IMT Lille-Douai . In IoT, as not all devices are coordinated, there are limited opportunities to mitigate interference. As such, it is crucial to characterize the interference in order to understand its impact on coding, waveform and receiver design. While a number of theoretical models have been developed for the interference statistics in IoT communications, there is very little experimental validation. In this work, we addressed this key gap by performing a statistical analysis of recent measurements in the unlicensed 863 MHz to 870 MHz band in different regions of Aalborg, Denmark. In particular, we show that the measurement data suggest that the distribution of the interference power is heavy-tailed, confirming predictions from theoretical models and supporting our working assumptions (see related contributions in axes 1 and 2).
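A standard way to check such a heavy-tail hypothesis is a tail-index estimate. The sketch below applies the classical Hill estimator to synthetic Pareto samples standing in for measured interference powers (the measurement data themselves are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)

def hill_estimator(samples, k):
    """Hill estimator of the tail index alpha from the k largest samples."""
    x = np.sort(samples)[-k:]          # k largest values; x[0] is the threshold
    return k / np.sum(np.log(x / x[0]))

# Pareto(alpha = 1.5) interference powers via inverse-CDF sampling
alpha = 1.5
powers = (1.0 / rng.random(100_000)) ** (1.0 / alpha)
print(round(hill_estimator(powers, 2000), 2))  # close to the true alpha = 1.5
```

A tail index well below 2 on the measured powers indicates an infinite-variance, heavy-tailed interference distribution, consistent with the impulsive noise models studied in axes 1 and 2.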

In this open-minded research area, we developed two research axes. Both of them have been funded by exploratory research actions from Inria.

This research was partly funded by an exploratory action that ended in 2020, which funded the postdoc of Bayram Akdeniz. The publications of the year extended the former work as follows:

Chemical reactions and diffusion are two basic mechanisms governing the dynamics of molecules in a fluid. As such, they play a critical role in molecular communication for channel modeling, design of detection rules, implementation of molecular circuits for computation, and modeling interactions with external biochemical systems. For finite numbers of information-carrying molecules, stochastic models naturally arise, with the simplest example given by the Wiener process, often known as Brownian motion. Nevertheless, the Wiener process fails to be accurate when external forces, friction, and chemical reactions are present. Recently, there have been several contributions that tailor molecular communication systems to these more challenging channel conditions. In this paper, we first overview a general family of stochastic models of reaction and diffusion systems, including both Langevin diffusion and the reaction-diffusion master equation. These models form a basis for molecular communication channels, from which modulation and detection schemes can be developed. We survey recent results on the design of these schemes, with a focus on a recently developed approach which is robust to a wide range of channel models, known as equilibrium signaling. We then turn to the implementation of these detection schemes and related parameter estimation problems via stochastic molecular circuits, based on stochastic chemical reaction networks. Finally, interactions between molecular communication systems and stochastic biological systems as well as open problems are discussed. Our overarching goal is to highlight how the consideration of general stochastic models of reaction and diffusion can be utilized in order to widen the application of molecular communications both within engineered systems, and also as motivation for advances in the mathematical characterization of these models.

A basic problem in molecular biology is to estimate equilibrium states of biochemical processes. To this end, advanced spectroscopy methods have been developed in order to estimate chemical concentrations in situ or in vivo. However, such spectroscopy methods can require special conditions that do not allow direct observation of the biochemical process. A natural means of resolving this problem is to transmit chemical signals to another location within a lab-on-a-chip device; that is, employing molecular communication in order to perform spectroscopy in a different location. In this paper, we develop such a signaling strategy and estimation algorithms for equilibrium states of a biochemical process. In two biologically-inspired models, we then study via simulation the tradeoff between the rate of obtaining spectroscopy measurements and the estimation error, providing insights into requirements of spectroscopy devices for high throughput biological assays .
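The estimation task can be illustrated on a toy reversible reaction A ⇌ B, where the equilibrium constant K = [B]/[A] is estimated from noisy concentration readings (hypothetical values, not the paper's biologically-inspired models):

```python
import numpy as np

rng = np.random.default_rng(6)

# Equilibrium of A <-> B with conserved total concentration:
# [A] = total / (1 + K), [B] = total * K / (1 + K)
K_true, total = 4.0, 1.0
a_eq = total / (1 + K_true)
b_eq = total * K_true / (1 + K_true)

# Noisy concentration readings received over the molecular channel
a_obs = a_eq + 0.01 * rng.normal(size=200)
b_obs = b_eq + 0.01 * rng.normal(size=200)

K_hat = b_obs.mean() / a_obs.mean()   # simple moment estimate of K
print(round(K_hat, 2))                 # close to K_true = 4.0
```

The rate/error tradeoff studied in the paper corresponds to varying the number of readings (here 200) against the residual error of such estimates.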

Complex fluid media, where molecules are susceptible to forces due, for example, to external magnetic fields, complicate the design of molecular communication systems. In particular, the equations governing the motion of each molecule in time do not typically admit tractable solutions, which makes receiver design challenging for standard communication schemes; e.g., based on concentration shift keying. In this paper, a new communication scheme is proposed, which leads to simple expressions for receiver statistics, even when spatially inhomogeneous diffusion and external forces are present. The proposed scheme exploits the equilibrium statistics of the system, which arise in a wide range of scenarios. This approach is illustrated in a bounded system with inhomogeneous diffusion and external forces determined by a quadratic potential .
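A minimal simulation of Langevin dynamics in a quadratic potential (an Ornstein-Uhlenbeck process, with hypothetical parameters) illustrates the tractable equilibrium statistics that equilibrium signaling exploits:

```python
import numpy as np

rng = np.random.default_rng(5)

# Euler-Maruyama simulation of overdamped Langevin dynamics in a quadratic
# potential U(x) = k x^2 / 2:  dx = -k x dt + sqrt(2 D) dW.
# The equilibrium distribution is Gaussian with variance D / k, giving the
# simple receiver statistics that equilibrium signaling relies on.
k, D, dt = 2.0, 1.0, 1e-3
n_steps, n_molecules = 10_000, 2_000
x = np.zeros(n_molecules)
for _ in range(n_steps):
    x += -k * x * dt + np.sqrt(2 * D * dt) * rng.normal(size=n_molecules)
print(round(x.var(), 2))  # close to D / k = 0.5
```

Even when the drift and diffusion coefficients vary in space, the equilibrium density remains available in closed form for a broad class of potentials, which is what keeps the detector simple.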

The field of quantum information science may revolutionize numerous information processing applications. Indeed, quantum information may lead to fast algorithms for complex computational problems and may help to design new communication protocols. Our research direction is to evaluate the potential of quantum information in the core of wireless communication protocols.

We started by exploring the use of quantum algorithms for the grant-free massive IoT access scenario explored in axis 2.

To support multiple transmissions in an optical fiber, several techniques have been studied, such as Optical Code Division Multiple Access (OCDMA). In particular, incoherent OCDMA systems are appreciated for their simplicity and reduced cost. However, they suffer from Multiple Access Interference (MAI), which degrades performance. In order to cope with this MAI, several detectors have been studied. Among them, the Maximum Likelihood (ML) detector is the optimal one, but it suffers from high complexity as all possibilities have to be tested prior to decision. However, thanks to recent quantum computing advances, the complexity problem can be circumvented. Indeed, quantum algorithms, such as Grover's, exploit superposition states in the quantum domain to accelerate the computation. Thus, in this paper, we propose to adapt the quantum Grover's algorithm to multi-user detection (MUD) in an OCDMA system using non-orthogonal codes. We propose a way to adapt the received noisy signal to the constraints defined by Grover's algorithm. We further evaluate the probability of success in detecting the active users for different noise levels. Aside from the complexity reduction, simulations show that our proposal has a high probability of detection when the received signal is not highly altered. We show the benefits of our proposal compared to the classical and the optimal ML detector .
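The amplitude-amplification mechanism underlying Grover's algorithm can be sketched with a small statevector simulation (a toy search space with one marked state, not the OCDMA detector itself):

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Statevector simulation of Grover's algorithm with one marked basis state."""
    N = 2 ** n_qubits
    state = np.full(N, 1 / np.sqrt(N))           # uniform superposition
    n_iter = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(n_iter):
        state[marked] *= -1                      # oracle: phase flip on the target
        state = 2 * state.mean() - state         # diffusion: inversion about the mean
    return np.abs(state) ** 2                    # measurement probabilities

probs = grover_search(4, marked=5)
print(round(probs[5], 3))  # success probability close to 1 after ~3 iterations
```

In the MUD adaptation, the oracle marks the candidate user-activity patterns consistent with the received noisy signal, so that the quadratic speedup of Grover's search replaces the exhaustive ML enumeration.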

A key problem for many industrial processes is to limit exposure to system malfunction. However, it is often the case that control cost minimization is prioritized over model identification. Indeed, model identification is typically not considered in production optimization, which can lead to delayed awareness and alerting of malfunction. In this paper, we address the problem of simultaneous production optimization and system identification. We develop new algorithms based on modifier adaptation and reinforcement learning, which efficiently manage the tradeoff between cost minimization and identification. For two case studies based on a chemical reactor and subsea oil and gas exploration, we show that our algorithms yield control costs comparable to existing methods while providing rapid identification of system degradation .