It is forecast that the vast majority of Internet connections will be wireless. The EVA project grasps this opportunity and focuses on wireless communication. EVA tackles challenges related to providing efficient communication in wireless networks and, more generally, in all networks that are not already organized when set up, and consequently need to evolve and spontaneously find a match between application requirements and the environment. These networks can use opportunistic and/or collaborative communication schemes. They can evolve through optimization and self-learning techniques. Every effort is made to ensure that the results provided by EVA have the greatest possible impact, for example through standardization and other transfer activities. The miniaturization and ubiquitous nature of computing devices have opened the way to the deployment of a new generation of wireless (sensor) networks. These networks are central to the work in EVA, as EVA focuses on such crucial issues as power conservation, connectivity, determinism, reliability and latency. Wireless Sensor Network (WSN) deployment is also a new key subject, especially for emergency situations (e.g. after a disaster). Industrial process automation and environmental monitoring are considered in greater depth.
Designing Tomorrow's Internet of (Important) Things
Inria-EVA is a leading research team in low-power wireless communications. The team pushes the limits of low-power wireless mesh networking by applying them to critical applications such as industrial control loops, with harsh reliability, scalability, security and energy constraints. Grounded in real-world use cases and experimentation, EVA co-chairs the IETF 6TiSCH and LAKE standardization working groups, co-leads Berkeley's OpenWSN project and works extensively with Analog Devices' SmartMesh IP networks. Inria-EVA is the birthplace of the Wattson Elements startup and the Falco solution. The team is associated with Prof. Pister's team at UC Berkeley through the SWARM associate research team.
We study how advanced physical layers can be used in low-power wireless networks. For instance, collaborative techniques such as multiple antennas (e.g. Massive MIMO technology) can improve communication efficiency. The core idea is to use massive network densification by drastically increasing the number of sensors in a given area in a Time Division Duplex (TDD) mode with time reversal. The first period allows the sensors to estimate the channel state and, after time reversal, the second period is to transmit the data sensed. Other techniques, such as interference cancellation, are also possible.
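The focusing effect of time reversal can be sketched in a few lines (an illustrative toy model, not the team's implementation): pre-filtering the transmission with the conjugate time-reversed channel estimate obtained during the first TDD period makes the multipath energy refocus into a single tap at the receiver.

```python
import numpy as np

# Toy model of time-reversal precoding over a multipath channel.
# In the first TDD period the sensor sends a pilot so the channel impulse
# response h can be estimated; data is then pre-filtered with the
# time-reversed conjugate of h.
rng = np.random.default_rng(0)
h = rng.normal(size=8) + 1j * rng.normal(size=8)   # estimated channel taps

precoder = np.conj(h[::-1])                        # time-reversed conjugate
effective = np.convolve(h, precoder)               # channel seen by the data

# The effective response peaks sharply at its center tap: this "focusing"
# is what keeps receivers simple even in dense deployments.
peak = int(np.argmax(np.abs(effective)))
```

The peak lands at the center of the full convolution, i.e. index `len(h) - 1`, because the precoded channel is the channel's autocorrelation.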
Medium sharing in wireless systems has received substantial attention throughout the last decade. The Inria team HiPERCOM2 has provided models to compare TDMA and CSMA. HiPERCOM2 has also studied how network nodes must be positioned to optimize the global throughput.
EVA pursues modeling tasks to compare access protocols, including multi-carrier access, adaptive CSMA (particularly in VANETs), as well as directional and multiple antennas. There is a strong need for determinism in industrial networks. The EVA team focuses particularly on scheduled medium access in the context of deterministic industrial networks; this involves optimizing the joint time slot and channel assignment. Distributed approaches are considered, and the EVA team determines their limits in terms of reliability, latency and throughput. Furthermore, adaptivity to application or environment changes is taken into account.
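To make the joint assignment problem concrete, here is a minimal greedy heuristic (a hypothetical sketch, not the team's algorithm): two links may share a time slot only if they have no node in common, and links sharing a slot are separated by channel offset.

```python
# Greedy joint time-slot/channel assignment sketch.
# links: list of (tx, rx) pairs; returns {link: (slot, channel_offset)}.
def schedule(links, num_channels):
    assignment = {}
    slots = []  # slots[s] = list of links already placed in slot s
    for link in links:
        placed = False
        for s, cell in enumerate(slots):
            busy_nodes = {n for l in cell for n in l}
            if set(link) & busy_nodes or len(cell) >= num_channels:
                continue  # node conflict, or all channels of this slot taken
            assignment[link] = (s, len(cell))  # next free channel offset
            cell.append(link)
            placed = True
            break
        if not placed:               # open a new time slot
            slots.append([link])
            assignment[link] = (len(slots) - 1, 0)
    return assignment

# A-B and C-D share slot 0 on different channels; B-E and D-E conflict
# with them (shared nodes) and get later slots.
example = schedule([("A", "B"), ("C", "D"), ("B", "E"), ("D", "E")],
                   num_channels=2)
```

Optimizers replace this greedy placement with search or distributed negotiation, but the conflict constraints remain the same.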
Wireless technologies such as cellular, low-power mesh networks, (Low-Power) WiFi, and Bluetooth (low-energy) can reasonably claim to fit the requirements of the IoT. Each, however, uses different trade-offs between reliability, energy consumption and throughput. The EVA team studies the limits of each technology, and will develop clear criteria to evaluate which technology is best suited to a particular set of constraints.
Coexistence between these different technologies (or different deployments of the same technology in a common radio space) is a valid point of concern.
The EVA team aims at studying such coexistence, and, where necessary, propose techniques to improve it. Where applicable, the techniques will be put forward for standardization. Multiple technologies can also function in a symbiotic way.
For example, to improve the quality of experience provided to end users, a wireless mesh network can transport sensor and actuator data in place of a cellular network, when and where cellular connectivity is poor.
The EVA team studies how and when different technologies can complement one another. A specific example of a collaborative approach is Cognitive Radio Sensor Networks (CRSN).
Reducing the energy consumption of low-power wireless devices remains a challenging task. The overall energy budget of a system can be reduced by using less power-hungry chips, and significant research is being done in that direction. That being said, power consumption is mostly influenced by the algorithms and protocols used in low-power wireless devices, since they influence the duty-cycle of the radio.
EVA will search for energy-efficient mechanisms in low-power wireless networks. One new requirement concerns the ability to predict energy consumption with a high degree of accuracy. Scheduled communication, such as that used in the IEEE 802.15.4 TSCH (Time Slotted Channel Hopping) standard and by IETF 6TiSCH, allows for a very accurate prediction of the energy consumption of a chip. Power conservation is a key issue in EVA.
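Because a TSCH schedule fixes what the radio does in every slot, the charge drawn per slotframe reduces to simple accounting. The sketch below illustrates this with assumed per-slot charge figures (illustrative numbers, not measurements of a specific chip):

```python
# Back-of-the-envelope TSCH charge budget: charge per slotframe is the sum
# of per-slot charges, weighted by how many slots of each type the schedule
# contains. The numbers below are assumptions for illustration.
CHARGE_UC = {                 # charge per slot type, in microcoulombs
    "tx_data_rx_ack": 55.0,
    "rx_data_tx_ack": 60.0,
    "idle_listen":    45.0,
    "sleep":           0.3,
}

def slotframe_charge_uc(sched):
    """sched: mapping slot type -> number of slots per slotframe."""
    return sum(CHARGE_UC[kind] * count for kind, count in sched.items())

def lifetime_days(sched, slotframe_s, battery_mah):
    per_frame_uc = slotframe_charge_uc(sched)
    avg_current_ua = per_frame_uc / slotframe_s   # uC per second == uA
    return (battery_mah * 1000.0) / avg_current_ua / 24.0

# A 101-slot slotframe (10 ms slots): 1 TX, 2 RX, 4 idle listens, rest asleep.
sched = {"tx_data_rx_ack": 1, "rx_data_tx_ack": 2,
         "idle_listen": 4, "sleep": 94}
days = lifetime_days(sched, slotframe_s=1.01, battery_mah=2200)
```

The same accounting, run with the real per-slot charges of a given chip, is what makes battery lifetime predictable in scheduled networks.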
To tackle this issue and match link-layer resources to application needs, EVA's 5-year research program dealing with Energy-Efficiency and Determinism centers around 3 studies:
Since sensor networks are very often built to monitor geographical areas, sensor deployment is a key issue. The deployment of the network must ensure full/partial, permanent/intermittent coverage and connectivity. This technical issue leads to geometrical problems which are unusual in the networking domain.
We can identify two scenarios. In the first one, sensors are deployed over a given area to guarantee full coverage and connectivity, while minimizing the number of sensor nodes. In the second one, a network is re-deployed to improve its performance, possibly by increasing the number of points of interest covered, and by ensuring connectivity. EVA investigates these two scenarios, as well as centralized and distributed approaches. The work starts with simple 2D models and is enriched to take into account more realistic environments: obstacles, walls, 3D, fading.
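The geometric predicate underneath these deployment problems can be sketched simply (2D disc sensing model, no obstacles; a toy version of the simple models the work starts from): sample the area on a grid and check that every sample point is in range of at least one sensor.

```python
import math

# Grid-sampled full-coverage check for a 2D area with a disc sensing model.
def fully_covered(sensors, radius, width, height, samples=20):
    for i in range(samples + 1):
        for j in range(samples + 1):
            p = (width * i / samples, height * j / samples)
            if all(math.dist(p, s) > radius for s in sensors):
                return False   # found a sample point no sensor covers
    return True

# One sensor in the middle of a 10 x 10 area: range 8 reaches the corners
# (distance ~7.07), range 5 does not.
print(fully_covered([(5.0, 5.0)], radius=8.0, width=10, height=10))  # True
print(fully_covered([(5.0, 5.0)], radius=5.0, width=10, height=10))  # False
```

Deployment algorithms then minimize the number of sensors subject to this predicate (and to connectivity), before obstacles, 3D and fading enter the model.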
A large number of WSN applications mostly do data gathering (a.k.a. “convergecast”). These applications usually require the data to reach the gateway node with small delays and with time consistency across the gathered data. This time consistency is usually achieved by a short gathering period.
In many real WSN deployments, the channel used by the WSN usually encounters perturbations such as jamming, external interferences or noise caused by external sources (e.g. a polluting source such as a radar) or other coexisting wireless networks (e.g. WiFi, Bluetooth). Commercial sensor nodes can communicate on multiple frequencies as specified in the IEEE 802.15.4 standard. This reality has given birth to the multichannel communication paradigm in WSNs.
Multichannel WSNs significantly expand the capability of single-channel WSNs by allowing parallel transmissions, and avoiding congestion on channels or performance degradation caused by interfering devices.
In EVA, we focus on raw data convergecast in multichannel low-power wireless networks. In this context, we are interested in centralized/distributed algorithms that jointly optimize the channel and time slot assignment used in a data gathering frame. The limits in terms of reliability, latency and bandwidth will be evaluated. Adaptivity to additional traffic demands will be improved.
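A first step of such joint optimization is computing how many slots each link needs per data gathering frame. In raw-data convergecast every node forwards its own packet plus all packets from its subtree, so the demand on the link to a node's parent equals that node's subtree size (a toy computation, not the team's optimizer):

```python
# Per-link slot demands for raw-data convergecast on a gathering tree.
# tree: mapping node -> list of children.
def link_demands(tree, root):
    demands = {}
    def subtree_size(node):
        size = 1
        for child in tree.get(node, []):
            c = subtree_size(child)
            demands[(child, node)] = c   # child forwards its whole subtree
            size += c
        return size
    total = subtree_size(root)
    return demands, total

# Sink with children a and b; a has children c and d.
tree = {"sink": ["a", "b"], "a": ["c", "d"]}
demands, total = link_demands(tree, "sink")
```

The sink must receive `total - 1` packets per frame, which lower-bounds the single-channel schedule length; multiple channels let non-conflicting links share slots and shorten the gathering frame.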
To adapt to varying conditions in the environment and application requirements, the EVA team investigates self-learning networks. Machine learning approaches, based on experts and forecasters, are investigated to predict the quality of the wireless links in a WSN. This allows the routing protocol to avoid using links exhibiting poor quality and to change the route before a link fails. Additional applications include deciding where to place the aggregation function in data gathering. In a content delivery network (CDN), it is very useful to predict the popularity, expressed as the number of requests per day, of a multimedia content. The most popular contents are cached near the end-users to maximize the hit ratio of end-users' requests. Thus the satisfaction of end-users is maximized and the network overhead is minimized.
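Prediction with expert advice can be sketched with the exponentially weighted average forecaster (an illustrative instance, not the exact estimator from the team's work): each "expert" proposes a PDR for the next period, and experts are re-weighted by their squared error, so the combined forecast tracks whichever expert has been accurate recently.

```python
import math

# Exponentially weighted average forecaster over a set of experts.
# expert_preds: per round, a list of per-expert PDR predictions in [0, 1].
def ewa_forecasts(expert_preds, outcomes, eta=2.0):
    weights = [1.0] * len(expert_preds[0])
    forecasts = []
    for preds, y in zip(expert_preds, outcomes):
        total = sum(weights)
        forecasts.append(sum(w * p for w, p in zip(weights, preds)) / total)
        # Exponential re-weighting by squared loss against the observed PDR.
        weights = [w * math.exp(-eta * (p - y) ** 2)
                   for w, p in zip(weights, preds)]
    return forecasts

# Expert 0 always predicts PDR 0.9, expert 1 always 0.3; the true PDR is
# 0.9, so the combined forecast converges towards expert 0.
rounds = 10
f = ewa_forecasts([[0.9, 0.3]] * rounds, [0.9] * rounds)
```

A routing protocol can act on such forecasts, steering traffic away from a link before its predicted quality drops below a threshold.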
Existing Internet threats might steal our digital information. Tomorrow's threats could disrupt power plants, home security systems, hospitals. The Internet of Things is bridging our digital security with personal safety. Popular magazines are full of stories of hacked devices (e.g. drone attack on Philips Hue), IoT botnets (e.g. Mirai), and inherent insecurity.
Why has the IoT industry failed to adopt the available computer security techniques and best practices?
Our experience from research, industry collaborations, and the standards bodies has shown that the main challenges are:
Our research goal is to contribute to a more secure IoT, by proposing technical solutions to these challenges for low-end IoT devices with immediate industrial applicability and transfer potential. We complement the existing techniques with the missing pieces to move towards truly usable and secure IoT systems.
Wireless networks have become ubiquitous and are an integral part of our daily lives. These networks are present in many application domains; the most important are detailed in this section.
Networks in industrial process automation typically perform monitoring and control tasks.
Wired industrial communication networks, such as HART 1, have been around for decades and, being wired, are highly reliable.
Network administrators tempted to “go wireless” expect the same reliability.
Reliable process automation networks – especially when used for control – often impose stringent latency requirements.
Deterministic wireless networks can be used in critical systems such as control loops. However, the unreliable nature of the wireless medium, coupled with the large scale and “ad-hoc” nature of these networks, raises some of the most important challenges for low-power wireless research over the next 5-10 years.
Through the involvement of team members in standardization activities, protocols and techniques are proposed for the standardization process with a view to becoming the de-facto standard for wireless industrial process automation.
Besides producing top level research publications and standardization activities, EVA intends this activity to foster further collaborations with industrial partners.
Today, outdoor WSNs are used to monitor vast rural or semi-rural areas and may be used to detect fires. Another example is detecting fires in outdoor fuel depots, where the delivery of alarm messages to a monitoring station in an upper-bounded time is of prime importance. Other applications consist in monitoring the snow melting process in mountains, tracking the quality of water in cities, registering the height of water in pipes to foresee flooding, etc. These applications lead to a vast number of technical issues: deployment strategies to ensure suitable coverage and good network connectivity, energy efficiency, reliability and latency, etc.
We work on such applications through the associate team “SWARM” with the Pister team at UC Berkeley.
The general agreement is that the Internet of Things (IoT) is composed of small, often battery-powered objects which measure and interact with the physical world, and encompasses smart home applications, wearables, smart city and smart plant applications.
It is absolutely essential to (1) clearly understand the limits and capabilities of the IoT, and (2) develop technologies which enable user expectation to be met.
The EVA team is dedicated to understanding and contributing to the IoT. In particular, the team maintains a good understanding of the different technologies at play (Bluetooth, IEEE 802.15.4, WiFi, cellular), and their trade-offs. Through scientific publications and other contributions, EVA helps establish which technology best fits which application.
Through the HIPERCOM project, EVA has developed cutting-edge expertise in using wireless networks for military, energy and aerospace applications. Wireless networks are a key enabling technology in these application domains, as they allow physical processes to be instrumented (e.g. the structural health of an airplane) at a granularity not achievable by their wired counterparts. Using wireless technology in these domains does however raise many technical challenges, including end-to-end latency, energy-efficiency, reliability and Quality of Service (QoS). Mobility is often an additional constraint in energy and military applications. Achieving scalability is of paramount importance for tactical military networks, and, albeit to a lesser degree, for power plants. EVA works in this domain.
Smart cities share the constraint of mobility (both pedestrian and vehicular) with tactical military networks. Vehicular Ad-hoc NETworks (VANETs) will play an important role in the development of smarter cities.
The coexistence of different networks operating in the same radio spectrum can cause interference that should be avoided. Cognitive radio provides secondary users with the frequency channels that are temporarily unused (or unassigned) by primary users. Such opportunistic behavior can also be applied to urban wireless sensor networks. Smart cities raise the problem of transmitting, gathering, processing and storing big data. Another issue is to provide the right information at the place where it is most needed.
In an “emergency” application, heterogeneous nodes of a wireless network cooperate to recover from a disruptive event in a timely fashion, thereby possibly saving human lives. These wireless networks can be rapidly deployed and are useful to assess damage and take initial decisions. Their primary goal is to maintain connectivity with the humans or mobile robots (possibly in a hostile environment) in charge of network deployment. The deployment should ensure the coverage of particular points or areas of interest. The wireless network has to cope with pedestrian mobility and robot/vehicle mobility. The environment, initially unknown, is progressively discovered and may contain numerous obstacles that should be avoided. The nodes of the wireless network are usually battery-powered. Since they are placed by a robot or a human, their weight is very limited. The protocols supported by these nodes should be energy-efficient to maximize network lifetime. In such a challenging environment, sensor nodes should be replaced before their batteries are depleted. It is therefore important to be able to accurately determine the battery lifetime of these nodes, enabling predictive maintenance.
The EVA team distinguishes between opportunistic communication (which takes advantage of a favorable state) and collaborative communication (several entities collaborate to reach a common objective). Furthermore, determinism can be required to schedule medium access and node activity, and to predict energy consumption.
In the EVA project, we propose self-adaptive wireless networks whose evolution is based on:
The types of wireless networks encountered in the application domains can be classified in the following categories.
Standardization activities at the IETF have defined an “upper stack” allowing low-power mesh networks to be seamlessly integrated in the Internet (6LoWPAN), form multi-hop topologies (RPL), and interact with other devices like regular web servers (CoAP).
Major research challenges in sensor networks are mostly related to (predictable) power conservation and efficient multi-hop routing. Applications such as monitoring of mobile targets, and the generalization of smart phone devices and wearables, have introduced the need for WSN communication protocols to cope with node mobility and intermittent connectivity.
Extending WSN technology to new application spaces (e.g. security, sports, hostile environments) could also assist communication by seamless exchanges of information between individuals, between individuals and machines, or between machines, leading to the Internet of Things.
Wired sensor networks have been used for decades to automate production processes in industrial applications, through standards such as HART.
Because of the unreliable nature of the wireless medium, a wireless version of such industrial networks was long considered infeasible.
In 2012, the publication of the IEEE 802.15.4e standard triggered a revolutionary trend in low-power mesh networking: merging the performance of industrial networks, with the ease-of-integration of IP-enabled networks. This integration process is spearheaded by the IETF 6TiSCH working group, created in 2013. A 6TiSCH network implements the IEEE 802.15.4e TSCH protocol, as well as IETF standards such as 6LoWPAN, RPL and CoAP. A 6TiSCH network is synchronized, and a communication schedule orchestrates all communication in the network. Deployments of pre-6TiSCH networks have shown that they can achieve over 99.999% end-to-end reliability, and a decade of battery lifetime.
The communication schedule of a 6TiSCH network can be built and maintained using a centralized, distributed, or hybrid scheduling approach. While the mechanisms for managing that schedule are being standardized by the IETF, which scheduling approach to use, and the associated limits in terms of reliability, throughput and power consumption remain entirely open research questions. Contributing to answering these questions is an important research direction for the EVA team.
In contrast to routing, other domains in Mobile Ad-hoc NETworks (MANETs) such as medium access, multi-carrier transmission, quality of service, and quality of experience have received less attention. The establishment of research contracts for EVA in the field of MANETs is expected to remain substantial. MANETs will remain a key application domain for EVA with users such as the military, firefighters, emergency services and NGOs.
Vehicular Ad hoc Networks (VANETs) are arguably one of the most promising applications for MANETs. These networks primarily aim at improving road safety. Radio spectrum has been ring-fenced for VANETs worldwide, especially for safety applications. International standardization bodies are working on building efficient standards to govern vehicle-to-vehicle or vehicle-to-infrastructure communication.
We propose to initially focus this activity on spectrum sensing. For efficient spectrum sensing, the first step is to discover the links (sub-carriers) on which nodes may initiate communications. In Device-to-Device (D2D) networks, one difficulty is scalability.
For link sensing, we will study and design new random access schemes for D2D networks, starting from active signaling. This will assume the availability of a control channel devoted to D2D neighbor discovery. It is therefore naturally coupled with cognitive radio algorithms (allocating such resources): coordination of link discovery through eNode-B information exchanges can yield further spectrum usage optimization.
Low-power wireless networks are an “efficiency technology” in that they enable efficient environmental observations with the goal of reducing the overall footprint. The EVA team is working on several use cases of low-power wireless for environmental applications.
BurnMonitor.
The 2020 wildfire season in California was the largest in the state's modern history, with over 4 million acres burnt, claiming 31 lives, and costing the state over 12 billion dollars.
The BurnMonitor project brings together Inria and several partners to build a complete wildfire early detection solution.
BurnMonitor is an early wildfire detection IoT (Internet of Things) solution which combines sensors on the ground, data analysis tools and satellite imagery.
The fire department installs a fence of wireless sensors around the areas it wants to protect.
Fire-proof plastic enclosures contain the necessary sensors to detect fire, as well as the electronics to wirelessly communicate.
The wireless sensors form a highly reliable low-power wireless mesh network around a gateway device. The network’s >99.999% reliability is crucial to avoid missing a fire alarm because of connectivity issues.
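Where that reliability figure comes from can be sketched with rough accounting (hypothetical numbers, not BurnMonitor measurements): with per-attempt link PDR p and up to r transmission attempts per hop, one hop succeeds with probability 1 - (1 - p)^r, and an h-hop path succeeds with that quantity to the power h. Channel hopping keeps successive attempts roughly independent, which is what makes "five nines" reachable even over mediocre links.

```python
# End-to-end delivery probability over a multi-hop path with per-hop retries.
def end_to_end_reliability(p, retries, hops):
    per_hop = 1.0 - (1.0 - p) ** retries   # at least one attempt succeeds
    return per_hop ** hops                 # every hop must succeed

# Even with a mediocre 70% per-attempt link PDR, 12 attempts per hop over
# a 4-hop path already exceed 99.999% end-to-end delivery.
print(end_to_end_reliability(p=0.7, retries=12, hops=4))
```

Without retries the same path would deliver fewer than a quarter of the packets, which is why link-layer retransmission is non-negotiable in these networks.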
The team conducted full proof-of-concept live fires to test the capability of the system to accurately detect a fire and generate meaningful data.
In a test, the first step is to start a controlled fire in a representative area.
For more information:
Falco.
The Falco product, created by the EVA team's spin-off Wattson Elements, is a low-power wireless sensor solution to monitor a marina and the boats it contains.
It uses a series of sensors to warn the marina in case a fire starts on a boat; if not stopped in time, such a fire can have devastating effects on the environment.
Falco also develops sensors which go into the electricity pedestals the boats plug into, to encourage boat owners to reduce their consumption.
Finally, Falco has launched a connected buoy, to make it easier for boaters to book a mooring buoy, preventing them from anchoring and thereby damaging the ocean floor.
These environmental actions contributed to Falco receiving the “IoT Grand Public” award at the 2021 “Trophées de l'Embarqué” national competition, which focuses on environmental impact.
For more information:
The EVA team is very aware of how hard it is to hire someone with experience in embedded systems, networking and related fields, in particular women. The team therefore conducts a number of actions to promote these fields at different universities and schools.
Dust Academy.
Embedded systems are the perfect teaching tool.
They offer infinite opportunities to let students “see for themselves”.
Adding connectivity (low-power wireless, for example) allows the students to build very complex chains of information.
In the most complete case, information goes from a physical sensor to a micro-controller, through a low-power wireless mesh network, to a gateway, to a single-board computer, to a cloud-based back-end system, to a database, and to the student's browser.
Being able to build up this entire chain fast and with relatively simple components is both incredibly motivating for the students (“The dial is moving on my phone!”, “I can control my fan remotely!”), and offers the instructor infinite possibilities to dig into any topic, from SPI buses to RTOS priority inversion, embedded protocols or web interaction.
With this perspective, we have developed the “Dust Academy” series of courses, which we have now taught over 20 times in universities around the world.
For more information:
DotBot.
Large, coordinated "swarms" of small, resource constrained robots have the potential to coordinate to complete complex tasks that single monolithic robots cannot.
However, while there is ongoing research, little progress has been made in successfully deploying these swarms in the real world.
To help further the field, we propose a research platform called the DotBot, a low-price, versatile laser cut robot that can inexpensively act as an agent in a swarm of robots.
Each DotBot has two small motors for mobility, accurate localization using laser lighthouses, and can communicate using off-the-shelf radios in either time-synchronized channel-hopping mesh networks originally designed for reliable transmission in crowded IoT networks, or with BLE so that the robots can be programmed from a cell phone or other Bluetooth-enabled device.
We see the DotBot platform as an ideal tool for introducing robotics and embedded programming in education.
We target 3 levels.
First, in primary school, DotBot serves as a basic introduction to robotics, using simple interaction and remote-control scenarios.
In high school, DotBot is used as an introduction to embedded programming, with a focus on the interaction with the real world.
Finally, in university, a DotBot swarm is used to introduce the concepts of distributed algorithms, task assignment as well as planning and scheduling.
For more information:
Micro-Robotics.
In 2021, the team has started working on micro-robotics, from different angles.
From a simulation point of view, we revamped our Atlas simulation platform to be able to simulate both the movement and the communication of the robots.
From an experimental point of view, we designed the DotBot, a cheap and simple platform for conducting swarm research on.
LAKE Standardization.
The IETF LAKE working group has met 9 times in 2021 and released 7 iterations of the solution document.
The working group declared the protocol “ready for formal analysis” 17 in November 2021 and has consequently frozen the publication of new versions of the document until roughly the IETF 113 meeting, when the first results are expected.
The team has participated in interop testing events with its two implementations (EDHOC-C and py-edhoc) and has continued aligning them with the specification.
Smart Dust.
The collaboration with Prof. Kris Pister's group at UC Berkeley has continued on the Single Chip Micro Mote project.
This year we have further demonstrated network integration of crystal-free wireless nodes, in particular their robustness against temperature variation.
This will pave the way for millimeter-scale wireless nodes in the future.
Falco.
The Falco startup (Homepage), which was launched in 2019, has been developing fast.
After winning the Innovation Competition of the 2019 Paris Nautic Show,
Falco was the recipient of the i-Lab award, the largest innovation competition for startup companies in France, in July 2020, and
of the IoT Award of the Embedded Trophy (Trophées de l'Embarqué, catégorie “IoT Grand Public”) in 2022.
The Falco solution is now deployed in 15 marinas;
the Falco team is now 15 people.
This section lists the software and platforms developed by the team.
There are three pieces to Argus:
The Argus Probe is the program which attaches to your low-power wireless sniffer and forwards its traffic to the Argus Broker.
The Argus Broker sits somewhere in the cloud. Based on MQTT, it connects Argus Probes with Argus Clients following a pub-sub architecture.
Several Argus Clients can be started at the same time. An Argus Client is a program which subscribes to the Argus Broker and displays the received frames in Wireshark.
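The Probe-to-Broker hand-off can be sketched as follows (the topic layout and field names here are assumptions for illustration, not the actual Argus wire format): the probe wraps each sniffed frame with its metadata and publishes it on a per-sniffer topic that clients subscribe to, possibly with wildcards.

```python
import base64
import json

# Hypothetical framing of a sniffed packet for MQTT pub-sub: the binary
# frame is base64-encoded inside a JSON payload, and the topic identifies
# the probe so clients can filter per sniffer ("argus/+/frames" for all).
def wrap_frame(sniffer_id, channel, frame_bytes):
    topic = f"argus/{sniffer_id}/frames"
    payload = json.dumps({
        "channel": channel,
        "frame": base64.b64encode(frame_bytes).decode("ascii"),
    })
    return topic, payload

topic, payload = wrap_frame("probe-01", 15, b"\x41\x88\x01")
```

On the client side, the inverse transformation reconstructs the raw bytes before handing them to Wireshark.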
The team is now fully equipped with all the equipment necessary to assemble a small batch of prototyping Printed Circuit Boards and build small objects, including robots. This has proven to be a fantastically efficient setup to speed up our research, as prototyping isn't a hurdle anymore. Take a look at this YouTube video for an overview of our setup.
A lot of the work in the team revolves around small robotics, centered on the DotBot platform described above.
For more information:
DotBot is the physical robot. To be able to explore efficient algorithms and networking approaches to coordinate a large number of robots, we have been developing the Atlas simulator, a fully Python-based simulation platform. It simulates both the robots (especially their movement and control) and the communication. With Atlas, we explore the impact of communication (in particular latency and reliability) on the effectiveness of swarm orchestration.
Progress on automatic compensation of crystal-free motes against variation in temperature was performed in a significant collaboration with UC Berkeley on the Single Chip Micro Mote.
In our paper 27, temperature compensation was demonstrated during a slow temperature ramp. In addition, the chip was used as an IEEE 802.15.4-to-BLE translator, demonstrating its capabilities as a configurable multi-protocol device. In our paper 7, a network-based approach was used to keep the mote on-channel and on-time during a much faster temperature transient generated with a hairdryer.
The IETF LAKE working group, formed in late 2019, standardizes a lightweight authenticated key exchange protocol for IoT use cases.
The group is co-chaired by Mališa Vučinić of Inria-EVA.
Our results published in 2021 on the performance of TLS and DTLS protocols in 6TiSCH networks 8 show performance penalties for the network if the security protocol is not carefully selected and tuned. For example, when using an unreliable communication link in our settings, the DTLS handshake duration suffers a performance penalty of roughly 45%, while TLS' handshake duration degrades by 15%.
The 6TiSCH working group has concluded its standardization work by publishing RFC9033 18. RFC9033 describes the minimal framework required for a new device, called “pledge”, to securely join a 6TiSCH (IPv6 over the TSCH mode of IEEE 802.15.4e) network. RFC9033 defines the Constrained Join Protocol and its CBOR (Concise Binary Object Representation) data structures, and describes how to configure the rest of the 6TiSCH communication stack for this join process to occur in a secure manner.
We worked on several optimizations of the 6TiSCH protocol stack.
Although network formation time is one of the key performance indicators of wireless sensor networks, it has not been studied well with 6TiSCH standard protocols such as MSF (6TiSCH Minimal Scheduling Function) and CoJP (Constrained Join Protocol). We therefore proposed a scheduling function called SF-Fastboot 26 which shortens the network formation time of 6TiSCH. We evaluated SF-Fastboot by simulation, comparing it with MSF, the state-of-the-art scheduling function. The simulation shows that SF-Fastboot reduces network formation time by 41–80%.
Although there are several proposed TSCH scheduling solutions in the literature, most of them are not directly applicable to 6TiSCH for real-world deployments because they fail to take into consideration the dynamics of a network. We therefore proposed a full-featured 6TiSCH scheduling function called YSF 16, which autonomously takes into account all aspects of network dynamics, including the network formation phase and parent switching. YSF aims at minimizing latency and maximizing reliability for data gathering applications. We evaluated YSF by simulation, and compared it to MSF, the state-of-the-art scheduling function being standardized by the IETF 6TiSCH working group.
Through the PhD work of Mina Rady, we explored extensions of the 6TiSCH protocol stack to support multiple physical layers.
We started by publishing a research report 30 which introduces early results from an experiment to integrate multiple radios in the same 6TiSCH network. It provides an initial step towards the publication of an article. The work discusses the architecture of the proposed solution, and presents its performance compared to single-PHY networks.
We then introduced g6TiSCH 14, a generalization of the standards-based IETF 6TiSCH protocol stack. g6TiSCH allows nodes equipped with multiple radios to dynamically switch between them on a link-by-link basis, as a function of link-quality. This approach results in a dynamic trade-off between latency and power consumption. We evaluated the performance of the approach experimentally on an indoor office testbed of 36 OpenMote B boards.
Finally, we introduced 6DYN 13, an extension to the IETF 6TiSCH standards-based protocol stack. In a 6DYN network, nodes switch physical layers dynamically on a link-by-link basis, in order to exploit the diversity offered by this technological agility. To offer low latency and high network capacity, 6DYN uses heterogeneous slot durations: the length of a slot in the 6TiSCH schedule depends on the physical layer used.
We envision swarms of mm-scale micro-robots to be able to carry out critical missions such as exploration and mapping for hazard detection and search and rescue. These missions share the need to reach full coverage of the explorable space and build a complete map of the environment. To minimize completion time, robots in the swarm must be able to exchange information about the environment with each other. However, communication between swarm members is often assumed to be perfect, an assumption that does not reflect real-world conditions, where impairments can affect the Packet Delivery Ratio (PDR) of the wireless links.
In a first paper 20, we studied how communication impairments can have a drastic impact on the performance of a robotic swarm. We presented Atlas 2.0, an exploration algorithm that natively takes packet loss into account. We simulated the effect of various PDRs on robotic swarm exploration and mapping in three different scenarios. Our results show that the time it takes to complete the mapping mission increases significantly as the PDR decreases: on average, halving the PDR triples the time it takes to complete mapping.
In a second paper 21, we studied how communication impairments can have a drastic impact on the performance of robotic swarms in critical missions such as exploration. We used an improved version of the Atlas algorithm to simulate the effect of various PDRs on the exploration mission execution performance, with the key indicator being mapping completion time. Our results show that the time it takes to complete area exploration increases exponentially as the PDR decreases linearly. Based on our results, we emphasise the importance of considering methods that minimize the delay caused by lossy communication when designing and implementing algorithms for robotic swarm exploration.
Existing methods for wireless sensor network (WSN) topology optimization employ the simplifying assumption of a fixed communication radius between network nodes, which is ill-suited for IoT networks deployed in complex terrain. We therefore proposed a data-driven approach to WSN topology optimization 12, employing a Bayesian link classifier trained on LIDAR-derived terrain characteristics and an in-situ survey of link quality. The classifier is trained to predict where good network links (packet delivery ratio, PDR > 0.5) are likely to form in a region, given complex terrain attributes. Then, given numerous candidate wireless node placements throughout the domain, the classifier is used to construct an undirected weighted graph of the potential connectivity across the domain. Edge weights in the connectivity graph are proportional to the probability of forming a good link between the nodes. A novel modified cycle-union (MCyU) algorithm for generating a 2-vertex-connected, Steiner minimal network is then applied to the undirected weighted graph of potential network element placements. This ensures a survivable network design, while maximizing the probability of good links within the final network. The total number and spatial distribution of network elements produced by the algorithm are compared to those of an existing WSN deployed for environmental monitoring in remote regions. In addition, the MCyU algorithm has been evaluated on three graph test cases against state-of-the-art solutions, which it outperforms in terms of weight minimization and computation time.
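The classifier-to-graph step can be sketched as follows. The toy logistic link model and all names below are illustrative placeholders, not the trained Bayesian classifier or the API from the paper: the point is only how per-link probabilities become edge weights for a survivable-topology algorithm such as MCyU.

```python
import math

def link_good_probability(features):
    """Hypothetical stand-in for the trained link classifier:
    maps terrain-derived features of a candidate link (here, a toy
    (slope, distance) pair) to P(PDR > 0.5)."""
    slope, distance_m = features
    z = 3.0 - 0.01 * distance_m - 2.0 * slope  # toy logistic model
    return 1.0 / (1.0 + math.exp(-z))

def build_connectivity_graph(candidates, link_features):
    """Undirected weighted graph over candidate node placements;
    edge weights are proportional to the probability of a good link."""
    graph = {c: {} for c in candidates}
    for (a, b), feats in link_features.items():
        p = link_good_probability(feats)
        graph[a][b] = p
        graph[b][a] = p
    return graph
```

A short, flat link ends up with a much higher weight than a long, steep one, which is exactly the signal MCyU needs to prefer reliable edges while keeping the network 2-vertex-connected.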
The wireless TSCH (Time Slotted Channel Hopping) network, specified in the IEEE 802.15.4e amendment to the IEEE 802.15.4 standard, has many appealing properties. Its schedule of multichannel slotted data transmissions ensures the absence of collisions. Because there are no retransmissions due to collisions, communication is faster. Since the devices save energy whenever they do not take part in a transmission, the power autonomy of the nodes is prolonged. Furthermore, channel hopping mitigates multipath fading and interference.
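These two mechanisms are easy to make concrete. In TSCH, the channel used in a scheduled cell is derived from the Absolute Slot Number (ASN) and the cell's channel offset, so successive transmissions in the same cell hop across frequencies; a minimal sketch (the schedule contents are illustrative):

```python
# The 16 IEEE 802.15.4 channels in the 2.4 GHz band.
HOPPING_SEQUENCE = list(range(11, 27))

def tsch_channel(asn, channel_offset, sequence=HOPPING_SEQUENCE):
    """Standard TSCH channel hopping rule: the channel depends on the
    Absolute Slot Number, so a link scheduled in one cell uses a
    different frequency at each slotframe iteration."""
    return sequence[(asn + channel_offset) % len(sequence)]

# A TSCH schedule maps (slot_offset, channel_offset) cells to links;
# because each cell is dedicated to one link, transmissions never collide.
schedule = {(0, 0): ("A", "B"), (1, 5): ("B", "C")}
```

With a 16-entry hopping sequence, a cell cycles through all 16 channels before reusing one, which is what averages out multipath fading and narrowband interference.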
All communication in a TSCH network is orchestrated by the communication schedule it uses. The scheduling algorithm hence drives the latency and capacity of the network, and the power consumption of the nodes. To achieve the flexibility and self-organizing capacity required by the IoT, networks have to be able to adapt to changes. These changes may concern the application itself, the network topology (devices being added or removed) or the generated traffic (device sampling frequencies being increased or decreased), for instance. That is why flexibility of the schedule ruling all network communications is needed. We have designed a number of scheduling algorithms for TSCH networks, answering different needs. For instance, the centralized Load-based scheduler, which assigns cells per flow, starting with the flow originating from the most loaded node, has proved optimal for many configurations. Simulations with the 6TiSCH simulator showed that it achieves latencies close to the optimum. They also highlighted that end-to-end latencies are positively impacted by message prioritization (i.e. each node transmits the oldest message first) at high loads, and negatively impacted by unreliable links, as presented at GlobeCom 2019.
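The core idea of the Load-based scheduler (serve the flow originating from the most loaded node first, and give each hop a cell that is conflict-free for both endpoints) can be sketched as a toy allocator. This is our own simplification for illustration, not the paper's implementation, and the data structures are assumptions:

```python
def load_based_schedule(flows, slotframe_len=101, channel=0):
    """Toy centralized cell allocation in the spirit of the Load-based
    scheduler: flows are sorted by the load of their origin node, and
    each hop of a flow's path gets the first cell that is unused and
    free for both endpoints, so no two transmissions can collide.
    'flows' maps a source node to (path, load)."""
    busy = {}       # node -> slot offsets already used by that node
    schedule = {}   # (slot_offset, channel_offset) -> (transmitter, receiver)
    for src, (path, load) in sorted(flows.items(), key=lambda kv: -kv[1][1]):
        for tx, rx in zip(path, path[1:]):
            for slot in range(slotframe_len):
                if ((slot, channel) not in schedule
                        and slot not in busy.get(tx, set())
                        and slot not in busy.get(rx, set())):
                    schedule[(slot, channel)] = (tx, rx)
                    busy.setdefault(tx, set()).add(slot)
                    busy.setdefault(rx, set()).add(slot)
                    break
    return schedule
```

Allocating hops of a flow in path order makes the assigned slot offsets increase along the path, so a packet can traverse the whole route within a single slotframe, which is why this family of schedulers yields low end-to-end latency.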
Among the distributed scheduling algorithms proposed in the literature, many rely on assumptions that may be violated by real deployments. This violation usually leads to conflicting transmissions of application data, decreasing the reliability and increasing the latency of data delivery. Others require a processing complexity that cannot be provided by sensor nodes of limited capabilities. Still others are unable to adapt quickly to traffic or topology changes, or are valid only for small traffic loads. We have designed MSF and YSF, two distributed scheduling algorithms that are adaptive and compliant with the standardized protocols of the IETF 6TiSCH working group. The Minimal Scheduling Function (MSF) is a distributed scheduling algorithm in which neighbor nodes locally negotiate the addition and removal of cells. MSF was evaluated by simulation and experimentation before becoming the default scheduling algorithm of the IETF 6TiSCH working group, and is now an official standard. We also designed LLSF, a scheduling algorithm focused on low-latency communication. Finally, we proposed a full-featured 6TiSCH scheduling function called YSF, which autonomously takes into account all aspects of network dynamics, including the network formation phase and parent switches. YSF aims at minimizing latency and maximizing reliability for data gathering applications. An intensive simulation campaign with the 6TiSCH simulator shows that YSF yields lower end-to-end latency and a higher end-to-end packet delivery ratio than MSF, regardless of the network topology. Unlike other top-down scheduling functions, YSF does not rely on any assumption regarding network topology or traffic load, and is therefore more robust in real network deployments.
Furthermore, we published additional research on computing upper bounds on the end-to-end latency and on finding the best trade-off between latency and network lifetime.
Enabling Named Data Networking (NDN) in real-world Internet of Things (IoT) deployments is essential to benefit from Information-Centric Networking (ICN) features in current IoT systems. To design realistic NDN-based communication solutions for the IoT, revisiting mainstream technologies such as low-power wireless standards may be the key. We explore NDN forwarding over IEEE 802.15.4 by modeling broadcast-based forwarding 6. One objective of the model is to show that caching can attenuate the number of transmissions generated by broadcast, achieving a reasonable overhead while keeping the data dissemination power of NDN. Based on our observations, we adapt the Carrier-Sense Multiple Access (CSMA) algorithm of 802.15.4 to improve NDN wireless forwarding while reducing broadcast effects in terms of packet redundancy, round-trip time and energy consumption. As future work, we aim to explore more complex CSMA adaptations for lightweight forwarding, to make the most of NDN and design a general-purpose Named-Data CSMA.
The Internet of Things (IoT) connects tiny electronic devices able to measure a physical value (temperature, humidity, etc.) and/or to actuate on the physical world (pump, valve, etc). Due to their cost and ease of deployment, battery-powered wireless IoT networks are rapidly being adopted.
The promise of wireless communication is to offer wire-like connectivity. Major improvements have been made in that direction, but many challenges remain, as industrial applications have strong operational requirements. This segment of IoT applications is called the Industrial IoT (IIoT).
By 2020, the number of connected objects was expected to exceed several billion devices. These objects will be present in everyday life, in smarter homes and cities, as well as in future smart factories that will revolutionize the organization of industry. This is the expected fourth industrial revolution, better known as Industry 4.0, in which the Internet of Things (IoT) is considered a key enabler of this major transformation. The IoT will allow more intelligent monitoring and self-organizing capabilities than in traditional factories. As a consequence, the production process will be more efficient and flexible, with products of higher quality.
To produce better quality products and improve monitoring in Industry 4.0, strong requirements in terms of latency, robustness and power autonomy have to be met by the networks supporting the Industry 4.0 applications.
The main IIoT requirement is reliability: no information transmitted in the network may be lost. Current off-the-shelf solutions offer over 99.999% reliability.
To provide the end-to-end reliability targeted by industrial applications, we investigate an approach based on message retransmissions (on the same path). We propose two methods to compute the maximum number of transmissions per message and per link required to achieve the targeted end-to-end reliability. The MFair method is very easy to compute and provides the same reliability over each link composing the path, by means of different maximum numbers of transmissions, whereas the MOpt method minimizes the total number of transmissions necessary for a message to reach the sink. MOpt provides better reliability and a longer lifetime than MFair, whereas MFair provides a shorter average end-to-end latency. This study was published in the Sensors journal in 2019.
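The MFair computation follows directly from its definition: with n links and an end-to-end target R, each link must reach per-link reliability r = R^(1/n), and a link of PDR p needs the smallest k with 1 - (1-p)^k >= r. For MOpt we sketch a greedy variant (our assumption, not necessarily the paper's exact optimization) that adds one transmission at a time where it helps most:

```python
import math

def mfair(link_pdrs, target):
    """MFair: same reliability r on every link, r**n >= target."""
    r = target ** (1.0 / len(link_pdrs))
    return [math.ceil(math.log(1 - r) / math.log(1 - p)) for p in link_pdrs]

def mopt(link_pdrs, target):
    """Greedy sketch of MOpt: repeatedly grant one extra transmission to
    the link whose extra attempt raises end-to-end reliability the most,
    until the target is met, minimizing the total transmission budget."""
    ks = [1] * len(link_pdrs)
    def rel(ks):
        out = 1.0
        for p, k in zip(link_pdrs, ks):
            out *= 1 - (1 - p) ** k   # per-link delivery probability
        return out
    while rel(ks) < target:
        best = max(range(len(ks)),
                   key=lambda i: rel(ks[:i] + [ks[i] + 1] + ks[i + 1:]))
        ks[best] += 1
    return ks
```

On a two-hop path with PDRs 0.9 and 0.8 and a 99% target, MFair allots 3 and 4 transmissions (7 in total), while the greedy MOpt variant meets the target with 3 and 3, illustrating why MOpt yields a longer lifetime.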
The goal is to construct next-generation access protocols for the IoT (or alternatively for vehicular networks). One starting point is the family of methods known as Non-Orthogonal Multiple Access (NOMA), where multiple transmissions can "collide" but can still be recovered, using sophisticated multiple access (MAC) protocols that take the physical layer/channel into account. One such example is the family of Coded Slotted Aloha methods. Another direction is vehicular communication, where vehicles communicate directly with each other without necessarily going through the infrastructure. This is also true more generally in any wireless network where control is relaxed (such as unlicensed IoT networks like LoRa). One observation is that in such distributed scenarios, explicit or implicit forms of signaling (with sensing, messaging, etc.) can be used for designing sophisticated protocols, including with machine learning techniques.
Many technological enhancements are being developed worldwide to enable the "Internet of Things" (IoT). IoT networks require reliability and low latency, which classical random access methods do not guarantee. To mend that shortcoming, it is paramount to adapt existing random access methods to the IoT setting. In this work we shed light on one of the modern candidate random access protocols fitted for the IoT: "Irregular Repetition Slotted ALOHA" (IRSA). As self-managing solutions are needed to overcome the challenges of the IoT, we study the IRSA random access scheme in a distributed setting where groups of users, with fixed traffic loads, compete for ALOHA-type channel access. To that aim, we adopt a distributed game-theoretic approach in which two classes of IoT devices autonomously learn their optimal IRSA protocol parameters to selfishly optimize their own effective throughput. Through extensive simulations, we assess the notable efficiency of the game-based distributed approach. We also show that our IRSA game attains a Nash equilibrium (NE) via the "better reply" strategy, and we quantify the price of anarchy in comparison with a centralized approach. Our results imply that user competition does not fundamentally impact the performance of the IRSA protocol.
We have studied one of the modern random access protocols, Irregular Repetition Slotted Aloha (IRSA). We have addressed the IRSA access scheme in a distributed fashion where users are grouped in competing classes, with users of the same class sharing the same degree distribution. The distributed approach is modeled as a non-cooperative game where the classes autonomously and selfishly set their degree probabilities to improve their own throughput. We gave proof for the existence of the Nash Equilibria and how to attain them. This proof is based on the Debreu-Fan-Glicksberg theorem. We provided extensive numerical results that assess the notable improvement brought by the devised approaches and the small discrepancy of the distributed game-based approach in comparison with a centralized class-based IRSA approach.
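To make the IRSA mechanics concrete, here is a minimal frame-level simulation with successive interference cancellation (SIC). It illustrates the access scheme being analyzed, not the game-theoretic learning itself, and the degree-distribution encoding is our own simplification:

```python
import random

def irsa_round(n_users, n_slots, degree_dist, rng):
    """One IRSA frame: each user sends d replicas of its packet in d
    random slots (d drawn from degree_dist, a {degree: probability}
    map), then the receiver iterates SIC: any slot left with a single
    replica is decoded, and all replicas of that user are cancelled,
    possibly freeing further slots.  Returns throughput (decoded
    packets per slot)."""
    slots = [set() for _ in range(n_slots)]
    placements = {}
    for u in range(n_users):
        d = rng.choices(list(degree_dist), weights=list(degree_dist.values()))[0]
        placements[u] = rng.sample(range(n_slots), d)
        for s in placements[u]:
            slots[s].add(u)
    decoded, progress = set(), True
    while progress:
        progress = False
        for s in range(n_slots):
            if len(slots[s]) == 1:          # singleton slot: decodable
                u = next(iter(slots[s]))
                decoded.add(u)
                for s2 in placements[u]:    # cancel all replicas of u
                    slots[s2].discard(u)
                progress = True
    return len(decoded) / n_slots
```

In the game-theoretic setting described above, each class would repeatedly adjust its degree distribution (a "better reply") based on the throughput such rounds produce.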
Cache Pollution Attacks in the NDN Architecture: Impact and Analysis
Content caching is an essential component of NDN: content is cached in routers and used for future requests in order to reduce bandwidth consumption and improve data delivery speed. Moreover, NDN introduces self-certifying content features that improve data security and make NDN a secure-by-design architecture able to support efficient and secure content distribution at a global scale. However, basic NDN security mechanisms, such as signatures and encryption, are not sufficient to ensure security in these networks. Indeed, the availability of Data in several caches in the network allows malicious nodes to perform attacks that are relatively easy to implement and very effective. Such attacks include Cache Pollution Attacks (CPA), Cache Privacy Attacks, Content Poisoning Attacks and Interest Flooding Attacks. In this study 23, we have identified the different attack models that can disrupt NDN operation. We have conducted several simulations with ndnSIM to assess the impact of the Cache Pollution Attack on the performance of a Named Data Network. More precisely, we implemented different attack scenarios and analyzed their impact in terms of cache hit ratio, data retrieval delay and hit damage ratio.
We have studied the impact of CPA on NDN through ndnSIM simulations. Using different scenarios, in simple as well as complex and realistic topologies, we have shown the impact of a CPA on caching efficiency. More specifically, our results reveal that a CPA decreases the Cache Hit Ratio (CHR) to almost 0% in several scenarios and increases the Average Retrieval Delay (ARD) by around 20% compared to its normal state, while the Hit Damage Ratio (HDR) reaches 0.6, which confirms the highly negative impact of this form of attack. In future work, we intend to exploit the results of this investigation to design a solution for detecting and mitigating the Cache Pollution Attack. In particular, we will develop an intelligent mechanism that computes the legitimacy of each cached data packet and uses this parameter to improve the caching policy.
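A toy model conveys why a CPA collapses the Cache Hit Ratio: an attacker requesting a stream of always-new, unpopular names keeps evicting popular content from the router's cache. The scenario below is a deliberately simplified sketch (single LRU cache, uniform popular catalogue), not our ndnSIM setup:

```python
import random
from collections import OrderedDict

class LRUCache:
    def __init__(self, size):
        self.size, self.store = size, OrderedDict()
    def request(self, name):
        """Return True on a cache hit; insert on a miss (LRU eviction)."""
        if name in self.store:
            self.store.move_to_end(name)
            return True
        self.store[name] = True
        if len(self.store) > self.size:
            self.store.popitem(last=False)
        return False

def cache_hit_ratio(n_requests, attack_ratio, rng):
    """Mix honest requests for a small popular catalogue with attacker
    requests for unique junk names; return the CHR seen by honest
    consumers."""
    cache, hits, legit = LRUCache(size=50), 0, 0
    for i in range(n_requests):
        if rng.random() < attack_ratio:
            cache.request(f"junk/{i}")                   # never reused
        else:
            legit += 1
            hits += cache.request(f"popular/{rng.randrange(40)}")
    return hits / legit
```

Even in this crude model, raising the attacker's share of requests drives the legitimate CHR from near 100% down sharply, mirroring the trend our ndnSIM simulations quantify.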
Evaluation of a new Radio Technology and Visible Light Communication for a Platooning Application
The autonomous platoon is today one of the key tools for better road utilization. In fact, by optimizing the distance between vehicles, the air drag is reduced, and researchers have shown reductions of around 20%. From a network point of view, reducing the distance between vehicles allows a new point-to-point communication link between the vehicles in front and behind by using Vehicular Visible Light Communication (V-VLC), thus providing an opportunity for hybrid communication. Our new radio design based on the AS-DTMAC protocol guarantees a high Quality of Service for real-time applications. However, at very high density, we can saturate the bandwidth dedicated to V2X radio. In the case of a platoon, this scenario can cause dangerous platoon instability. Assisting the radio with another communication vector such as V-VLC can help to maintain the high level of reliability that is necessary for the control of a platoon. We first carry out an analytical study to investigate the capacity of our new radio technology to support the platoon control use case in terms of the quality of service (QoS) required for this type of application. Secondly, we show through extensive simulations the current level of V-VLC technology, compared to radio technology, in terms of packet loss and delay.
In this work 22, we have shown the ability of our radio technology, based on AS-DTMAC, to respond to the QoS requirements of the platooning application. We have conducted large-scale platoon simulations based on the Veins-vlc framework, which uses a realistic V-VLC model. We have also presented the state of the art of V-VLC and shown the lack of maturity of this technology compared to RF. However, V-VLC is still an excellent assistant technology in the platoon use case. By highlighting the limitations of this technology, we will challenge them in the future in order to achieve its full capacity in terms of data rate, latency and inter-vehicular distance. Finally, we have compared the performance of RF and V-VLC in the platooning scenario in terms of PDR and delay. In future work, we will focus on integrating RF and VLC communication in a heterogeneous network in order to keep the reliability as high as possible. We plan to propose a smart switching protocol at the handover level to choose the best communication technology depending on the mobility scenario. This protocol will use a dynamic threshold and make decisions based on a vehicular real-time metric such as the Channel Busy Ratio (CBR), which gives information about the radio channel quality, but also on V2X by exploiting the information obtained from CAMs to estimate the network load. Thus, the protocol will be able to propose a redundant mode based on the use of RF and V-VLC together when the network load is low.
On September 29, 2021, Fouzi Boukhalfa defended his PhD thesis, "Low Latency Radio and Visible Light Communication For Autonomous Driving", at Inria Paris 28.
Impact Analysis of Greedy Behavior Attacks in Vehicular Ad hoc Networks
Vehicular Ad hoc Networks (VANETs), while promising new approaches to improving road safety, must be protected from a variety of threats. Greedy behavior attacks at the level of the Medium Access (MAC) Layer can have devastating effects on the performance of a VANET. This kind of attack has been extensively studied in contention-based MAC protocols. Hence, in this work, we focus on studying the impact of such an attack on a contention-free MAC protocol, the Distributed TDMA-based MAC Protocol (DTMAC). We identify new vulnerabilities related to the MAC slot scheduling process that can affect the slot reservation process of the DTMAC protocol, and we use simulations to evaluate their impact on network performance. Exploitation of these vulnerabilities would result in a severe waste of channel capacity, where up to a third of the free slots could not be reserved in the presence of an attacker. Moreover, multiple attackers could cripple the channel so that no vehicle could acquire a time slot.
In this work 24, we focus on the greedy behavior attack on the DTMAC protocol. Based on the characteristics of such an attack and of the protocol itself, we identify undocumented greedy behavior that can disrupt the slot reservation process in DTMAC, and then evaluate its impact by means of simulation. The slot scheduling vulnerability was exploited through two newly identified attacks: the neighbor reservation cancellation attack and the multi-access attack. The former was tested under two scenarios: the first with a single attacker in the network, while varying the percentage of affected neighbors, and the second with multiple attackers in the network. The results reveal that as the number of attacked neighbors increases, about 30% of the free slots cannot be reserved, which means that a third of the channel capacity is wasted. The multiple-attacker scenario shows that the network can be paralyzed so that no vehicle can acquire a free slot; on average, 8 attackers would need to be present to successfully carry out this task. The multi-access attack reveals that 50% of the free slots are wasted and unreserved if a greedy attacker forces its ID into all the free slots in its neighborhood. Another metric, the access collision ratio, is evaluated in this scenario, showing how the number of collisions increases in the presence of an attacker. In future work, we will exploit the results of this investigation to develop a solution for detecting and preventing greedy behavior attacks that threaten the DTMAC protocol, focusing mainly on the new attacks identified at the MAC level.
An Efficient Cross-Layer Design for Multi-hop Broadcast of Emergency Warning Messages in Vehicular Networks
The main objective of Vehicular ad hoc networks (VANETs) is to make road transportation systems more intelligent in order to anticipate and avoid dangerous, potentially life-threatening situations. Due to its promising safety applications, this type of network has attracted a lot of attention in the research community. The dissemination of warning messages, such as DENMs (Decentralized Environmental Notification Messages), requires an efficient and robust routing protocol. In previous studies, the active signaling mechanism has shown its ability to prevent collisions between users trying to allocate the same resource. In this work 25, we propose an original message forwarding strategy based on the active signaling mechanism. Our proposal disseminates warning messages from a source vehicle to the rest of the network while minimizing the access delay and the number of relay nodes. For this purpose, a special time slot is dedicated to forwarding emergency warning messages. To avoid access collisions on this slot, the active signaling scheme we propose favours the selection of the furthest node as the forwarder. We carry out a number of simulations and comparisons to evaluate the performance of the scheme.
We have proposed to enhance the DTMAC protocol by integrating active signaling. The simulation results show that AS-DTMAC drastically reduces the access collision rate and allocates slots to all the vehicles in the network in half the time it takes DTMAC to do so. We have also presented a V2V use case for urgent, high-priority traffic messages like DENMs, which can help to avoid accidents; these new features are very important for the future technologies described at the beginning of this paper. As future work, we will run additional simulations to compare with the standard used in V2V (IEEE 802.11p), develop an analytical model for AS-DTMAC, and investigate further advanced access features that could be provided using the active signaling scheme.
A game theory-based route planning approach for automated vehicle collection
We consider a shared transportation system in an urban environment where human drivers collect vehicles that are no longer being used. Each driver, also called a platoon leader, is in charge of driving collected vehicles as a platoon to bring them back to some given location (e.g. an airport or a railway station). Platoon allocation and route planning for picking up and returning automated vehicles is one of the major issues of shared transportation systems that need to be addressed. In this paper, we propose a coalition game approach to compute 1) the allocation of unused vehicles to a minimal number of platoons, 2) the optimized tour of each platoon and 3) the minimum energy consumed to collect all these vehicles. In this coalition game, the players are the parked vehicles, and the coalitions are the platoons that are formed. This game, where each player joins the coalition that maximizes its payoff, converges to a stable solution. The quality of the solution obtained is evaluated with regard to three optimization criteria, and its complexity is measured by the computation time required. Simulation experiments are carried out in various configurations. They show that this approach is very efficient at solving the multi-objective optimization problem considered, since it provides the optimal number of platoons in less than a second for 300 vehicles to be collected, and considerably outperforms other well-known optimization approaches such as MOPSO (Multi-Objective Particle Swarm Optimization) and NSGA-II (Non-dominated Sorting Genetic Algorithm).
We have defined the PROPAV problem as a coalition game, where the players are the electric automated vehicles to be picked up and returned to the rental station 10. The three optimization criteria are 1) the number of platoons, which is minimized, 2) the tour duration of each platoon leader, which is minimized, and 3) the total energy consumed, which is also minimized. Multiple constraints are taken into account, such as the maximum number of vehicles per platoon and the residual energy of each vehicle in the platoon. The coalition game converges very quickly to a set of coalitions, where each coalition is a platoon driven by a platoon leader. Simulation results obtained for various configurations, where the number of vehicles to pick up ranges from 10 to 300, show that game theory always provides the best quality solution in terms of the three optimization criteria. The coalition game always provides the optimal number of platoons, resulting in 20% fewer platoons than MOPSO and 13% fewer than NSGA-II for 300 vehicles to collect. Furthermore, the complexity of the coalition game, evaluated by both the computation time and the number of switches, is much smaller than that of MOPSO and NSGA-II. The computation time remains below 1 s for all tested cases, whereas the other methods require several minutes. This difference is crucial in an operational setting. To build upon this study, three further aspects could be taken into consideration in future work. First, we can extend the optimization approach based on this coalition game so as to take into account multiple rental stations. One strategy would be to assign an additional unknown to each unused vehicle, namely the rental station to which it should be brought. Alternatively, only the number of vehicles to return to each rental station could be specified. This second solution is probably the best suited for a carsharing system.
However, it would only make sense when coupled with an algorithm that decides what stations should be refilled to match demand. Second, additional objectives or constraints can be considered in the PROPAV problem to better reflect real-world conditions. For example, signalized intersections can cause platoon dispersion and the separation of some vehicles from the platoon to which they belong. Hence, the number of road crossings in the platoons’ tour should be kept as low as possible. Finally, in the long term, all the problems that are well suited to this coalition game should be characterized.
Exploring the forecasting approach for road accidents
Our 2020 study, Exploring the forecasting approach for road accidents: Analytical measures with hybrid machine learning, has been published in 15.
Failure of critical equipment disrupts production and leads to financial losses. The risk of unplanned equipment downtime can be reduced by adapting the maintenance of revenue-generating assets to ensure the efficiency and safety of the equipment. However, the growing number of instrumented assets creates a flood of data, and existing prediction models based on machine learning alone cannot predict the state of machinery in a timely manner. In this work, an in-depth study of Federated Reinforcement Learning is proposed for predictive maintenance in the context of a network of machine tools. Within each device, sensors collect raw data, and the health status of the device is analyzed for undesirable events. Unlike traditional black-box models, the proposed algorithm learns a self-sufficient maintenance policy using an integrated multi-agent learning approach and provides practical recommendations for each item of equipment. The study also looks at the security aspect of the predictive maintenance model and introduces a blockchain network. Our experimental results indicate that the proposal may serve a wide range of machine maintenance applications as a secure and automated learning framework.
As industries around the world move towards Industry 4.0's vision of increased productivity, modern equipment is becoming more sophisticated and harder to maintain. The result is an increasing demand for accurate, interpretable and portable predictions from predictive maintenance tools. In this work, we have introduced a way to give practical recommendations depending on the state of the equipment. We model the expected downtime of equipment as a function of many sensor inputs and of simulated equipment health states. These states must be identified with minimal configuration, which is challenging to address with existing methods. We have then shown that a blockchain-based Deep Federated Reinforcement Learning algorithm can quickly learn the right decision-making policy, in just a few steps. The test results show consistent maintenance recommendations for identical pieces of equipment that differ only in their initial health condition. Future work may include extending the current approach to other failure data and benchmarking it against actual equipment maintenance schedules.