2024 Activity Report - Project-Team ASTRA
RNSR: 202224314M - Research center: Inria Paris Centre
- In partnership with: Valeo
- Team name: Automated and Safe TRAnsportation systems
- Domain: Perception, Cognition and Interaction
- Theme: Robotics and Smart environments
Keywords
Computer Science and Digital Science
- A1.5. Complex systems
- A1.5.1. Systems of systems
- A1.5.2. Communicating systems
- A2.3. Embedded and cyber-physical systems
- A3.4. Machine learning and statistics
- A3.4.1. Supervised learning
- A3.4.2. Unsupervised learning
- A3.4.3. Reinforcement learning
- A3.4.5. Bayesian methods
- A3.4.6. Neural networks
- A3.4.8. Deep learning
- A5.3. Image processing and analysis
- A5.3.3. Pattern recognition
- A5.3.4. Registration
- A5.4. Computer vision
- A5.4.1. Object recognition
- A5.4.2. Activity recognition
- A5.4.4. 3D and spatio-temporal reconstruction
- A5.4.5. Object tracking and motion analysis
- A5.4.6. Object localization
- A5.5.1. Geometrical modeling
- A5.9. Signal processing
- A5.10. Robotics
- A5.10.2. Perception
- A5.10.3. Planning
- A5.10.4. Robot control
- A5.10.5. Robot interaction (with the environment, humans, other robots)
- A5.10.6. Swarm robotics
- A5.10.7. Learning
- A6. Modeling, simulation and control
- A6.1. Methods in mathematical modeling
- A6.2.3. Probabilistic methods
- A6.2.6. Optimization
- A6.4.1. Deterministic control
- A6.4.3. Observability and Controllability
- A6.4.4. Stability and Stabilization
- A6.4.5. Control of distributed parameter systems
- A8.6. Information theory
- A8.9. Performance evaluation
- A9.2. Machine learning
- A9.3. Signal analysis
- A9.5. Robotics
- A9.6. Decision support
- A9.7. AI algorithmics
Other Research Topics and Application Domains
- B5.2.1. Road vehicles
- B5.6. Robotic systems
- B6.6. Embedded systems
- B7.1.2. Road traffic
- B7.2. Smart travel
- B7.2.1. Smart vehicles
- B7.2.2. Smart road
- B9.5.6. Data science
1 Team members, visitors, external collaborators
Research Scientists
- Fawzi Nashashibi [Team leader, INRIA, Senior Researcher, HDR]
- Zayed Alsayed [VALEO, Industrial member]
- Hussam Atoui [VALEO, Industrial member, from Feb 2024]
- Alexandre Boulc'H [VALEO, Industrial member]
- Andrei Bursuc [VALEO, Industrial member]
- Guy Fayolle [INRIA, Emeritus]
- Fernando Garrido [VALEO, Industrial member]
- Jean-Marc Lasgouttes [INRIA, Researcher]
- Gerard Le Lann [INRIA, Emeritus]
- Renaud Marlet [VALEO, Industrial member]
- Gilles Puy [VALEO, Industrial member]
- Tiago Rocha Goncalves [VALEO, Industrial member, from Feb 2024]
- Tuan Hung Vu [VALEO, Industrial member]
- Raoul de Charette [INRIA, Senior Researcher, HDR]
Post-Doctoral Fellow
- Nelson De Moura [INRIA]
PhD Students
- Mohammed-Yasser Benigmim [INRIA, from Oct 2024, PhD visit (6 months)]
- Anh Quan Cao [INRIA]
- Karim Essalmi [VALEO, CIFRE]
- Mohammad Fahes [INRIA]
- Amina Ghoul [INRIA]
- Islem Kobbi [INRIA, from Oct 2024]
- Ivan Lopes [INRIA]
- Elias Maharmeh [VALEO, CIFRE]
- Tetiana Martyniuk [VALEO, CIFRE]
- Noel Nadal [INRIA]
- Jiahao Zhang [IRT System X, CIFRE, until Sep 2024]
Technical Staff
- Emmanuel Doucet [VALEO, Engineer, until Jan 2024]
- Axel Jeanne [VALEO, Engineer]
- Paulo Resende [VALEO, Engineer]
- Paul Roger-Dauvergne [INRIA, Engineer, from Nov 2024]
Interns and Apprentices
- Matteo Marengo [VALEO, Intern, from Apr 2024 until Oct 2024]
- Soumava Paul [INRIA, Intern, from Jun 2024]
- Abdelrahmaine Touami [INRIA, Intern, from May 2024 until Oct 2024]
- Fatoumata Wadiou [INRIA, Intern, from May 2024 until Sep 2024]
- Clément Weinreich [INRIA, Intern, from Apr 2024 until Sep 2024]
Administrative Assistants
- Martial Le Henaff [INRIA, from May 2024]
- Anne Mathurin [INRIA, until Apr 2024]
External Collaborator
- Itheri Yahiaoui [Université de Reims Champagne-Ardenne]
2 Overall objectives
Context
SAE International recently unveiled a new visual chart 91 designed to define the six levels of driving automation, from SAE Level 0 (no automation) to SAE Level 5 (full vehicle autonomy). It serves as the industry's most-cited reference for automated-vehicle (AV) capabilities.
Fully autonomous cars (Level 5 of automation according to SAE J3016), which can operate everywhere in all conditions, are not yet on the roads. Nevertheless, major advances are making vehicle automation a reality. Level 2/2+ (assisted driving) systems exist on series-production vehicles, and since 2021 Level 3 systems (conditional automation, where the driver takes over only upon system request) are available on privately owned vehicles; in addition, driverless public-transport vehicles are offered to passengers and for goods delivery around the world. Recent demonstrators (automated shuttles and robotaxis) have the merit of proving the feasibility of automated driving as a solution for improving mobility, comfort, safety and energy efficiency.
Current regulation (UN Regulation 157, adopted in June 2020 and voted by 60 countries) today allows vehicles to drive at Level 3 up to 60 km/h on carriageway roads. Original Equipment Manufacturers (OEMs) are pushing for the extension of this regulation up to 130 km/h, including automated lane changes. To allow that (L3/L4 on the highway), many challenges still have to be taken up: technical challenges of course, but also non-technical challenges which are not the easiest to deal with (legal, liability, ethical, monopoly, acceptance, economic...) and which are not in the scope of this document, even though some intersect with technical considerations 80, 99, 123.
In this context, the official ambition of France was previously recalled by the President of France, who reaffirmed his willingness to deploy these solutions and to extend transport services based on autonomous vehicles by 2021 wherever possible.
For public transportation, on-road experiments are conducted around the world in specific Operational Design Domains (ODDs) and the first commercial services are being deployed. For example, Yandex launched in 2019 the first commercial service in Europe in the city of Innopolis (Russia), and Waymo operates a ride-hailing service using highly automated vehicles in the Phoenix metropolitan area (US). These systems operate in geofenced, controlled environments because the technology is not yet mature enough to deal with all road types (missing lane markings, construction areas, reckless behaviour of road users such as scooter riders, etc.).
Therefore, the development of alternative solutions at a large scale needs other scientific foundations and technological breakthroughs. Car makers, suppliers, infrastructure operators and academics across the world are working today on ways to make driving safer, more comfortable, more efficient and more inclusive through automation, and the race is on to bring the technology to the mass market.
In this context, Inria and Valeo are internationally distinguished players, especially thanks to their R&D activities on automated unmanned vehicles and Cybercars, and more generally on the development of advanced intelligent sensor-based decision systems.
Motivation
Partners in numerous collaborative research projects and bilateral projects, Inria and Valeo have also collaborated in the supervision of doctoral and post-doctoral students. Many Inria researchers have also joined Valeo's R&D teams for several years. Finally, numerous technology transfer actions and joint patent applications have taken place. Motivated by this very strong collaboration for over 15 years, Inria and Valeo wanted to formalize this synergy by strengthening their links, both in the fields of research and technology transfer.
What could be better than to create a joint research team to share the same visions on mobility and transport automation? And what could be better than working together upstream on breakthrough research topics? This naturally resulted in the creation of a joint research team: the ASTRA team. This team brings together talents from three entities: the former RITS team at Inria (Paris), members of the DAR team at Valeo (Créteil) and members at Valeo.ai (Paris). Beyond the strategic vision assumed by the management of these three entities, the France Relance national plan was an important incentive for the creation of this unusual joint entity.
3 Research program
Today, there are still many challenges facing the development and deployment of autonomous vehicles to reach an exploitable and commercially viable solution. This is due equally to technical and non-technical challenges. In particular, the challenges include aspects related to the performance of the systems, their efficiency, their integrability and their costs, not to mention the legal, social and ethical aspects.
A classic robust autonomous navigation architecture should take into account additional aspects related to real-time implementation, functional redundancy, durability, certification and purely technical aspects related to the design and development of functional bricks as well.
As part of this project-team we focus mainly on developments related to automated sensor-based navigation. The other aspects are dealt with in the framework of collaborations and exchanges with other academic, industrial and institutional partners. Therefore, we focus on four research topics that are central to autonomous navigation and a major focus point for the scientific and technical communities: perception and understanding of the scene, decision systems and vehicle control, cooperative driving, and system modeling. These components are linked to one another through a complex yet straightforward architecture depicted in Fig. 1.

Obviously, the ability to perceive and understand the scene is the starting point of any navigation architecture, since it represents the first step of processing sensory data, capturing the world state, and creating the internal digital representations used by the decision system. The latter relies on these representations, on the ego-vehicle localization, on the positions of other road users, and on contextual data to build decision schemes which include maneuver planning and trajectory generation. The control-command loop is then responsible for the execution of the trajectories through the generation of control laws that drive the vehicle's actuators.
All these modules interact as shown in Fig. 1 and ensure the autonomous, but individual, navigation of a vehicle. However, it is important to study the behavior of these vehicles and their performance when their penetration rate (i.e., their ratio to total traffic) becomes critical. It is also very interesting to study the interactions between these vehicles and their potential cooperation. This is called cooperative driving; it can only take place in the presence of connectivity. The latter also ensures interaction and cooperation between autonomous vehicles and the infrastructure. The benefits of this type of cooperation are significant, both for the individual performance of each vehicle and for the overall performance of the vehicle fleet and of traffic in general.
3.1 Research Axis 1: Vision and 3D Perception for Scene Understanding
Navigation for mobile robotics requires a robust understanding of the environment from 2D or 3D sensors. Recent learning-based vision algorithms are now able to operate in highly cluttered environments, and tasks which were considered challenging — such as semantic segmentation or object detection — are soon to be solved to a certain extent. Still, the classical supervision paradigm, which relies on large annotated datasets, cannot encompass in practice all outdoor conditions and scenarios. There is therefore a need both to relax the requirement of massive annotations and to extend the perception capability to situations unseen or rarely seen in the training data.
To that aim, in this research axis, we investigate several broad topics. First, we transversely investigate learning with less supervision with applications to various perception tasks. Focusing on outdoor vision, we conduct research relying on data-driven or physics-guided paradigms to hallucinate complex lighting/weather conditions and compensate for missing data in the training sets. Because mobile robots evolve in the physical world we also investigate how vision algorithms can provide in-depth 3D understanding of the scene from images and/or LiDAR scans.
To evaluate our research as well as to foster reproducibility, we rely on relevant recent public datasets (nuScenes 51, Waymo Open 142, Woodscapes 153, SemanticKITTI 42, CADCD 130, etc.) and intend to openly share our research results.
3.1.1 Learning with less supervision
It is now widely accepted that supervised learning is a long-term dead end for computer vision. It relies on costly, human-biased annotations, which will soon become unsustainable given the ever-increasing size of datasets trying to cover data diversity. To circumvent the need for labels, strategies have been developed where a trained model is either (almost) directly applicable to unseen conditions (i.e., zero-/few-shot learning) or finetuned on a target domain (i.e., domain adaptation). Regarding the need for data, we investigate the automatic generation of data with Generative Adversarial Networks (GANs). Following recent work from the group members 94, 8, 145, 146, 132, 131, 151, 122, we contribute to these research directions, investigating the remaining scientific locks detailed below.
Regarding zero-shot learning, we observe that current methods are limited by the low amount of geometric information featured in the embeddings used as auxiliary information; we therefore boost this geometric information in the embeddings, for example by jointly using text and images. As for few-shot learning, we use high-contrast dictionary-based approaches where generalization is controlled by the level of sparsity. We are also interested in category-agnostic models that can operate on (e.g., detect, segment) arbitrary objects, or that can adapt online to information retrieved from databases of rare objects. We build upon recent progress in representation learning to enforce separable feature representations 96 while enforcing the orthogonality of features 144. Besides, we investigate both zero- and few-shot learning in the context of a complete perception pipeline, instead of focusing on individual vision tasks as is commonly done. In both cases, we will also investigate the use of multiple views and multiple modalities (using both images and LiDAR scans).
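As an illustration of the feature-orthogonality constraint mentioned above, the sketch below shows a soft orthogonality penalty that can be added to a task loss to encourage decorrelated, separable embeddings. It is a minimal sketch assuming PyTorch and a hypothetical `embeddings` tensor; it is not the exact formulation of 96 or 144.

```python
import torch

def soft_orthogonality_penalty(embeddings: torch.Tensor) -> torch.Tensor:
    """Soft orthogonality regularizer on a batch of embeddings.

    embeddings: (batch, dim) tensor. The penalty is the Frobenius norm of
    (F^T F / batch - I), which pushes feature dimensions towards being
    decorrelated; it is added to the main task loss with a small weight.
    """
    f = torch.nn.functional.normalize(embeddings, dim=1)
    gram = f.t() @ f / f.shape[0]
    eye = torch.eye(gram.shape[0], device=gram.device)
    return torch.linalg.norm(gram - eye, ord="fro")

# typical usage: loss = task_loss + 0.1 * soft_orthogonality_penalty(embeddings)
```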
Concerning domain adaptation, a common unsupervised strategy exploits the resemblance between a source and a target domain, using a self-supervised signal (e.g., pseudo-labels 106) to discover statistics in the target domain. However, when the domain gap is too large, model adaptation leads to sub-optimal minima 154, 54. To accommodate larger domain gaps, we investigate the discovery of new statistics with the support of several modalities (e.g., both 2D and 3D) for a variety of tasks (e.g., semantics, depth and normal estimation). Regarding representation learning, we focus on disentangling latent-space representations, working towards domain-invariant features by enforcing the orthogonality of domain features while enabling the discovery of exclusive task/domain features. We also study bridging zero-/few-shot learning with the domain adaptation paradigm, investigating the open domain adaptation setting that accounts for novel unseen domains, as in 114, 48.
Finally, to relax the need of training data we investigate automatic data generation with image-to-image (i2i) translations and style-transfer techniques, which both can help training in self-supervision settings 43, 131, 105. We observe that GANs commonly lack diversity and controllability in the generated data. To that aim, we study multi-domain setups 56 and automatic discovery of domain attributes 87 to foster controllable latent representations. We fight the lack of diversity in the generated datasets 43 with continuous 148 and multi-modal 131 strategies. Besides standard metrics, we also evaluate the quality of our generated data by training proxy vision tasks.
3.1.2 Vision in complex conditions
The wide variety and continuous nature of physical phenomena prevent any dataset from encompassing all lighting and weather conditions. Most outdoor datasets contain exclusively data recorded in clear daytime weather, while only a handful include adverse conditions. In fact, regardless of the recording complexity, some conditions are unlikely to be included in any dataset due to their inherent rarity (e.g., a snow storm at sunset). Because they lead to drastically varying appearances, we focus here on changing weather, seasons and lighting conditions, with the complementary goals of improving the robustness of vision algorithms and of automatically assessing failure cases.
Rather than agnostic data-driven models, we study training with a priori knowledge, with the ultimate goal to get representations invariant to these conditions. To compensate for the scarcity of data as well as to generalize training to unseen conditions, we rely on physics-guided learning to ease and accommodate the discovery of statistics. We rely here on physical guidance to discover the continuous underlying manifold where data lives 13. Using physical models to guide the training helps vision algorithms to accommodate better to partial or imbalanced distribution in the training set, as well as to better extrapolate to unseen conditions. We are focusing on invariant representations that can improve both the image translation setup and proxy vision tasks (segmentation, objects, etc.); relying on prior works from group members 13, 134, 16, 14.
Sometimes, weather conditions go beyond the sensing capabilities of the sensors; e.g., sun glare or very dark scenes can dramatically reduce the perception of standard cameras. In such cases, robustness is difficult to attain and the system should rather trigger an alert or fail gracefully. Unseen weather conditions encountered at runtime can be regarded as a dataset/distribution shift and can be addressed with predictive uncertainty estimation methods 127. Through a Bayesian lens, we study and devise strategies for the automatic assessment and detection of dataset drifts by leveraging approximate ensembles 116, 37, 70, observer networks 62, 88, and complementary information from other sensors 44. We rely on prior findings and works from group members 62, 70, 69, 16, 134.
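As a concrete illustration of ensemble-based drift detection, the sketch below computes the predictive entropy and mutual information from the softmax outputs of an ensemble; high mutual information is a common cue for distribution shift. This is a generic sketch with assumed array shapes, not the specific method of the cited works.

```python
import numpy as np

def ensemble_uncertainty(probs: np.ndarray, eps: float = 1e-12):
    """Uncertainty measures from an ensemble of softmax outputs.

    probs: (n_members, n_classes) class probabilities for one input.
    Returns (predictive_entropy, mutual_information). The mutual information
    (epistemic part) typically grows under dataset/distribution shift.
    """
    mean_p = probs.mean(axis=0)
    predictive_entropy = -np.sum(mean_p * np.log(mean_p + eps))
    expected_entropy = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return predictive_entropy, predictive_entropy - expected_entropy

# usage: flag an input as "shifted" when the mutual information exceeds a
# threshold calibrated on in-distribution validation data.
```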
In terms of applications, we evaluate the robustness of the proposed methods on the core vision tasks of recent adverse-weather datasets 138, 155, 142, 51, 44.
3.1.3 3D scene understanding
Robots still commonly lack the natural ability of humans to estimate the fine-grained geometry of a scene while understanding object interactions and reasoning beyond their field of view. To provide accurate geometry, 3D active sensors such as LiDARs are commonly used in autonomous driving 92, but they only provide a sparse sensing of the scene. In this third topic, we seek a fine-grained geometrical/semantic 3D understanding of the scene with or without 3D sensing, while also relying on frugal supervision. This topic benefits from prior work of group members 133, 46, 45, 94, 15, 93, 152, 100, 52.
Building on recent methods 46, 45, 143, 109, 82 that efficiently convolve point clouds, we aim at improving 3D tasks (detection, segmentation, etc.) by relying on contextual priors. Furthermore, we address 3D generative tasks like point cloud up-sampling, completion and generation, as well as surface reconstruction, which provide important navigation cues for robotics and can also assist the human driver in augmented-reality scenarios, particularly in adverse conditions. Temporally consecutive point clouds will also be leveraged to disambiguate occlusions and provide denser scene sensing 133, 52. Regarding richer scene representations, we study the intertwined relation of geometry and semantics 140 through the semantic scene completion task 15, 136, 135, which has gained growing interest lately 42.
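For illustration, the sketch below shows the basic voxelization step on which semantic scene completion typically operates: a sparse, labeled point cloud is discretized into a semantic voxel grid whose empty cells are the ones a completion network has to fill. Shapes and names are assumptions, not a description of a specific method cited above.

```python
import numpy as np

def voxelize_semantic(points, labels, grid_min, voxel_size, grid_shape, n_classes):
    """Discretize a labeled point cloud into a semantic voxel grid.

    points: (N, 3) lidar points, labels: (N,) integer semantic ids,
    grid_shape: tuple of 3 ints. Each voxel receives the majority label of
    the points it contains; empty voxels keep -1 (unknown, i.e., what scene
    completion must infer).
    """
    grid = -np.ones(grid_shape, dtype=np.int64)
    idx = np.floor((points - grid_min) / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    idx, labels = idx[inside], labels[inside].astype(int)
    counts = np.zeros(grid_shape + (n_classes,), dtype=np.int64)
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2], labels), 1)
    occupied = counts.sum(axis=-1) > 0
    grid[occupied] = counts.argmax(axis=-1)[occupied]
    return grid
```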
Another line of study is the interaction between modalities of different natures for scene understanding, in particular the complementarity of 2D images and 3D scans. We study how multi-modal features can jointly improve the performance of core tasks, but also how they can improve the performance of single modalities by exploiting cross-modal features as self-supervision 94, 8.
Besides the use of 3D devices, we also investigate 3D understanding from 2D images. As they originate from passive sensors, images carry less obvious geometrical cues but humans are still able to estimate depth and understand 3D from a photograph, heavily reasoning on learned priors. We study here challenging tasks like scene reconstruction or 6-DOF localization, which can be conveniently self-supervised from either 3D sensing or sequential data.
3.2 Research Axis 2: Localization & Mapping
Vehicle localization and environment mapping are pillars of the perception task for an autonomous vehicle. While vehicle localization ensures the global positioning of the vehicle in its environment and its local positioning with regard to the road and to close road features, environment mapping contributes to building a useful internal representation that is exploited by the decision system.
Inria and Valeo teams have been working, separately and jointly, on localization and mapping solutions for over 15 years. Many algorithms have been developed and have shown their effectiveness in terms of the accuracy, precision and safety expectations of autonomous driving. However, integrity, safety, data size and cost are still challenging points that ASTRA wants to address, while pursuing research on localization and pose registration using single-/multi-sensor approaches.
3.2.1 Localization and Map Integrity
Many localization methods have been developed, mainly based on Particle Filters and GraphSLAM together with a point cloud representation of the environment. These solutions mainly focus on the accuracy and precision requirements of the pose estimates. Yet, the integrity of the localization and the integrity of the maps used for localization are critical to ensure a safe use of the localization system for autonomous driving. State-of-the-art methods on localization integrity usually proceed by: 1. employing Fault Detection and Isolation (FDI) algorithms to remove outliers from the input data and 2. computing Protection Levels (PL) to qualify the integrity zone 103, 86, 104, or by calculating the Protection Levels without FDI, such as in 112, 39. Map integrity is highly related to the feasibility of finding a distinctive match when using the map for localization. Indeed, the map can be explored by an algorithm that aims to identify the zones or sections that represent a potential ambiguity for matching algorithms, such as in 89.
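The sketch below gives a schematic version of the two ingredients listed above: a chi-square innovation test for fault detection and isolation, and a simplified protection level derived from the position covariance. It assumes Gaussian error models and is only an illustration of the principle, not the algorithms of the cited works.

```python
import numpy as np
from scipy.stats import chi2, norm

def is_faulty(residual, S, alpha=0.01):
    """FDI step: chi-square test on a measurement innovation.

    residual: (m,) innovation vector, S: (m, m) innovation covariance.
    Returns True when the measurement should be isolated as an outlier.
    """
    d2 = residual @ np.linalg.solve(S, residual)   # squared Mahalanobis distance
    return d2 > chi2.ppf(1.0 - alpha, df=residual.size)

def protection_level(P, integrity_risk=1e-7):
    """Simplified horizontal protection level from a 2x2 position covariance.

    Scales the largest position standard deviation by the Gaussian quantile
    matching the allocated integrity risk (fault-free, overbounded case).
    """
    sigma_max = np.sqrt(np.max(np.linalg.eigvalsh(P)))
    return norm.ppf(1.0 - integrity_risk / 2.0) * sigma_max
```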
3.2.2 Online Alignment of Multiple Map Layers
A wide diversity of maps dedicated to vehicle localization is nowadays available. These maps differ from each other in several key aspects, the most important of which are: the structure of the representation (e.g., grid, graph, etc.), the underlying theory used to represent the information about the environment (e.g., occupancy probabilities, landmarks, etc.), and the sensor used to collect the information (LiDAR, camera, etc.). Map providers, such as Here and TomTom, usually provide maps with different layers that encode different pieces of information relevant to ADS features (road model, lanes, and road features). Valeo, having the advantage of being the leader in automotive LiDAR sensors, wants to enhance its ADS solution arsenal as a map provider by offering a map service based on laser point clouds and potentially other information layers relevant to ADS. For this purpose it is important to find correspondences and align the different map layers with maps from other map providers. This subject is addressed by considering semantic information that can be extracted from heterogeneous sensor and map data, such as in [9] and [10].
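As an illustration of such an alignment, the sketch below computes a least-squares rigid transform between two sets of matched 2D semantic landmarks extracted from two map layers (the classical Kabsch/Procrustes solution). Correspondences are assumed given; real map alignment must also handle data association, outliers and scale.

```python
import numpy as np

def rigid_align_2d(src, dst):
    """Rigid alignment (rotation R, translation t) between matched 2D landmarks.

    src, dst: (N, 2) arrays with row-wise correspondences, e.g., traffic signs
    or pole-like features detected in two map layers.
    Returns (R, t) minimizing ||dst - (src @ R.T + t)|| in the least-squares sense.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```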
3.2.3 Georeferencing of maps without RTK GNSS and IMU
Highly accurate maps used for AD localization are usually built using a very expensive fusion box that includes a very precise RTK-GNSS receiver and a first-grade IMU. These map-building solutions are very expensive and require the deployment of RTK bases in the environment to receive corrections, which implies extra cost. The idea of this subject is to use available sensors (such as standard GNSS, IMU, CAN, LiDAR, camera) and possibly maps from other providers to build a highly accurate (in the global reference frame) map based on point clouds. Different inputs from sensors and maps can be combined with an asynchronous fusion method to build an accurate estimate [11]. The method to achieve this goal constitutes the subject of this study.
3.3 Research Axis 3: Decision Making, Motion Planning & Vehicle Control
Decision-making, maneuver and motion planning, and vehicle control are vital components of the intelligent vehicle. These modules act as a bridge connecting the perception subsystem and the low-level control subsystem in charge of executing the motion. We address these issues by covering various strategies for designing decision-making, trajectory planning and tracking control, as well as shared human-automation driving, to adapt to the different levels of the automated driving system while accounting for the driver profile.
The challenges related to decision making and path planning are mainly related to four distinct elements:
- Errors and uncertainties introduced by the perception subsystems
- Environment static and dynamic occlusions
- Lack of understanding and prediction of other road users behaviors
- Simultaneous consideration of several constraints related to: vehicle dynamics, energy consumption, passenger comfort, compliance with driving rules...
Different approaches are investigated in the state of the art, addressing one or several of these issues, but, to our knowledge, none is capable of addressing all of them simultaneously. More specifically, in most approaches decision and planning are dealt with separately or in a way that favors one of them. Approaches based on Markov decision processes (MDP, POMDP, ...), path-speed profiles, ontologies, or artificial potential fields coupled to MPC controllers show interesting results in dedicated environments or in specific situations; however, most of them do not properly tackle specific issues such as intention and behavior prediction, interactions, or multi-criteria real-time optimal maneuver decision.
While continuing the investigation of end-to-end driving approaches based on (inverse-)reinforcement-learning decision-making, we keep improving the path-planning methods already developed by both teams at RITS and DAR: Reachable Interaction Sets 41 and Artificial Potential Fields (coupled to MPC control), which are designed for obstacle avoidance, as well as traditional path planning methods. Optimal methods based on convex optimization and cubic splines are investigated at DAR to design optimized and robust trajectories. More specifically, we mainly focus on the following three scientific topics (detailed in the next sections):
- Maneuvers and trajectories prediction of surrounding road users
- Schemes for ego-vehicle actions and maneuvers decision making and motion planning
- Motion planning and trajectory generation
3.3.1 Maneuver and trajectory prediction
To achieve safe and comfortable driving, an autonomous driving system must have an accurate knowledge of the future motions of all other traffic agents surrounding the autonomous vehicle, such as cars, pedestrians, cyclists, etc. Motion prediction is thus a key task for autonomous vehicles. Several motion prediction methods have been studied in the literature. Lefèvre et al. 107 propose a classification into three levels with an increasing degree of abstraction: physics-based models, maneuver-based models and interaction-aware models.
- Physics-based motion models. They consider that the motion of vehicles only depends on the laws of physics. The future motion is predicted using dynamic and kinematic models linking control inputs, car properties and external conditions (a minimal kinematic sketch is given after this list). These models are limited to short-term prediction and are unable to anticipate any change in the motion of the car caused by the execution of a particular maneuver.
- Maneuver-based motion models. They consider that the future motion of a vehicle also depends on the maneuver that the driver intends to perform. The future motion of a vehicle on the road network corresponds to a series of maneuvers executed independently from the other vehicles. These models do not adapt well to different road layouts.
- Interaction-aware motion models. They take into account the inter-dependencies between vehicles' maneuvers. These models require computing all the potential trajectories of the vehicles, which is computationally expensive and not compatible with real-time risk assessment. Valeo has filed a patent to overcome this issue 149. This patented method is being developed in order to be tested in the automated driving prototypes.
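For illustration, the sketch below implements the simplest member of the first category: a constant turn rate and velocity (CTRV) roll-out of the kinematic state, which is a reasonable short-term predictor but ignores maneuvers and interactions. Parameter values are placeholders.

```python
import numpy as np

def ctrv_predict(x, y, yaw, v, yaw_rate, horizon=3.0, dt=0.1):
    """Physics-based prediction with a constant turn rate and velocity model.

    Rolls the state forward for `horizon` seconds and returns the predicted
    (x, y) positions. Only valid for short-term prediction.
    """
    traj = []
    for _ in range(int(horizon / dt)):
        if abs(yaw_rate) > 1e-4:
            x += v / yaw_rate * (np.sin(yaw + yaw_rate * dt) - np.sin(yaw))
            y += v / yaw_rate * (np.cos(yaw) - np.cos(yaw + yaw_rate * dt))
        else:                        # straight-line motion
            x += v * np.cos(yaw) * dt
            y += v * np.sin(yaw) * dt
        yaw += yaw_rate * dt
        traj.append((x, y))
    return np.array(traj)
```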
Fig. 2 shows a comparison of the different models, including their challenges and the algorithms used.
Figure 2: Motion prediction models comparison.
Valeo has considered these categories in the development of its automated driving prototypes Cruise4U and Drive4U. The physics-based model is used in situations where there is no knowledge about the route geometry (for example in a large roundabout without lanes), while the maneuver-based model is used in highway and urban environments when the road topology is available from an HD map or the Valeo Drive4U Locate map.
In the last few years, machine-learning-based algorithms, and particularly deep learning, have been used to overcome the limits of the current prediction methods. Human motion trajectory prediction has been addressed in the literature 47, 137. Large amounts of naturalistic road-user trajectories in different contexts (highways 60, 61, 97 or urban 51, 53), needed to train and evaluate deep learning methods, are now available. Our first works 12, 121, 11, 119, taking as input the track history of a target vehicle and of its surrounding moving road users, obtained accurate prediction results for the target vehicle motion on highways, and an extension 120, including the static scene structure, has been proposed for an urban context. Valeo is involved in this research area with activities in the prediction of other road users and of the ego-vehicle trajectory. Different approaches have been implemented and tested in simulation and on test cars 50, 49.
However, work still has to be done in this domain in terms of performance, robustness and generalization before such methods can be used in real autonomous driving applications. In fact, the behavior of a human driver also depends on the contextual knowledge of the environment (speed limits, traffic density, day of the week, visibility, road equipment, driver's country, etc.) and on their goal 157. We plan to include these contextual cues in a prediction method, which should also compute multiple plausible trajectories representing the driver's diverse possible behaviors, provide uncertainty estimates on the predictions, carry out multi-agent trajectory forecasts and be usable in any environment. This will require the use of a more complete dataset 156 composed of various driving scenarios collected from different countries, which may be completed by our own dataset collected with the help of Valeo if necessary. This work is done in collaboration with Itheri Yahiaoui from Reims University and within the starting PhD thesis of Amina Ghoul, funded by the SAMBA project.
3.3.2 Ego-vehicle actions and maneuvers decision making
The most important component of an autonomous vehicle navigation system is the decision system that elaborates the upcoming tactical actions and maneuvers to be executed. The selection of the optimal maneuver should be the result of the relevant and simultaneous consideration of several factors, mainly: safety and risk assessment, respect of the dynamic constraints of the vehicle and its controllability, uncertainties related to the perception outputs, uncertain nearby interactions with/between close road users, and finally the criteria related to the navigation objectives, such as journey duration minimization, driver/passenger comfort, fuel/energy consumption minimization, respect of driving rules, etc., the latter being expressed in terms of kinematic constraints.
In the literature, there are very few approaches describing unified decision architectures capable of taking into account all of the considerations mentioned above. Most approaches develop planning schemes that separate motion generation from decision making. In these approaches, motion planning (including reactive planning) usually exploits geometry, configuration spaces and other optimization techniques. Decision-making schemes rely on AI logic-based approaches such as rule-based systems 126, decision trees 58, 110, Finite State Machines 158, utility-based approaches, Bayesian Networks and Markov-Decision-Process-like approaches (MDP, POMDP, ...), AI heuristic algorithms (SVMs and evolutionary methods), AI approximate reasoning methods (fuzzy logic) and Artificial Neural Networks (CNNs, reinforcement learning, ...) 111, 147, 55. The authors of 59 propose an architecture that provides an optimization of the motion generation using the decision-making function as the evaluation function, the aggregation of fuzzy logic and belief theory allowing decision making on heterogeneous criteria and uncertain data.
In the coming period we will work on unified architectures that tackle decision making and motion planning simultaneously. Very likely, one approach will focus on deep learning techniques based on reinforcement learning and inverse reinforcement learning, where we devise a (dense) reward function suitable for a large class of behavioural planning tasks. More generally, we will investigate model-free and model-based approaches, where some interesting directions have already been initiated and have shown interesting results, such as in 108. In particular, in order to better evaluate safety costs, we will take as input the output of the maneuver and trajectory prediction system described in the previous section, which has the advantage of better estimating the road users' trajectories thanks to attention mechanisms that encode interactions and behaviors. This work is done within the PhD of Mr. Islem Kobbi.
A different approach will continue investigating a utility-based approach, which is easier to explain. First results were already obtained through the work developed in the framework of the thesis of Mr. Karim Essalmi; it is based on the Conservation of Resources theory, which we adapted to decision making.
3.3.3 Trajectory planning
The state of the art in motion planning techniques has mainly focused on methods generating the geometric path first, and then applying a speed profile to the generated path. To mention just a few, this approach has been tackled by the following methods (or combinations thereof): interpolating-curve-based 78, 79, graph-search-based 113, sampling-based 98 and optimization-based 83.
From the motion planning point of view, the inclusion of human factors is a key element for increasing the acceptance of the automated vehicle behavior and for providing a more human-like response. For that purpose, the use of data from real drivers should be envisaged to better define the motion constraints in dynamic environments, allowing the trajectories to be adapted to any specific road scenario (intersections, roundabouts, merging, overtaking, lane driving, etc.). For instance, motion constraints such as longitudinal and lateral accelerations as well as jerks should be properly taken into account in the generation of a human-like speed profile, as introduced in 38.
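As a minimal illustration of such constraints, the sketch below generates a longitudinal speed profile whose acceleration and jerk are bounded, by simple forward integration towards a target speed. It is a comfort-oriented heuristic under assumed bounds, not the optimization-based generation of the cited work.

```python
import numpy as np

def jerk_limited_speed_profile(v0, v_target, horizon=15.0,
                               a_max=1.5, j_max=0.9, dt=0.05):
    """Speed profile bounded in acceleration (a_max) and jerk (j_max).

    The commanded acceleration tracks a braking curve so that it reaches
    zero approximately when the speed reaches v_target, yielding a smooth,
    human-like profile.
    """
    v, a, profile = float(v0), 0.0, [float(v0)]
    for _ in range(int(horizon / dt)):
        a_brake = np.sqrt(max(2.0 * j_max * abs(v_target - v), 0.0))
        a_des = np.sign(v_target - v) * min(a_max, a_brake)
        a += np.clip(a_des - a, -j_max * dt, j_max * dt)   # jerk limit
        v += a * dt
        profile.append(v)
    return np.array(profile)
```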
Furthermore, driving factors such as energy consumption or traffic occupancy should be included in the multi-criteria optimization to better adapt to any driving situation. This would help to reduce the driving time (such as the commute time) or even to reduce the energy consumption and the stress of both driver and passengers by reducing traffic jams and the corresponding repetitive acceleration and braking maneuvers.
Finally, this planning module must meet real-time execution constraints to ensure safety. Thus, a complete and fast motion planning approach is needed; it should take functional safety into account to generate, in real time, collision-free trajectories, considering the different interactions with the surrounding vehicles, to be tracked by the control. For that purpose, the work presented in 40 will be extended in order to consider the interactions among the several surrounding road users jointly rather than as individual interactions, investigating the risk assessment metric that is most appropriate for each specific scenario.
3.3.4 Robust control of automated vehicles
In order to execute a planned trajectory or a reactive maneuver safely, it is essential that the vehicle executes these trajectories taking into account the vehicle dynamics while ensuring safe, stable and comfortable maneuvers. A tremendous effort was deployed over the last 10 years by the team partners in the area of motion planning and intelligent control. Seven PhD theses were dedicated to the important problem of path and motion planning as well as to the corresponding control-command, all addressing the navigation of autonomous vehicles in structured but complex environments. Harsh configurations such as intersections and roundabouts need specific planning approaches taking into account the geometry and the topology of the places, but also the dynamic and kinematic constraints of each ego-vehicle as well as the safety and comfort constraints.
Previously, the RITS team (Inria) also implemented specific control algorithms dedicated to specific road maneuvers such as overtaking 128 and parking 129. Control laws were designed with theoretical proofs of stability and optimality. Very interesting results were obtained in two major domains, mainly related to the controllability and stability of complex dynamic systems, which are key aspects when it comes to designing intelligent control algorithms for vehicles:
- Plug&Play control for highly non-linear systems: stability analysis of autonomous vehicles. The developed Plug&Play control is able to provide stability responses for autonomous vehicles under uncontrolled circumstances, including modifications of the input/output sensors. The former RITS team was among the very first to investigate these theories for automotive applications; they were investigated in the PhD theses of F. Navas 124 and I. Mahtout 115. The approach deals with the reconfiguration of existing controllers whenever changes are introduced in the system (gain scheduling), the online closed-loop identification of the vehicle and its components, and automatic control reconfiguration to achieve optimal performance 125, 10.
- Fractional calculus for cooperative car-following control. A car-following gap-regulation controller using fractional-order calculus has been developed; it has been proven to yield a more accurate description of real processes and to ensure string stability of the platoons of vehicles involved in Cooperative Adaptive Cruise Control 66. In an effort to combine fractional-calculus robust control with Plug&Play control, a multi-model adaptive control (MMAC) algorithm based on Youla-Kucera (YK) theory was proposed to deal with heterogeneity in cooperative adaptive cruise control (CACC) systems 67. A simplified, integer-order sketch of such a gap-regulation law is given after this list.
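For illustration only, the sketch below shows a simplified, integer-order CACC gap-regulation law with a constant time-headway spacing policy and a V2V feed-forward term; the actual controllers mentioned above rely on fractional-order calculus and Youla-Kucera parametrization, and all gains here are placeholder values.

```python
def cacc_acceleration(gap, ego_speed, lead_speed, lead_accel,
                      headway=0.6, standstill_gap=5.0,
                      kp=0.45, kd=0.25, kff=1.0):
    """Simplified CACC gap regulation (constant time-headway policy).

    gap: measured distance to the preceding vehicle [m]; lead_accel is the
    leader acceleration received over V2V (the cooperative feed-forward).
    Returns the commanded ego acceleration [m/s^2].
    """
    desired_gap = standstill_gap + headway * ego_speed
    gap_error = gap - desired_gap            # positive when too far
    speed_error = lead_speed - ego_speed
    return kp * gap_error + kd * speed_error + kff * lead_accel
```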
ASTRA will evolve by introducing intelligent cooperation between vehicles while, at the same time, driving the vehicle autonomously in a human-driver-like way (increasing driver acceptability) but with the safety and accuracy of optimized control algorithms. To achieve this, we will rely on the approaches developed so far, but no further research will be conducted on this topic during the lifetime of the joint team. This is mainly due to the absence of a senior researcher at ASTRA capable of carrying this topic independently at a high level. This also motivates the team to seek to recruit an experienced researcher in the field of the control of dynamic systems, a crucial domain for a team willing to develop and deploy advanced control architectures on real mobile platforms. In the meantime it would be very interesting to envisage collaborations with other Inria teams working on similar topics. A perfect example is the DISCO team (Inria Saclay Research Center, head: Catherine Bonnet), whose research interests cover, among others, the realization and reduction of infinite-dimensional systems and robust control.
This research direction interacts strongly with the research axis on large-scale modeling and deployment of mobility systems in Smart Cities. The former will be essential when developing control algorithms that rely on a very small communication delay to obtain a stable latency and design stable systems. The latter will serve to analyze the effect of a developed algorithm on the traffic flow, moving from the validation of a proposed controller on a limited number of vehicles to its study from a macroscopic perspective.
3.4 Research Axis 4: Large scale modeling and deployment of mobility systems in Smart Cities
While axes 1 to 3 deal with subjects related to the on-board intelligence of an “individual” intelligent vehicle and its autonomous navigation, axis 4 intervenes when it comes to many communicating, autonomous or automated vehicles but also when it comes to the cooperation with the static environment (infrastructure). The latter may contain and integrate: roadside and monitoring sensors (Cooperative Perception Services), signaling, communication infrastructures, cloud... The research concerns in particular the deployment of equipped vehicles on a large scale in a road or urban environment.
The research objectives are twofold.
- First, the focus is on the modeling of systems comprising a large number of vehicles, often seen as random entities. The methodology is mainly to explore the links between large random systems and statistical physics. This approach proves very powerful, both for macroscopic (fleet management 64) and microscopic (car-level description of traffic, formation of jams 72, 139, 77, 76) analysis. The general setting is mathematical modelling of large systems (typically in the so-called thermodynamical limit), without any a priori restriction: networks, random graphs, etc. One often aims at establishing a classification based on criteria of a twofold nature: quantitative (performance, throughput, etc) and qualitative (stability, asymptotic behavior, phase transition, complexity).
- The second objective concerns the cooperation of these communicating entities in order to address the efficiency and safety of mobility. This cooperation takes several forms. Direct or indirect communications (V2X) are dedicated to maneuver coordination, taming traffic and improving its efficiency (cf. section 4.4.2), platooning, safety-critical distributed coordination (cf. 4.4.3)... Crowdsourcing is another aspect that could be used for traffic modeling and prediction (cf. 4.4.1), environment augmented mapping, or global vehicle localization. A PhD student will be hired this year to work on this precise subject (cf. 4.5).
Beside this core methodology, other past activities of interest include discrete event simulation 57, 102 and resource allocation for ITS 101, 84, 85.
Finally, axis 4 does not represent a structural unit like the other axes. Its objective is to deal with the problem of scaling, deployment and multi-vehicle cooperation in a global and systemic way. On the substance, methods and theories of modeling will be investigated and the design of secure telecommunication systems will be elaborated. These models and systems are intended to be implemented in more global systems and architectures. They will interact with the other axes through these architectures and will respond in a targeted way to needs; for example, whenever a need for probabilistic modeling is expressed (e.g. section 4.5).
3.4.1 Traffic prediction in urban settings: detecting extreme events
A probabilistic forecasting method that can provide predictions of urban traffic at city level, accurate in the short term and meaningful for a horizon of up to several hours, has been devised in the team 75, 71, 74, 117, 118, 73, 5, in collaboration with C. Furtlehner (TAU, Inria Saclay). It is designed to leverage spatial and temporal dependencies and can deal with missing data, both for training and for running the model. The method consists in learning a sparse Gaussian copula of traffic variables, compatible with the Gaussian belief propagation algorithm. Results of tests performed on three urban datasets show a very good ability to predict flow variables and reasonably good performance on speed variables.
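The sketch below illustrates the copula idea underlying this model: each traffic variable is mapped to Gaussian scores through its empirical CDF, after which a sparse Gaussian model (amenable to Gaussian belief propagation) is fitted on the transformed variables. It is a schematic preprocessing step, not the team's full method.

```python
import numpy as np
from scipy.stats import norm, rankdata

def to_gaussian_scores(x):
    """Map one traffic variable (e.g., the flow of a detector) to Gaussian
    scores via its empirical CDF, i.e., the nonparametric marginal of a
    Gaussian copula. Missing values (NaN) are preserved so that the model
    can handle them downstream.
    """
    z = np.full(x.shape, np.nan, dtype=float)
    obs = ~np.isnan(x)
    ranks = rankdata(x[obs]) / (obs.sum() + 1.0)   # empirical CDF in (0, 1)
    z[obs] = norm.ppf(ranks)
    return z
```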
When investigating the output of the model, some rare but large errors are noticeable. It turns out that this corresponds to detectors which, for a long period, send values completely at odds with the ones observed during training. These badly behaving detectors may either correspond to corrupted ones, or to drastic changes of the traffic conditions on the corresponding segment, because of road work or accidents for instance.
One way of examining these events has been proposed in 90, and we plan to investigate whether it can be used to improve models. Separating sensor failure from extremal events is even more important, and this is what we plan to investigate in a PhD thesis, by careful analysis of the correlation structure of the model.
3.4.2 Taming highway traffic using cooperative automated vehicles
Several authors 68, 63, 150, 81 have suggested that it is possible to use a small proportion of automated vehicles to regulate highway traffic. These studies are set in a traffic regime which exhibits string instability, meaning, in terms of transfer function, that any excitation at a frequency below a certain limit is amplified. We are interested here in a slightly different setting, where the reaction time of human drivers is taken into account. We have shown 65 that the introduction of this delay leads to a non-rational transfer function, implying in particular that the system is not always stable. We have proposed a complete, self-contained proof of the stability conditions, based on classical complex analysis. Moreover, we bring to light a phase transition with a new propagation regime, named partial string stability, situated between string stability and string instability.
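For illustration, consider a generic linear car-following law with driver reaction time \tau, speed gain \alpha, spacing gain \beta and desired spacing d (an illustrative form, not necessarily the exact model analyzed in 65). In the Laplace domain, the perturbation of vehicle n relates to that of its leader through a transfer function made non-rational by the delay term:

```latex
\[
  \ddot{x}_n(t) \;=\; \alpha\,\bigl[\dot{x}_{n-1}(t-\tau) - \dot{x}_n(t-\tau)\bigr]
                \;+\; \beta\,\bigl[x_{n-1}(t-\tau) - x_n(t-\tau) - d\bigr],
\]
\[
  G(s) \;=\; \frac{X_n(s)}{X_{n-1}(s)}
       \;=\; \frac{(\alpha s + \beta)\,e^{-\tau s}}
                  {s^{2} + (\alpha s + \beta)\,e^{-\tau s}},
  \qquad
  \text{string stability} \;\Longleftrightarrow\; |G(i\omega)| \le 1 \;\; \forall\,\omega .
\]
```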
With these foundations established, the next steps are to devise a traffic stabilization scheme by means of a fleet of cooperative automated vehicles. However, contrary to the work in 68, our approach is based on a car-following model with reaction-time delay, rather than on a first order fluid model. The continuation of these studies will concern shock wave analysis and adequate traffic-stabilizing control strategies.
3.4.3 Crowdsourced mapping
The deployment of intelligent and connected vehicles, equipped with increasingly sophisticated equipment and capable of sharing accurate positions and trajectories, is expected to lead to a substantial improvement of road safety and traffic efficiency. Nevertheless, in order to guarantee accurate positioning in all conditions, including in dense zones where GNSS signals can get degraded by multi-path effects, sensor-equipped vehicles will likely need to use precise maps of the environment to support their localization algorithms. Crowdsourced mapping represents a cost-effective solution to this problem: it consists in using measurements retrieved by multiple production vehicles equipped with standard sensors in order to build an accurate map of landmarks and maintain it up to date in realistic, long-term scenarios. Existing state-of-the-art crowdsourced mapping solutions rely on triangulation or graph-based optimization, where the trade-offs between map quality and computational scalability are still to be investigated. We propose to extend the work of 141 to improve scalability. One possible approach is to rely on a Gaussian belief propagation algorithm to estimate and update the positions of the landmarks and of the vehicles, along with their corresponding uncertainties.
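As a minimal illustration of the landmark-update step in such a crowdsourced map, the sketch below fuses several noisy observations of the same landmark in information form, which is the elementary operation that a Gaussian-belief-propagation back-end repeats over the whole factor graph. Shapes and noise models are assumptions.

```python
import numpy as np

def fuse_landmark_observations(means, covariances):
    """Fuse noisy observations of one landmark reported by several vehicles.

    means: list of (2,) position estimates, covariances: list of (2, 2)
    uncertainties. Returns the fused mean and covariance (information-form
    update); applying it incrementally keeps the crowdsourced map up to date.
    """
    info_matrix = np.zeros((2, 2))
    info_vector = np.zeros(2)
    for mu, cov in zip(means, covariances):
        w = np.linalg.inv(cov)
        info_matrix += w
        info_vector += w @ np.asarray(mu)
    fused_cov = np.linalg.inv(info_matrix)
    return fused_cov @ info_vector, fused_cov
```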
3.4.4 Cooperative automated driving involving V2X communications
Automated driving on complex shared roads requires cooperation among road entities in terms of cooperative control, cooperative perception, and cooperative path planning. This poses new research challenges that did not exist in the domain of vehicular communications, e.g., communications for cooperative automated driving and intention-aware communications. Based on our experience and know-how in mobile telecommunications, networking, and robotics, the ASTRA team will conduct research activities within the following domains:
- Safety critical V2V communications.
- Safety critical distributed coordination.
- Safety and performance guided V2X communication and data processing
- Vehicles' behaviors and intention-aware communications
4 Application domains
The aim of the project-team is to tackle the challenges of and provide breakthrough solutions for autonomous and connected mobility. It covers the improvement of the safety, availability and performance of ADAS (Advanced Driver Assistance Systems) and of L3 automated systems (Traffic Jam Pilot and Highway Pilot) for privately owned vehicles, as well as L4 automated systems including robotaxis and automated transportation systems such as autonomous shuttles. Enabled by 5G and V2X connectivity in general, the extension to cooperative automated driving and the Smart City will also be considered. More and more cities and highways are equipping their infrastructures with sensors that can enable extended and shared perception. During the project, the developed solutions are tested on these applications. Valeo's Automated Driving roadmap addresses them through three programs: the Cruise4U program for multiple-carriageway roads and highways, Drive4U for urban environments including autonomous shuttles, and eDeliver4U for last-mile delivery, as shown in Fig. 3.



The Cruise4U and Drive4U programs have allowed Valeo to perform open-road experiments around the world, with more than 200,000 km accumulated in real conditions covering plenty of use cases.
Fig. 4 shows a part of the Cruise4U experiments, while Fig. 5 shows world premieres: Drive4U open-road experiments using only Valeo serial-production sensors, operating in Paris, Las Vegas and Tokyo.
A dedicated Automated Driving platform for the project team is under discussion in order to allow quick and easy integration, tests and validations of the Joint team developments.
5 Highlights of the year
- Raoul de Charette was promoted to “Directeur de Recherche”, classe 2.
- Raoul de Charette co-founded the 1st African Computer Vision Summer School (ACVSS), held at Microsoft Research, Nairobi, Kenya, on July 14-24, 2024. Webpage. The event included 17 renowned speakers from MIT, Google, UC Berkeley, DeepMind, Inria, Microsoft and more, and was officially sponsored by Inria.
- Raoul de Charette is a fellow of PRAIRIE-PSAI.
- Fawzi Nashashibi has been nominated Program Chair of the 2025 IEEE Intelligent Vehicles Symposium (Cluj, Romania)
5.1 Awards
- PaSCo 19 was nominated as Best Paper Award Candidate of CVPR 2024 (0.2% of the submissions). Authors: Anh-Quan Cao (Inria), Angela Dai (TUM), Raoul de Charette (Inria).
6 New software, platforms, open data
6.1 Open data
Several works were released as open source in 2024:
- MaterialPalette. A method for extraction of Materials from a Single Image. https://github.com/astra-vision/MaterialPalette
- FaMix. A Simple Recipe for Language-guided Domain Generalized Segmentation. https://github.com/astra-vision/FAMix
- PaSCo. Urban 3D Panoptic Scene Completion with Uncertainty Awareness. https://github.com/astra-vision/PaSCo
- UMBRAE. Unified Multimodal Brain Decoding. https://github.com/weihaox/UMBRAE
- LatteCLIP. Unsupervised CLIP Fine-Tuning via LMM-Synthetic Texts. https://github.com/astra-vision/LatteCLIP
- ProLIP. CLIP's Visual Embedding Projector is a Few-shot Cornucopia. https://github.com/astra-vision/ProLIP
- PODA/PIDA. Domain Adaptation with a Single Vision-Language Embedding. https://github.com/astra-vision/PODA
- MaterialTransform. Material Transforms from Disentangled NeRF Representations. https://github.com/astra-vision/BRDFTransform
7 New results
7.1 3D scene reconstruction and completion
Participants: Alexandre Boulch, Anh-Quan Cao, Raoul de Charette, Matteo Marengo, Renaud Marlet, Tetiana Martyniuk, Gilles Puy.
This research direction is a long-standing axis in the ASTRA team, and previously in RITS. It consists in reconstructing the 3D geometry of a scene (possibly with semantics) from a partial 3D input (e.g., a lidar scan) or a 2D input (e.g., an image). This year, most of the progress was obtained within the PhD thesis of Anh-Quan Cao. In prior works we introduced novel techniques to infer the semantically labeled geometry of a scene from an image 2 or from lidar scans 136, a task also referred to as Semantic Scene Completion (SSC). In particular, we explored 3D SSC, 3D completion and 3D reconstruction, all from sparse 3D measurements.
The applicability of 3D SSC to robotics is limited because it does not provide information about the spatial confidence of the reconstruction. In collaboration with TUM, and with Anh-Quan Cao, we extended the SSC task and first proposed the task of panoptic scene completion (PSC), which, along with geometry and semantics, allows the detection of instances. Our method, coined PaSCo, formulates the task as uncertainty-aware PSC, relying on subnetworks learning different feature representations. PaSCo therefore allows estimating the spatial and semantic uncertainty of a scene, which has important applications for autonomous driving.
To benefit from the latest diffusion networks, which model denoising as a Markov process, in the thesis of Tetiana Martyniuk we also explored diffusion on points and proposed a novel method, LiDPM, which allowed us to identify major flaws in the literature. In a complementary direction, we also investigated how vision foundation models can be distilled for lidar processing 28.
We also investigated the hybridization of Gaussian Splatting 95 with geometric primitives for 3D reconstruction from a set of images. This work was conducted by Matteo Marengo (intern).
PaSCo 19 (webpage) was accepted at CVPR 2024, a top-tier computer vision conference, and was nominated as Best Paper Award candidate (0.2% of the submissions). LiDPM was accepted at the 2025 Intelligent Vehicles Symposium (soon online). Both works are shared open source. Lastly, Anh-Quan Cao defended his PhD thesis in December 2024 32, revolving around 3D scene understanding.
7.2 Scene understanding with vision and language
Participants: Yasser Benigmim, Alexandre Boulch, Andrei Bursuc, Anh-Quan Cao, Raoul de Charette, Mohammad Fahes, Tuan-Hung Vu.
In the last five years, we have been exploring the interaction of vision and language to improve downstream visual scene understanding tasks. This year's work focused primarily on semantic segmentation and object classification, mainly within the thesis of Mohammad Fahes, but also within the PhD visit of Yasser Benigmim and the PhD of Anh-Quan Cao. In particular, we have explored how to adapt vision-language foundation models (VLMs) in a robust and parameter-efficient manner.
In a first line of work, within the scope of Mohammad's thesis, we extended PODA, a prior model that demonstrated the ability to adapt to new domains by leveraging natural language 3. In our extension 33, submitted to a journal, we have shown that adaptation can be performed with any agnostic visual or language input, which might be a prompt, a learned concept, or an image. Leveraging either of them, we learn an affine transformation of the features with only a few thousand parameters, reaching extremely fast training (a few seconds) for semantic segmentation. This showcases the benefit of this line of work, which has already led to a few publications: 3, 22.
Since the vast majority of the literature relies on the CLIP model, we also conducted thorough experimental evaluations leading to two key observations: a) minimal adaptation of the visual-to-text projection layer can lead to a significant downstream boost; b) while the literature relies on the aggregation of text templates, we discovered that individual templates perform differently on different semantic problems. Building on a), in Mohammad's thesis we proposed a novel approach, coined ProLIP 34, exploiting frugal learning for few-shot classification, finetuning only 5% of the parameters but leading to a large boost of performance over 11 datasets and 5 tasks. Observation b) was exploited in the work of Yasser Benigmim, which leverages the emergence of 'class-experts' (individual templates that outperform the aggregation of all templates) to propose a novel training- and label-free method, named FLOSS, that outperforms the state of the art. Interestingly, this plug-and-play approach provides a boost of performance on all datasets and models tested.
In a collaboration with Amazon, within Quan's thesis, we exploited the generative capability of Large Multimodal Models (LMMs) to annotate data without supervision, using the generated descriptions as pseudo-labels for object classification 20. The resulting method, coined LatteCLIP, led to a boost of performance.
This axis of research is still ongoing; overall, language and other non-visual modalities are widely investigated across our research projects. LatteCLIP was accepted at WACV 2025, and both ProLIP and FLOSS were submitted to ICCV 2025. The extension of PODA was submitted to IJCV. All of these are top-tier venues in computer vision. All works are shared open source.
7.3 Material edition in images
Participants: Ivan Lopes, Raoul de Charette.
In the thesis of Ivan Lopes we have explored the ability to decompose visual signals into intrinsic maps. This is especially important to understand the nature of the materials (fabric, stone, etc.) in a scene. The problem is notoriously complex, as it is underconstrained unless the same material is observed under multiple lighting and viewing conditions.
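As a reminder of why the problem is ill-posed, a classical simplified intrinsic image model (given here only for illustration) factors the observed image into reflectance (albedo) and shading:

```latex
I(x) = A(x) \odot S(x),
```

so that infinitely many pairs $(A, S)$ explain the same observation $I$ unless additional constraints are available, such as multiple lighting or viewing conditions, or learned priors.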
In a collaboration with Université Laval (Canada), we proposed a novel method to learn transformations of Bidirectional Reflectance Distribution Functions (BRDFs, i.e., implicit material representations) by leveraging a volumetric representation of the scene (i.e., a NeRF). The resulting work, BRDFTransform 23, was accepted to a conference. In a prior work, MaterialPalette 24, we also exploited the capacity of diffusion models to estimate materials in images. As a follow-up, in a collaboration with Adobe (UK and Canada), we demonstrated the ability to directly edit materials in images by formulating the task as a global generation problem with a diffusion model. The results show impressive visual improvements compared to the literature. The work, named MatSwap, was recently submitted.
BRDFTransform 23 was accepted to EuroGraphics 2025, while MatSwap 36 was recently submitted to SIGGRAPH 2025. These venues are among the best in computer graphics. All works are open source.
7.4 Physically interpretable scene understanding
Participants: Andrei Bursuc, Raoul de Charette, Soumava Paul, Tuan-Hung Vu, Clément Weinreich.
We have explored the problem of physical plausibility and interpretability for visual scene understanding. This axis follows major works conducted by the team on the use of physics models to guide or improve the training of neural networks 14, 13, 16, 7. While vision models have proven very powerful, our research in this axis aims to answer one question: can neural networks understand physics?
In the last year, we explored two relatively small topics through internships. In the internship of Clément Weinreich, we proposed a method to learn a physically explicit world model, capable of predicting the next state of the world, by leveraging graph neural networks. The preliminary results were encouraging but not sufficient for publication. In the work of Soumava Paul, we explored the construction of a 3D dataset annotating physical forces point-wise, which required a major simulation effort. The current results are still preliminary but are expected to be submitted to a machine learning conference.
In a loosely related topic, we have explored with Weihao Xia (PhD visit) the ability to decode brain signals to extract interpretable representations of the world. Our method, coined UMBRAE 29, is among the first to directly prompt a model processing a brain signal (fMRI), thus allowing visual decoding as well as captioning and prompting. Furthermore, a major novelty is the ability to learn a cross-subject representation, which solves a major limitation of the literature. This collaboration with Cambridge and UCL led to a conference paper.
UMBRAE was accepted to ECCV 2024, a top-tier computer vision venue. The work of Soumava is expected to be submitted to a conference. All works are (or will be) shared open source.
7.5 On Enhancing Intersection Applications With Misbehavior Detection and Mitigation
Participants: Jiahao Zhang, Fawzi Nashashibi.
This work has been conducted in collaboration with external partners from IRT-SystemX Institute: Ines Ben-Jemaa, Ziyi Liu and Francesca Bassi.
Collective Perception Services (CPS) enable communicating entities to share their perception data over the V2X communication network. Potential attacks on extended perception data affect the CPS and may consequently degrade the safety applications that rely on collective perception data. In 31 we build an architecture that allows the integration of misbehavior detection and mitigation mechanisms with the CPS. We implement the Intersection Movement Assist (IMA) application, which uses the extended perception data to calculate potential collision risks in intersection areas. We define specific safety metrics and, through extensive simulations in large-scale scenarios, we quantify the impact of a large number of attacks and of misbehavior detection on the safety application. Our evaluation demonstrates the ability of misbehavior detection and mitigation mechanisms to filter malicious shared perception data, and consequently the benefit of using such mechanisms to improve the robustness of the safety application in complex road scenarios.
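As an illustration of where such mechanisms plug in, the toy filter below drops shared objects that fail simple plausibility checks before they reach the safety application; the checks, field names and thresholds are placeholders, not the detectors evaluated in 31.

```python
# Illustrative plausibility filter for shared (V2X) perception objects.
MAX_SPEED = 70.0        # m/s, physically plausible upper bound (placeholder)
MAX_RANGE = 300.0       # m, plausible sensing range of the sender (placeholder)

def filter_shared_objects(cpm_objects, sender_pos):
    """Keep only objects passing simple consistency checks before they
    reach the safety application (e.g., Intersection Movement Assist)."""
    kept = []
    for obj in cpm_objects:                 # obj: dict with 'pos' (x, y) and 'speed'
        dx = obj["pos"][0] - sender_pos[0]
        dy = obj["pos"][1] - sender_pos[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if obj["speed"] > MAX_SPEED:        # kinematic implausibility
            continue
        if dist > MAX_RANGE:                # outside the sender's plausible range
            continue
        kept.append(obj)
    return kept
```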
These results rely on a unified simulation framework 30 made available to the research community that enables exploration and development of misbehavior detection and mitigation solutions as integrated parts of the CPS in various scenarios. We demonstrate the effectiveness of our framework in generating performance results and provide the corresponding datasets.
7.6 PathDCM: An interpretable path-based trajectory prediction model
Participants: Amina Ghoul, Fawzi Nashashibi, Itheri Yahiaoui.
To navigate traffic safely while providing passengers with a smooth ride, autonomous vehicles must accurately predict the trajectories of surrounding agents. Predicting future trajectories is inherently uncertain and complex, as agent movements are highly non-linear over longer prediction horizons. Moreover, the distribution of possible future trajectories is multimodal—agents may have several plausible goals and different paths to reach each goal.
Despite these challenges, agent motion is not entirely unconstrained. Vehicles generally follow the direction of their lanes, obey traffic signals, and make legal turns and lane changes. Bicyclists tend to stay in bike lanes, while pedestrians usually walk along sidewalks and crosswalks. High-definition (HD) maps of traffic scenes capture these constraints, making them a critical component of autonomous driving datasets. Many studies have shown that predicting map-compliant trajectories—those that adhere to road boundaries and traffic rules—is essential for real-world autonomous driving systems.
Previous works have utilized HD maps for trajectory prediction in two main ways. First, HD maps are often used as inputs to models. Early approaches employed rasterized HD maps with CNN encoders, while more recent methods use vectorized HD maps with PointNet encoders, graph neural networks, or transformer layers. These map encodings are then passed to a multimodal prediction header that generates multiple future trajectories along with their probabilities. However, a limitation of this approach is that the prediction headers must learn a complex one-to-many mapping from the scene context to various possible trajectories, which can result in non-map-compliant predictions. To address this issue, recent research has turned to goal-based prediction models. These models associate each mode of the trajectory distribution with a 2D goal location sampled from the HD map. They predict a discrete distribution over these goals and generate trajectories conditioned on each goal. This simplifies the multimodal prediction task and makes each trajectory mode more interpretable. However, using a single 2D goal location as a conditioning factor often leads to imprecise or non-compliant trajectories that may go off-road or break traffic rules.
We introduce PathDCM, a novel approach that combines an interpretable, socially-aware framework with goal representations informed by map-aware road geometry. Unlike traditional methods that primarily condition trajectory predictions on future goals, our approach leverages the paths leading to those goals. This ensures that predictions remain physically plausible and reachable.
In addition, to achieve interpretability, our method integrates a knowledge-based discrete choice model (DCM) with a neural network. The DCM provides interpretable, rule-based patterns that explain high-level decision-making, while the neural network offers flexibility and predictive power. This hybrid approach allows us to validate predictions in safety-critical applications by ensuring that the model's decisions are both accurate and comprehensible.
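For illustration, and with notation of our own rather than the exact PathDCM formulation, such a hybrid can be written as a logit model whose utilities combine an interpretable linear term with a learned correction:

```latex
U_{n,k} = \beta^{\top} x_{n,k} + f_{\theta}(z_{n,k}) + \varepsilon_{n,k},
\qquad
P(k \mid n) =
\frac{\exp\!\big(\beta^{\top} x_{n,k} + f_{\theta}(z_{n,k})\big)}
     {\sum_{j}\exp\!\big(\beta^{\top} x_{n,j} + f_{\theta}(z_{n,j})\big)},
```

where $x_{n,k}$ are hand-crafted, interpretable explanatory variables of alternative $k$ for agent $n$, $\beta$ their interpretable weights, $f_{\theta}$ a neural term capturing what the rules miss, and the softmax form follows from Gumbel-distributed noise $\varepsilon_{n,k}$.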
PathDCM employs a three-step process: predicting goals using a hybrid approach combining knowledge-based and neural network techniques, identifying feasible paths, and generating trajectories. Our experimental evaluation on the nuScenes dataset underscores the accuracy and practical utility of our approach.
7.7 Fast maneuver recovery from aerial observation: trajectory clustering and outliers rejection
Participants: Nelson De Moura, Fernando Garrido, Augustin Gervreau, Fawzi Nashashibi.
The implementation of road user models that realistically reproduce credible behavior in a multi-agent simulation is still an open problem. A data-driven approach consists in deducing behaviors that may exist in real situations so as to obtain different types of trajectories from a large set of observations. The data, and its classification, can then be used to train models capable of extrapolating such behaviors. The proposed trajectory clustering methods consider cars and two types of Vulnerable Road Users (VRUs): pedestrians and cyclists. The results reported in 25 evaluate methods to extract well-defined trajectory classes from raw data without the use of map information, while also separating “eccentric” or incomplete trajectories from the ones that are complete and representative in any scenario. Two types of environments serve as test cases for the developed methods: three different intersections and one roundabout. The resulting clusters of trajectories can then be used for prediction or learning tasks, or discarded if they are composed of outliers.
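The sketch below illustrates a generic pipeline of this kind (fixed-length resampling, density-based clustering, noise points treated as outliers); it is given for illustration only and does not reproduce the exact method of 25.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def resample(traj, n=32):
    """Resample a (T, 2) trajectory to n points via linear interpolation."""
    traj = np.asarray(traj, dtype=float)
    s = np.linspace(0.0, 1.0, len(traj))
    t = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t, s, traj[:, d]) for d in range(2)], axis=1)

def cluster_trajectories(trajs, eps=5.0, min_samples=5):
    """Cluster trajectories; DBSCAN's noise label (-1) flags 'eccentric'
    or incomplete trajectories to be rejected."""
    X = np.stack([resample(tr).ravel() for tr in trajs])     # (N, 2n)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
    inliers = [tr for tr, lab in zip(trajs, labels) if lab != -1]
    outliers = [tr for tr, lab in zip(trajs, labels) if lab == -1]
    return labels, inliers, outliers
```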
7.8 Improving behavior profile discovery for vehicles
Participants: Nelson De Moura, Fernando Garrido, Fawzi Nashashibi.
Multiple approaches have already been proposed to mimic real driver behaviors in simulation. A new one is proposed in 26, based solely on undisturbed observations of intersections, from which behavior profiles for each macro-maneuver are discovered. Using the macro-maneuvers already identified in previous works, we propose a method to compare trajectories of different lengths using an Extended Kalman Filter (EKF), which, combined with an Expectation-Maximization (EM) inspired method, defines the clusters that represent the observed behaviors. This is paired with a Kullback-Leibler (KL) divergence criterion to decide when clusters need to be split or merged. Finally, the behaviors for each macro-maneuver are determined by each discovered cluster, without using any map information about the environment and while remaining dynamically consistent with vehicle motion. From these observations, it becomes clear that the two main factors shaping driver behavior are assertiveness and interaction with other road users.
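For reference, when clusters are represented by Gaussians $\mathcal{N}_0(\mu_0,\Sigma_0)$ and $\mathcal{N}_1(\mu_1,\Sigma_1)$ in dimension $d$, a split/merge criterion of this kind can build on the standard closed form of the KL divergence:

```latex
D_{\mathrm{KL}}\!\left(\mathcal{N}_0 \,\|\, \mathcal{N}_1\right)
= \tfrac{1}{2}\!\left(
    \operatorname{tr}\!\big(\Sigma_1^{-1}\Sigma_0\big)
    + (\mu_1-\mu_0)^{\top}\Sigma_1^{-1}(\mu_1-\mu_0)
    - d
    + \ln\frac{\det\Sigma_1}{\det\Sigma_0}
  \right).
```

Thresholds on this quantity then decide whether two clusters describe the same behavior and should be merged, or whether a cluster has become too heterogeneous and should be split; the actual thresholds and the exact criterion are design choices of the method in 26.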
7.9 Adapting COR-MP to Various Driver Profiles
Participants: Karim Essalmi, Fernando Garrido, Fawzi Nashashibi.
As driverless cars will share the road with human drivers, it is essential to understand how humans make driving decisions to facilitate smoother collaboration and adaptation between both. It is clear that each road user behaves differently while driving: some act aggressively, while others are more conservative. To account for these varying driving behaviors, we improve our previous model, COR-MP (Conservation of Resources Model for Maneuver Planning) 21, by incorporating fuzzy logic to compute certain model parameters. In this work, we defined three distinct driver profiles: agile (behaving more aggressively), conservative (driving safely, sometimes too conservatively), and moderate (striking a good balance between the two). This profiling leads to different outcomes for the same scenario, depending on the driver profile, and results in more human-like decision-making.
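The toy example below shows how fuzzy memberships can turn a continuous behavior indicator into profile-dependent model parameters; the membership shapes and numerical values are placeholders, not the ones used in COR-MP.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def profile_memberships(aggr):
    """Map a normalized aggressiveness indicator in [0, 1] to the three profiles."""
    return {
        "conservative": tri(aggr, -0.01, 0.0, 0.5),
        "moderate":     tri(aggr, 0.0, 0.5, 1.0),
        "agile":        tri(aggr, 0.5, 1.0, 1.01),
    }

def fuzzy_parameter(aggr, values=None):
    """Weighted (defuzzified) model parameter, e.g. an accepted time gap in seconds."""
    values = values or {"conservative": 3.0, "moderate": 2.0, "agile": 1.2}
    m = profile_memberships(aggr)
    total = sum(m.values())
    return sum(m[p] * values[p] for p in m) / total

print(fuzzy_parameter(0.8))   # closer to the 'agile' time gap than the 'conservative' one
```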
7.10 Extended Horizon Planning for Tactical Decision-Making for Automated Driving
Participants: Karim Essalmi, Fernando Garrido, Fawzi Nashashibi.
Traditional decision-making algorithms are often limited by their fixed planning horizons, typically up to 6 seconds for classical approaches and 3 seconds for learning-based methods, which restricts their adaptability in certain dynamic driving scenarios. However, in environments such as highways, roundabouts, and exits, planning needs to be done well in advance to ensure safe and efficient maneuvers. To address this challenge, we propose a hybrid method that integrates Monte Carlo Tree Search (MCTS) with our prior utility-based framework, COR-MP (Conservation of Resources Model for Maneuver Planning) 21. This combination enables long-term, real-time decision-making, significantly improving the ability to plan a sequence of maneuvers over extended horizons while avoiding the 'frozen-robot' phenomenon.
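The skeleton below recalls the generic UCT form of MCTS that such a hybrid builds on; the environment interface (`actions`, `step`, `reward`) and all constants are assumptions of the sketch, and the random rollout merely stands in for the COR-MP utility evaluation used in practice.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = {}, 0, 0.0

def uct_search(root_state, env, iterations=1000, c=1.4, horizon=20):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes using the UCT score
        while node.children and len(node.children) == len(env.actions(node.state)):
            parent = node
            node = max(parent.children.values(),
                       key=lambda n: n.value / (n.visits + 1e-9)
                       + c * math.sqrt(math.log(parent.visits + 1) / (n.visits + 1e-9)))
        # 2. Expansion: add one untried maneuver, if any
        untried = [a for a in env.actions(node.state) if a not in node.children]
        if untried:
            a = random.choice(untried)
            node.children[a] = Node(env.step(node.state, a), parent=node)
            node = node.children[a]
        # 3. Rollout: default (here random) policy up to the planning horizon
        state, ret = node.state, 0.0
        for _ in range(horizon):
            state = env.step(state, random.choice(env.actions(state)))
            ret += env.reward(state)
        # 4. Backpropagation of the return along the visited path
        while node is not None:
            node.visits += 1
            node.value += ret
            node = node.parent
    # return the maneuver explored the most from the root
    return max(root.children, key=lambda a: root.children[a].visits)
```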
7.11 Communicating Autonomous Intelligent Vehicles
Participants: Gérard Le Lann.
Cyberthreats directed at radio communications cannot be ignored when addressing safety issues arising with risk-prone maneuvers. To be specific, consider unsignalized intersections (UIs) and communicating autonomous vehicles (CAVs). Most published solutions for the problem of how to achieve safe and efficient crossings in the presence of radio cyberattacks rest on assuming that destructions or corruptions of radio messages can be handled appropriately, i.e., correctly and on time.
These are fragile assumptions. Safety is inevitably compromised under cyberattacks aimed at individual messages. Safety in UI scenarios is a particular instance of the well-known global state problem in distributed systems. Safety cannot hold if approaching CAVs (CAVs located on entrant road arteries and intending to cross) do not share the same global state. Building distributed global states (as they evolve over time) that are “seen” identically by all CAVs is impossible in the presence of selective destructions or corruptions of messages.
Thus, the problem: how to make use of radio communications without relying on message passing? There is a solution based on a cyber-physical construct that matches the setting (UI crossing by CAVs) for arbitrary intersections (any number of entrant road arteries, and any number of lanes in every road artery).
Another issue of sociological importance has arisen with the emergence of AI and the much-debated singularity postulate. According to supporters of the singularity concept, levels of universal cognition, consciousness, and reasoning capabilities of AI can only get higher over time. To such an extent that humans will inevitably end up being dominated (i.e., enslaved) by perverted AI able to set up offensive, potentially lethal, strategies unbeknown to humans.
One counter-argument to the singularity postulate rests on observing that lives or physical integrity of humans can hardly be threatened by software entities residing in cyberspace. The issue gets more intriguing when considering AI physical agents (AIP agents), i.e., AI agents that can act upon the physical world, such as humanoid robots or androids.
CAVs equipped with AI software are examples of AIP agents. Passengers could be targeted by perverted intelligent CAVs carrying no passengers, insidious members of a swarm, leading to a “singularity scenario”. So far, it has been impossible to describe how pernicious intelligent CAVs would communicate and agree on some destructive cyberattack secretly, i.e., without the onboard systems of honest CAVs being able to notice. Via a secret platform, using some secret language? Unclear. This line of reasoning is by no means an impossibility proof. More work is needed from scientists, who bear and share the responsibility of clarifying the issue.
7.12 Landmark localization for Autonomous Vehicles
Participants: Noël Nadal, Fawzi Nashashibi, Jean-Marc Lasgouttes.
This study introduces a new approach for real-time global positioning of vehicles, leveraging coarse landmark maps with Gaussian position uncertainty. The proposed method addresses the challenge of precise positioning in complex urban environments, where global navigation satellite system (GNSS) signals alone do not provide sufficient accuracy. Our approach fuses Gaussian estimates of the vehicle's current position and orientation, based on the vehicle's observations, with information from the landmark maps. It exploits the Gaussian nature of our data to achieve robust, reliable and efficient positioning, despite the fact that our knowledge of the landmarks may be imprecise and their distribution on the map uneven. It does not rely on any particular type of sensor or vehicle. We have evaluated the method with our custom simulator and verified its effectiveness in obtaining good real-time positional accuracy, even at large scale.
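At the core of such a scheme lies the standard fusion of two Gaussian estimates of the same quantity, recalled here for illustration (the actual method additionally handles orientation, data association with the map landmarks, and their uneven distribution):

```latex
\hat{\Sigma} = \big(\Sigma_1^{-1} + \Sigma_2^{-1}\big)^{-1},
\qquad
\hat{\mu} = \hat{\Sigma}\,\big(\Sigma_1^{-1}\mu_1 + \Sigma_2^{-1}\mu_2\big),
```

where $(\mu_1,\Sigma_1)$ is the current pose estimate and $(\mu_2,\Sigma_2)$ the estimate induced by a landmark observation; the fused covariance $\hat{\Sigma}$ is never larger than either input, which is what makes even coarse, uncertain landmarks useful.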
This work is described in a paper that will be presented at the VEHITS 2025 conference 27.
7.13 Asymptotics and Time-Scaling of Stop-and-Go Waves in Car-Following Models
Participants: Guy Fayolle, Jean-Marc Lasgouttes.
Waves, known as stop-and-go waves or phantom jams, can appear spontaneously in dense traffic, leading to situations where drivers face consecutive phases of acceleration and braking. Although such waves are well understood in the setting of macroscopic models, results for car-following models are not so numerous. Assuming string instability, G. Fayolle and J.-M. Lasgouttes obtain asymptotic estimates of the speed and shape of these waves. The analysis relies on the well-known saddle-point method to describe the trajectory of a vehicle caught in such a wave 35.
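For reference, the saddle-point method generalizes to contour integrals the classical Laplace approximation, recalled here in its simplest real form for a function $f$ with a unique interior maximum at $x_0$ (with $f''(x_0)<0$):

```latex
\int e^{\lambda f(x)}\,dx \;\sim\; e^{\lambda f(x_0)}\sqrt{\frac{2\pi}{\lambda\,\lvert f''(x_0)\rvert}},
\qquad \lambda \to \infty .
```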
7.14 Thermodynamical limits for models of car-sharing systems: the Autolib' example
Participants: Guy Fayolle.
Ch. Fricker (Inria-Paris) and G. Fayolle analyze in 17 various mean-field equations obtained for models involving a large station-based car-sharing system in France called Autolib'. The focus is mainly on a version without capacity constraints, where users reserve a parking space when they take a car. The analysis is carried out in the thermodynamical limit, that is, when the number of stations and cars grows to infinity.
7.15 A Markovian Analysis of IEEE 802.11 Broadcast Transmission Networks with Buffering and back-off stages
Participants: Guy Fayolle.
G. Fayolle and P. Mühlethaler (Inria-Paris) analyze the so-called back-off technique of the IEEE 802.11 protocol in broadcast mode with waiting queues. In contrast to existing models, packets arriving while a station (or node) is in back-off state are not discarded but stored in a buffer of infinite capacity. The key point of the analysis hinges on the assumption that time on the channel can be viewed as a random succession of transmission slots (whose duration corresponds to the length of a packet) and mini-slots during which the back-off of the station is decremented; these events occur independently, with given probabilities. The state of a node is represented by a discrete-time three-dimensional Markov chain, formed by the back-off counter, the number of packets at the station, and the back-off stage. The stationary behaviour can be solved explicitly. In particular, stability (ergodicity) conditions are obtained and interpreted in terms of maximum throughput; see article 18.
8 Bilateral contracts and grants with industry
8.1 Bilateral contracts with industry
Participants: Fawzi Nashashibi, Raoul de Charette, Jean-Marc Lasgouttes, Benazouz Bradai, Paulo Resende, Zayed Alsayed, Fernando Garrido, Axel Jeanne, Nelson de Moura, Noel Nadal, Mohammad Fahes, Karim Essalmi, Tetiana Martyniuk.
Valeo Group: As a result of a long-standing collaboration, the strategic partnership between Inria and Valeo led to the establishment of a joint project team in 2022. Since that date, several bilateral contracts have been signed to conduct joint research, some of it funded by Valeo.
- Several CIFRE theses have been set up between Valeo and Inria throughout the year 2023: Mr. Karim ESSALMI joined ASTRA in February 2023 as a new PhD student working on maneuver decision and motion planning. Mrs. Tetiana MARTYNIUK joined the team in June 2023 on a pre-thesis contract, with a CIFRE starting in 2024, and is working on conditioned generation of egocentric 3D driving scenes within the ASTRA vision group.
- Other PhD students and post-docs are jointly funded by Valeo and Inria, while Mr. Nelson de Moura is hired as a 2-year post-doc thanks to the national Plan de relance programme.
- Valeo is currently a major financing partner of the “GAT” international Chaire/JointLab in which Inria is a partner. The other partners are: UC Berkeley, Shanghai Jiao-Tong University, EPFL, IFSTTAR, Stellantis and SAFRAN.
- Technology transfer is also a major collaboration topic between ASTRA and Valeo as well as the development of a road automated prototype.
- Finally, Inria and Valeo are partners of the French project SAMBA (Sécurité Active et MoBilités Autonomes) including SAFRAN Group, Inria Paris, TwinswHeel, Soben, Stanley Robotics and EXPLEO.
The work with Valeo Group is articulated around the collaboration of two Valeo teams:
Valeo DAR works on research and development for Advanced Driver Assistance Systems (ADAS). Starting from July 2022, Zayed Alsayed, Axel Jeanne, Fernando Garrido, and Paulo Resende, employees seconded by Valeo, joined the joint project team to work on the following scientific areas: localization and mapping (Sec. 3.2), decision making, motion planning & vehicle control (Sec. 3.3), and large-scale modeling and deployment of mobility systems in smart cities (Sec. 3.4).
Valeo.AI is the research laboratory of the Valeo Group and follows an academic research line. Valeo.AI collaborates with the vision group (Sec. 3.1). Starting from July 2022, Alexandre Boulch, Andrei Bursuc, Gilles Puy, Patrick Pérez, Renaud Marlet and Tuan-Hung Vu joined ASTRA as part-time researchers, with frequent joint group readings, workshops and seminars. Following his departure from Valeo in Dec. 2023, Patrick Pérez also left ASTRA. In 2024 the collaboration led to open-source releases, top-tier publications and the co-supervision of 3 internships and 2 PhDs.
9 Partnerships and cooperations
Participants: Fawzi Nashashibi, Raoul de Charette.
9.1 National initiatives
9.1.1 ANR
SIGHT
- Title: viSIon throuGH weaTher
- Instrument: ANR JCJC
- Duration: January 2021- June 2025
- Coordinator: Raoul de Charette
- Partners: Inria Paris, Université Laval, Mines ParisTech
- Inria contact: Raoul de Charette
- Abstract: SIGHT investigates invariant algorithms for complex weather conditions (rain, snow, hail). The project leverages un-/self-supervised algorithms with physics guidance to model physically realistic weather and to learn weather-invariant features that improve vision algorithms.
9.1.2 AMI – EquipEx+
TIRREX
- Inria is a major partner and beneficiary of the new EquipEx+ national initiative TIRREX (Infrastructure technologique pour la recherche d'excellence en robotique). ASTRA is an active participant of the “Autonomous Land Robotics” axis.
- Project start: Dec. 18, 2021
- Kick-off: Jan. 14, 2022
9.1.3 Competitivity Clusters
NextMove
(prev. MOV'EO): we are particularly involved in several technical committees, such as the DAS SMI (Systèmes de Mobilité Intelligents).
Vedecom
(IEED): main Inria contributor and active participant in the CD2 domain dedicated to automated driving.
SystemX Institute
: close partnership, with the jointly supervised PhD thesis of Jiahao Zhang.
10 Dissemination
Participants: Zayed Alsayed, Andrei Bursuc, Raoul de Charette, Guy Fayolle, Fernando Garrido, Jean-Marc Lasgouttes, Renaud Marlet, Noël Nadal, Fawzi Nashashibi, Tiago Rocha Gonçalves, Tuan-Hung Vu, Itheri Yahiaoui.
10.1 Promoting scientific activities
10.1.1 Scientific events: organisation
- Raoul de Charette co-founded and was general chair of the African Computer Vision Summer School (ACVSS), a fully funded 10-day immersive event. The inaugural edition occurred in Nairobi, Kenya in July 2024 with 30 participants and 17 renowned speakers.
- Raoul de Charette, Tuan-Hung Vu and Andrei Bursuc organized the 3rd Weakly Supervised Computer Vision (WSCV) workshop in Dakar, Senegal in September 2024.
Member of the organizing committees
- Fawzi Nashashibi was a member of the organizing technical committee of the Thirteenth International Conference on Smart Cities, Systems, Devices and Technologies (SMART 2024), April 14-18, 2024 - Venice, Italy.
Chair of conference program committees
- Raoul de Charette was one of the Program Chairs of Deep Learning Indaba 2024.
Member of the conference program committees
- Fawzi Nashashibi was a member of the program committee of SMART 2024 (IARIA): The Thirteenth International Conference on Smart Cities, Systems, Devices and Technologies, April 14-18, 2024 - Venice, Italy.
- Fawzi Nashashibi was a member of the program committee of ICCP 2024: 2024 IEEE 20th International Conference on Intelligent Computer Communication and Processing, October 17-19, 2024, Cluj-Napoca, Romania.
- Fawzi Nashashibi was a member of the program committee of VEHITS 2024: 10th International Conference on Vehicle Technology and Intelligent Transport Systems, Angers, France, May 2-4, 2024.
- Raoul de Charette was area chair for ECCV 2024, CVPR 2025, WACV 2025 and associate editor for IV 2025.
Reviewer
- Jean-Marc Lasgouttes was a reviewer for IEEE IV.
- Fawzi Nashashibi was a reviewer for Transportation Research Board 2024, IEEE ITSC, IEEE/RSJ IROS, IEEE VEHITS, and IEEE IV.
10.1.2 Journal
Member of the editorial boards
- Guy Fayolle is associate editor of the journal Markov Processes and Related Fields (MPRF).
- Fawzi Nashashibi is Senior Editor of the journal IEEE Transactions on Intelligent Vehicles (T-IV).
- Fawzi Nashashibi is Senior Editor of the IEEE Sensors journal
- Fawzi Nashashibi is Associate Editor of the IEEE Transactions on Intelligent Transportation Systems (T-ITS)
- Fawzi Nashashibi is Associate Editor of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Reviewer - reviewing activities
- Guy Fayolle reviewed several papers and books submitted for publication in major journals, e.g., Transactions of the American Mathematical Society, Markov Processes and Related Fields, Journal of Statistical Physics, Physica A, etc.
- Fawzi Nashashibi reviewed papers for IEEE Transactions on ITS, IEEE Transactions on IV, Journal of Traffic and Transportation Engineering, Sensors, Engineering Applications of Artificial Intelligence, IEEE Robotics and Automation Letters
10.1.3 Invited talks
- Raoul de Charette gave a talk on “Scene understanding on the shoulders of foundational models” at the L3DIVU workshop at CVPR 2024, June 18th 2024, Seattle, USA.
- Raoul de Charette gave a talk on “3D scene reconstruction” at ACVSS 2024, July 24th 2024, Nairobi, Kenya.
- Fawzi Nashashibi gave a keynote on “Decision Making for Autonomous Vehicles” at the IEEE ICCP 2024 conference (Cluj – October 18, 2024).
10.1.4 Scientific expertise
- Raoul de Charette reviewed project proposals for the ANR AAPG.
- Guy Fayolle is scientific advisor and associate researcher at the Centre for Robotics of Mines ParisTech.
- Guy Fayolle is a member of the working group IFIP WG 7.3.
- Fawzi Nashashibi is a member of Vedecom's Working Group on Vehicle Automation, where he represents the French academic partners.
- Fawzi Nashashibi is a member of the SMIS working group of the NextMove cluster.
10.1.5 Research administration
- Jean-Marc Lasgouttes is member of the Conseil d'administration of Inria.
- Jean-Marc Lasgouttes is member of the Comité Social d'Administration of Inria.
- Jean-Marc Lasgouttes is member of the Formation spécialisée en matière de santé, sécurité et conditions de travail of Inria.
- Jean-Marc Lasgouttes is member of the Formation spécialisée de site en matière de santé, sécurité et conditions de travail of Inria Paris.
- Raoul de Charette is member of the Comité d'Evaluation Scientifique of Inria Paris.
10.2 Teaching - Supervision - Juries
10.2.1 Teaching
- Seminar: Fernando Garrido, Paulo Resende, “decision-making and planning for automated driving”, 16 hours, Valeo Créteil, France.
- Engineering: Fernando Garrido, Paulo Resende, “decision-making and planning for automated driving”, 24 hours, École d'ingénieurs ESME Sudria, France.
- Engineering: Fernando Garrido, Paulo Resende, “decision-making and planning for automated driving”, 24 hours, Institut Supérieur de l'Automobile et des Transports (ISAT), Nevers, France.
- Mastère: Jean-Marc Lasgouttes, “Introduction au Boosting”, 10.5h, Mastère Spécialisé Expert en sciences des données, INSA Rouen Normandie, France.
- Engineering, 1st year: Jean-Marc Lasgouttes, “Analyse de données”, 48h, L3, INSA Rouen Normandie, France.
- Engineering, 2nd year: Fawzi Nashashibi, “Image synthesis and 3D Infographics”, 12h, M2, INT Télécom SudParis, IMA4503 “Virtual and augmented reality for autonomy”.
- Master: Fawzi Nashashibi, “Perception and Image processing for Mobile Autonomous Systems”, 12h, M2, University of Evry
- Licence, 2nd year: Noël Nadal, “C avancé”, 10.5h, Sorbonne Université, France.
- Licence, 2nd year: Noël Nadal, “Programmation fonctionnelle”, 10.5h, Sorbonne Université, France.
- Licence, 2nd year: Noël Nadal, “Mathématiques discrètes”, 10.5h, Sorbonne Université, France.
- Engineering, 2nd year: Tiago Rocha Gonçalves, “Véhicule intelligent et communicant,”, 6h (TP), CentraleSupélec, France.
10.2.2 Supervision
- PhD: Anh-Quan Cao, PSL Research University, “Unsupervised 3D scene understanding from image(s)”, December 2024, supervisor: Raoul de Charette.
- PhD: Jiahao Zhang, “Misbehavior detection for collective perception in Intelligent Transportation System”, December 2024, UPMC Sorbonne University, supervisor Fawzi Nashashibi, co-supervisor: Ines Ben Jemaa.
- PhD in progress: Karim Essalmi, “Maneuver Planner based on the Conservation of Resources Theory and Quantum Game Theory”, March 2023, supervisor: Fawzi Nashashibi, co-supervisor: Fernando Garrido Carpio.
- PhD in progress: Mohammad Fahes, Mines-ParisTech, “Crowdsourced Unsupervised Learning in Adverse Conditions”, October 2022, supervisor: Raoul de Charette, co-supervisors: Tuan-Hung Vu, Andrei Bursuc, Patrick Pérez.
- PhD in progress: Amina Ghoul, UPMC Sorbonne University, “Trajectory prediction in an urban environment”, May 2021, supervisor Fawzi Nashashibi, co-supervisors: Anne Verroust-Blondet, Itheri Yahiaoui.
- PhD in progress: Islem Kobbi: UPMC Sorbonne University, “RL-based Decision-Making and Planning for Automated Driving”, October 2024, supervisor Fawzi Nashashibi.
- PhD in progress: Ivan Lopes, PSL Research University, “Physic-guided learning for vision in adverse weather conditions”, November 2021, supervisor: Raoul de Charette.
- PhD in progress: Elias Maharmeh, “Integrity and Robustness of Algorithms for Localization and Mapping in Autonomous Driving”, March 2024, UPMC Sorbonne University, co-supervisors: Fawzi Nashashibi and Zayed Alsayed.
- PhD in progress: Tetiana Martyniuk, PSL Research University, “Conditioned generation of egocentric 3D driving scenes”, December 2023, supervisor: Raoul de Charette, co-supervisor: Renaud Marlet.
- PhD in progress: Noël Nadal, “Cartographie et localisation crowdsourcées pour la conduite autonome en environnement urbain”, October 2022, co-supervisors: Fawzi Nashashibi and Jean-Marc Lasgouttes.
10.2.3 Juries
- Fawzi Nashashibi was a Jury member for the recruitment of a senior researcher in mobile robotics at Mines Paris, May 2024.
- Fawzi Nashashibi was a reviewer on the HDR committee of Mr. Abderrahmane BOUBEZOUL (Université Paris Saclay), “Les Avancées de l'Apprentissage Automatique dans l'Interaction Homme-Véhicule pour une Mobilité Durable et Sûre”, Saclay.
- Fawzi Nashashibi was a reviewer of the PhD thesis of Mr. Thomas GENEVOIS (Université Grenoble Alpes), “Dynamic Occupancy Grid based Collision Avoidance in Robotics”, Grenoble – June 19, 2024.
- Fawzi Nashashibi was an examiner of the PhD thesis of Mr. Raphaël CHEKROUN (Mines Paris), “Integrating Expert Knowledge with Deep Reinforcement Learning Methods for Autonomous Driving”, Paris – April 02, 2024.
- Fawzi Nashashibi was a reviewer of the PhD thesis of Mr. Mohamad ALBILANI (Telecom SudParis, IP PARIS), “Neuro-symbolic Deep Reinforcement Learning For Safe Urban Driving Using Low-Cost Sensors”, Palaiseau – April 11, 2024.
- Fawzi Nashashibi was a reviewer of the PhD thesis of Mr. Benoit VIGNE (Université Haute Alsace), “Génération de trajectoires locales temps réel pour un véhicule autonome dans un environnement dynamique et coopératif”, Mulhouse – December 11, 2024.
- Fawzi Nashashibi participated as an external expert in 6 CSI committees (thesis monitoring committees): Mr. Felix MARROCIA (Inria Paris), Mr. Fabian GRAF (Inria Paris), Mr. Adrien LAFAGE (ENSTA Paris), Mrs. Yuxuan SONG (Inria Paris), Mrs. Maria RUCHIGA (Université Gustave Eiffel / CEREMA Toulouse) and Mr. Manuel DIAZ-ZAPATA (Inria Grenoble).
- Raoul de Charette was a reviewer of the HDR of Vicky Kalogeiton (Polytechnique, December 2024).
- Raoul de Charette was a reviewer of the PhD theses of Adriano Cardace (University of Bologna, March 2024) and Sandra Kara (CEA, December 2024).
- Raoul de Charette was in the CSI committees of: Amandine Brunetto (Mines), Gabriel Fiastre (Inria), Oumayma Bounou (Inria), Quan Nguyen (CEA).
- Jean-Marc Lasgouttes was in the CSI committees of Ismail Hawila (Inria) and Elsa Lopez Perez (Inria).
10.3 Popularization
10.3.1 Other science outreach activities
- Fawzi Nashashibi gave an invited talk on “Smart Autonomous Mobility Technology & Future trends” at the Spanish ACCIONA Group Forum Meeting (Madrid – June 25, 2024)
11 Scientific production
11.1 Major publications
- 1 inproceedings2D SLAM Correction Prediction in Large Scale Urban Environments.ICRA 2018 - International Conference on Robotics and Automation 2018Brisbane, AustraliaMay 2018HAL
- 2 inproceedingsMonoScene: Monocular 3D Semantic Scene Completion.Conference on Computer Vision and Pattern Recognition (CVPR)New orleans, USA, United StatesJune 2022HALback to text
- 3 inproceedingsPØDA: Prompt-driven Zero-shot Domain Adaptation.International Conference on Computer Vision (ICCV)Paris, FranceOctober 2023HALback to textback to text
- 4 bookRandom Walks in the Quarter Plane: Algebraic Methods, Boundary Value Problems, Applications to Queueing Systems and Analytic Combinatorics.40Probability Theory and Stochastic ModellingSpringer International PublishingFebruary 2017, 255HALDOI
- 5 articleShort-term Forecasting of Urban Traffic using Spatio-Temporal Markov Field.IEEE Transactions on Intelligent Transportation Systems2382022, 10858-10867HALDOIback to text
- 6 articleA Review of Motion Planning Techniques for Automated Vehicles.IEEE Transactions on Intelligent Transportation SystemsApril 2016HALDOI
- 7 inproceedingsPhysics-Based Rendering for Improving Robustness to Rain.ICCV 2019 - International Conference on Computer VisionSeoul, South KoreaOctober 2019HALback to text
- 8 articleCross-Modal Learning for Domain Adaptation in 3D Semantic Segmentation.IEEE Transactions on Pattern Analysis and Machine Intelligence452March 2022, 1533-1544HALDOIback to textback to text
- 9 reportCyberphysical Constructs and Concepts for Fully Automated Networked Vehicles.RR-9297INRIA Paris-RocquencourtOctober 2019HAL
- 10 articleAdvances in Youla-Kucera parametrization: A Review.Annual Reviews in ControlJune 2020HALDOIback to text
- 11 articleAttention Based Vehicle Trajectory Prediction.IEEE Transactions on Intelligent Vehicles612021, 175-185HALDOIback to text
- 12 inproceedingsNon-local Social Pooling for Vehicle Trajectory Prediction.Intelligent Vehicles Symposium (IV)Paris, FranceJune 2019HALDOIback to text
- 13 inproceedingsCoMoGAN: continuous model-guided image-to-image translation.CVPR 2021 - IEEE Conference on Computer Vision and Pattern RecognitionIEEE Conference on Computer Vision and Pattern RecognitionOnline, FranceJune 2021HALback to textback to textback to text
- 14 inproceedingsModel-based occlusion disentanglement for image-to-image translation.ECCV 2020 - European Conference on Computer VisionECCV 2020Glasgow / Virtual, United KingdomAugust 2020HALback to textback to text
- 15 article3D Semantic Scene Completion: a Survey.International Journal of Computer Vision2021HALDOIback to textback to text
- 16 articleRain Rendering for Evaluating and Improving Robustness to Bad Weather.International Journal of Computer VisionSeptember 2020HALDOIback to textback to textback to text
11.2 Publications of the year
International journals
- 17 articleThermodynamical limits for models of car-sharing systems: the Autolib' example.Markov Processes And Related Fields2025, 17In press. HALback to text
- 18 articleA Markovian Analysis of an IEEE-802.11 Station with Buffering.Markov Processes And Related Fields2952023, 709-726HALDOIback to text
International peer-reviewed conferences
- 19 inproceedingsPaSCo: Urban 3D Panoptic Scene Completion with Uncertainty Awareness.Computer Vision and Pattern Recognition Conference (CVPR)Seattle, United StatesJune 2024HALback to textback to text
- 20 inproceedingsLatteCLIP: Unsupervised CLIP Fine-Tuning via LMM-Synthetic Texts.WACV 2025 - Winter Conference on Applications of Computer VisionTucson (Arizona ), United StatesOctober 2024HALback to text
- 21 inproceedingsCOR-MP: Conservation of Resources Model for Maneuver Planning.ICCP 2024 - IEEE 20th International Conference on Intelligent Computer Communication and ProcessingCluj-Napoca, RomaniaOctober 2024HALback to textback to text
- 22 inproceedingsA Simple Recipe for Language-guided Domain Generalized Segmentation.Computer Vision and Pattern Recognition Conference (CVPR)Seattle (USA), United StatesJune 2024HALback to text
- 23 inproceedingsMaterial Transforms from Disentangled NeRF Representations.EuroGraphics 2025 - 46th Annual Conference of the European Association for Computer GraphicsLondon, United KingdomNovember 2024HALback to textback to text
- 24 inproceedingsMaterial Palette: Extraction of Materials from a Single Image.Computer Vision and Pattern Recognition Conference (CVPR)Seattle, United StatesJune 2024HALback to text
- 25 inproceedingsFast maneuver recovery from aerial observation: trajectory clustering and outliers rejection.2024 IEEE Intelligent Vehicles Symposium (IV)Jeju, South KoreaJune 2024HALback to text
- 26 inproceedingsImproving behavior profile discovery for vehicles.IROS 2024Abu Dhabi, United Arab EmiratesOctober 2024HALback to text
- 27 inproceedingsLandmark-based Geopositioning with Imprecise Map.Proceedings of the 11th International Conference on Vehicle Technology and Intelligent Transport Systems, VEHITS 2025VEHITS 2025 - International Conference on Vehicle Technology and Intelligent Transport SystemsPorto, PortugalSciTePressApril 2025HALback to text
- 28 inproceedingsThree Pillars improving Vision Foundation Model Distillation for Lidar.CVPR 2024 - IEEE/CVF Conference on Computer Vision and Pattern RecognitionSeattle, United StatesarXivFebruary 2024HALDOIback to text
- 29 inproceedingsUMBRAE: Unified Multimodal Brain Decoding.ECCV 2024 - European Conference on Computer VisionMilan, ItalySeptember 2024HALback to text
- 30 inproceedingsSimulation Framework of Misbehavior Detection and Mitigation for Collective Perception Services.35th IEEE Intelligent Vehicles Symposium (IV 2024)Jeju Island, South KoreaJune 2024HALback to text
- 31 inproceedingsOn Enhancing Intersection Applications With Misbehavior Detection and Mitigation.IEEE 100th Vehicular Technology Conference (VTC Fall)Washington, DC, United StatesOctober 2024HALback to text
Doctoral dissertations and habilitation theses
- 32 thesisLearning Semantics and Geometry for Scene Understanding.Université Paris sciences et lettresDecember 2024HALback to text
Reports & preprints
- 33 miscDomain Adaptation with a Single Vision-Language Embedding.October 2024HALback to text
- 34 reportFine-Tuning CLIP's Last Visual Projector: A Few-Shot Cornucopia.InriaDecember 2024HALback to text
- 35 reportAsymptotics and Time-Scaling of Stop-and-Go Waves in Car-Following Models.INRIA Paris, Équipe ASTRADecember 2024, 20HALback to text
- 36 miscMatSwap: Light-aware material transfers in images.February 2025HALback to text
11.3 Cited publications
- 37 inproceedingsDeep evidential regression.Advances in Neural Information Processing Systems (NeurIPS)2020back to text
- 38 miscSelf-Driving like a Human driver instead of a Robocar: Personalized comfortable driving experience for autonomous vehicles.2020back to text
- 39 articleIntegrity Monitoring of Multimodal Perception System for Vehicle Localization.Sensors2020back to text
- 40 phdthesisPlanification de trajectoire dans un environnement peu contraint et fortement dynamique.Sorbonne Université2019back to text
- 41 inproceedingsRIS: A Framework for Motion Planning Among Highly Dynamic Obstacles.International Conference on Control, Automation, Robotics and Vision (ICARCV)2018back to text
- 42 inproceedingsSemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences.International Conference on Computer Vision (ICCV)2019back to textback to text
- 43 inproceedingsThis dataset does not exist: training models from generated images.International Conference on Acoustics, Speech and Signal Processing (ICASSP)2020back to textback to text
- 44 inproceedingsSeeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather.Conference on Computer Vision and Pattern Recognition (CVPR)2020back to textback to text
- 45 articleConvPoint: Continuous convolutions for point cloud processing.Computers & Graphics2020back to textback to text
- 46 inproceedingsFKAConv: Feature-Kernel Alignment for Point Cloud Convolution.Asian Conference on Computer Vision (ACCV)2020back to textback to text
- 47 articleA Taxonomy and Review of Algorithms for Modeling and Predicting Human Driver Behavior.CoRR2020back to text
- 48 articleBUDA: Boundless Unsupervised Domain Adaptation in Semantic Segmentation.arXiv preprint arXiv:2004.011302020back to text
- 49 inproceedingsPLOP: Probabilistic poLynomial Objects trajectory Planning for autonomous driving.Conference on Robot Learning (CoRL)2020back to text
- 50 miscConditional Vehicle Trajectories Prediction in CARLA Urban Environment.2019back to text
- 51 inproceedingsnuScenes: A Multimodal Dataset for Autonomous Driving.Conference on Computer Vision and Pattern Recognition (CVPR)2020back to textback to textback to text
- 52 articlePCAM: Product of Cross-Attention Matrices for Rigid Registration of Point Clouds.Submitted for publication2021back to textback to text
- 53 inproceedingsArgoverse: 3D Tracking and Forecasting with Rich Maps.Conference on Computer Vision and Pattern Recognition (CVPR)2019back to text
- 54 articleMind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation.arXiv preprint arXiv:2003.037872020back to text
- 55 articleDeepDriving: Learning Affordance for Direct Perception in Autonomous Driving.CoRR2015back to text
- 56 inproceedingsStargan v2: Diverse image synthesis for multiple domains.Conference on Computer Vision and Pattern Recognition (CVPR)2020back to text
- 57 articlePioneering Driverless Electric Vehicles in Europe: The City Automated Transport System (CATS).Transportation Research Procedia13Towards future innovative transport: visions, trends and methods, 43rd European Transport Conference Selected Proceedings2016, 30--39URL: http://www.sciencedirect.com/science/article/pii/S2352146516300047DOIback to text
- 58 inproceedingsA path planner for autonomous driving on highways using a human mimicry approach with Binary Decision Diagrams.European Control Conference (ECC)2015back to text
- 59 inproceedingsMulti-Criteria Decision Making for Autonomous Vehicles using Fuzzy Dempster-Shafer Reasoning.Intelligent Vehicles Symposium (IV)2018back to text
- 60 inproceedingsUs highway 101 dataset..Federal Highway Administration (FHWA), Tech. Rep. FHWA-HRT07-0302007back to text
- 61 inproceedingsUs highway i-80 dataset..Federal Highway Administration (FHWA), Tech. Rep. FHWA-HRT-06-1372006back to text
- 62 incollectionAddressing Failure Prediction by Learning Model Confidence.Advances in Neural Information Processing Systems (NeurIPS)Curran Associates, Inc.2019back to textback to text
- 63 inproceedingsStabilizing traffic flow via a single autonomous vehicle: possibilities and limitations.Intelligent Vehicles Symposium (IV)2017back to text
- 64 articleAsymptotics and scalings for large closed product-form networks via the Central Limit Theorem.Markov Processes and Related Fields221996, 317-348back to text
- 65 articleStability and string stability of car-following models with reaction-time delay.SIAM Journal on Applied Mathematics8252022, 1661-1679HALDOIback to text
- 66 phdthesisControl architecture for adaptive and cooperative car-following.Université Paris sciences et lettresDecember 2018HALback to text
- 67 inproceedingsUsing Fractional Calculus for Cooperative Car-Following Control.Intelligent Transportation Systems Conference 2016IEEERio de Janeiro, BrazilNovember 2016HALback to text
- 68 inproceedingsA Cooperative Advanced Driver Assistance System to mitigate vehicular traffic shock waves.INFOCOM - Conference on Computer Communications2014back to textback to text
- 69 articleEncoding the latent posterior of Bayesian Neural Networks for uncertainty quantification.arXiv preprint arXiv:2012.028182020back to text
- 70 inproceedingsTRADI: Tracking deep neural network weight distributions.European Conference on Computer Vision (ECCV)2020back to textback to text
- 71 inproceedingsSpatial and Temporal Analysis of Traffic States on Large Scale Networks.Intelligent Transportation Systems Conference (ITSC)2010back to text
- 72 inproceedingsA queueing theory approach for a multi-speed exclusion process..Traffic and Granular Flow '07Springer2009, 129-138URL: http://hal.archives-ouvertes.fr/hal-00175628/en/back to text
- 73 techreportSpatio-temporal Probabilistic Short-term Forecasting on Urban Networks.INRIA2018back to text
- 74 articleLearning Multiple Belief Propagation Fixed Points for Real Time Inference.Physica A: Statistical Mechanics and its Applications2010back to text
- 75 inproceedingsA belief propagation approach to traffic prediction using probe vehicles.Intelligent Transportation Systems Conference (ITSC)2007back to text
- 76 articleOne-dimensional Particle Processes with Acceleration/Braking Asymmetry.Journal of Statistical Physics1476June 2012, 1113-1144HALDOIback to text
- 77 inproceedingsThe Fundamental Diagram on the Ring Geometry for Particle Processes with Acceleration/Braking Asymmetry.TGF'11 - Traffic and Granular FlowMoscowDecember 2011, URL: http://hal.inria.fr/hal-00646988back to text
- 78 phdthesisTwo-staged local trajectory planning based on optimal pre-planned curves interpolation for human-like driving in urban areas.Université Paris sciences et lettres2018back to text
- 79 articleA Two-Stage Real-Time Path Planning : Application to the Overtaking Manuever.IEEE AccessJuly 2020HALDOIback to text
- 80 inproceedingsImplementable Ethics for Autonomous Vehicles.Autonomes Fahren: Technische, rechtliche und gesellschaftliche AspekteBerlin, HeidelbergSpringer Berlin Heidelberg2015, 87--102URL: https://doi.org/10.1007/978-3-662-45854-9_5DOIback to text
- 81 inproceedingsOn a weaker notion of ring stability for mixed traffic with human-driven and autonomous vehicles.Conference on Decision and Control (CDC)2019back to text
- 82 inproceedings3D Semantic Segmentation with Submanifold Sparse Convolutional Networks.Conference on Computer Vision and Pattern Recognition (CVPR)2018back to text
- 83 inproceedingsOn-Road Motion Planning for Autonomous Vehicles.Intelligent Robotics and Applications - International Conference, ICIRALecture Notes in Computer ScienceSpringer2012back to text
- 84 inproceedingsPlatoon Route Optimization for Picking up Automated Vehicles in an Urban Network.21st IEEE International Conference on Intelligent Transportation Systems2018 IEEE 21th International Conference on Intelligent Transportation Systems (ITSC)Maui, United StatesNovember 2018HALback to text
- 85 articleA game theory-based route planning approach for automated vehicle collection.Concurrency and Computation: Practice and ExperienceFebruary 2021HALDOIback to text
- 86 articleLocalization Integrity for Intelligent Vehicles Through Fault Detection and Position Error Characterization.Transactions on Intelligent Transportation Systems (T-ITS)2020back to text
- 87 articleAttgan: Facial attribute editing by only changing what you want.Transactions on Image Processing2019back to text
- 88 inproceedingsFailure prediction for autonomous driving.Intelligent Vehicles Symposium (IV)2018back to text
- 89 inproceedingsOn Ambiguities in Feature-Based Vehicle Localization and their A Priori Detection in Maps.Intelligent Vehicles Symposium (IV)2019back to text
- 90 articleRobust Tensor Recovery with Fiber Outliers for Traffic Events.Trans. Knowl. Discov. Data2021back to text
- 91 miscSAE Standards: J3016 automated-driving graphic..back to text
- 92 articleComputer vision for autonomous vehicles: Problems, datasets and state of the art.Foundations and Trends in Computer Graphics and Vision2020back to text
- 93 inproceedingsSparse and dense data with CNNs: Depth completion and semantic segmentation.International Conference on 3D Vision (3DV)2018back to text
- 94 inproceedingsxmuda: Cross-modal unsupervised domain adaptation for 3D semantic segmentation.Conference on Computer Vision and Pattern Recognition (CVPR)2020back to textback to textback to text
- 95 article3d gaussian splatting for real-time radiance field rendering..ACM Trans. Graph.4242023, 139--1back to text
- 96 articleSupervised contrastive learning.arXiv preprint arXiv:2004.113622020back to text
- 97 inproceedingsThe highD Dataset: A Drone Dataset of Naturalistic Vehicle Trajectories on German Highways for Validation of Highly Automated Driving Systems.Intelligent Transportation Systems Conference (ITSC)2018back to text
- 98 inproceedingsMotion planning for urban driving using RRT.International Conference on Intelligent Robots and Systems (IROS)2008back to text
- 99 miscMoral Machine - Human Perspectives on Machine Ethics.back to text
- 100 inproceedingsSurface reconstruction from 3D line segments.International Conference on 3D Vision (3DV)2019back to text
- 101 inproceedingsGlobal On-line Optimization for Charging Station Allocation.Intelligent Transportation Systems Conference, ITSC 2015IEEE2015back to text
- 102 inproceedingsToward Efficient Simulation Platform for Platoon Communication in Large Scale C-ITS Scenarios.IEEE International Symposium on Networks, Computers and CommunicationsRoma, ItalyJune 2018HALback to text
- 103 inproceedingsAutomotive localization integrity using proprioceptive and pseudo-ranges measurements.Accurate Localization for Land TransportationLes Collections de l'INRETS2009back to text
- 104 inproceedingsVehicle Localization Integrity Based on Trajectory Monitoring.Intelligent Robots and Systems (IROS)2009back to text
- 105 inproceedingsSemantic Palette: Guiding Scene Generation with Class Proportions.Conference on Computer Vision and Pattern Recognition (CVPR)2021back to text
- 106 inproceedingsPseudo-label: The simple and efficient semi-supervised learning method for deep neural networks.Workshop on challenges in representation learning (ICML)2013back to text
- 107 articleA survey on motion prediction and risk assessment for intelligent vehicles. ROBOMECH Journal112014, URL: http://www.robomechjournal.com/content/1/1/1DOIback to text
- 108 phdthesisSafe and Efficient Reinforcement Learning for Behavioural Planning in Autonomous Driving.Université de Lille2020back to text
- 109 articlePointcnn: Convolution on x-transformed points.Advances in Neural Information Processing Systems (NeurIPS)2018back to text
- 110 inproceedingsAn Explicit Decision Tree Approach for Automated Driving.ASME 2017 Dynamic Systems and Control ConferenceDynamic Systems and Control Conference2017back to text
- 111 articleGame-Theoretic Modeling of Driver and Vehicle Interactions for Verification and Validation of Autonomous Vehicle Control Systems.CoRR2016back to text
- 112 articleVisual Measurement Integrity Monitoring for UAV Localization.CoRR2019back to text
- 113 articlePlanning Long Dynamically Feasible Maneuvers for Autonomous Vehicles.International Journal of Robotics Research2009back to text
- 114 inproceedingsOpen compound domain adaptation.Conference on Computer Vision and Pattern Recognition (CVPR)2020back to text
- 115 phdthesisYoula-Kucera based multi-objective controllers : Application to autonomous vehicles.Université Paris sciences et lettresDecember 2020HALback to text
- 116 inproceedingsPredictive uncertainty estimation via prior networks.Advances in Neural Information Processing Systems (NeurIPS)2018back to text
- 117 inproceedingsGMRF Estimation under Topological and Spectral Constraints.ECML PKDD Proceedings,Part II2014back to text
- 118 articleLatent binary MRF for online reconstruction of large scale systems.Annals of Mathematics and Artificial Intelligence2015back to text
- 119 phdthesisDeep Learning based Trajectory Prediction for Autonomous Vehicles.Sorbonne UniversitéJune 2021back to text
- 120 articleMulti-Head Attention with Joint Agent-Map Representation for Trajectory Prediction in Autonomous Driving.CoRR2020back to text
- 121 inproceedingsRelational Recurrent Neural Networks For Vehicle Trajectory Prediction.Intelligent Transportation Systems Conference (ITSC)IEEE2019back to text
- 122 articleGenerative Zero-Shot Learning for Classification and Semantic Segmentation of 3D Point Clouds.Submitted for publication2021back to text
- 123 inproceedingsEthical decision making for autonomous vehicles.IEEE Symposium on Intelligent VehicleLas Vegas (virtual), United StatesOctober 2020HALback to text
- 126 inproceedingsRule-Based Highway Maneuver Intention Recognition.International Conference on Intelligent Transportation Systems (ITSC)2015back to text
- 127 inproceedingsCan you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift.Advances in Neural Information Processing Systems (NeurIPS)2019back to text
- 128 articleModeling and Nonlinear Adaptive Control for Autonomous Vehicle Overtaking.IEEE Transactions on Intelligent Transportation Systems154August 2014, 14HALDOIback to text
- 129 inproceedingsSaturated Feedback Control for an Automated Parallel Parking Assist System.13th International Conference on Control, Automation, Robotics and Vision (ICARCV'14)Singapore, SingaporeDecember 2014HALback to text
- 130 articleCanadian adverse driving conditions dataset.International Journal of Robotics Research2020back to text
- 131 inproceedingsDomain bridge for unpaired image-to-image translation and unsupervised domain adaptation.Winter Conference on Applications of Computer Vision (WACV)2020back to textback to textback to text
- 132 articleManiFest: Manifold Deformation for Few-shot Image Translation.arXiv2021back to text
- 133 inproceedingsFLOT: Scene Flow on Point Clouds Guided by Optimal Transport.European Conference on Computer Vision (ECCV)2020back to textback to text
- 134 articleStyleLess layer: Improving robustness for real-world driving.arXiv preprint arXiv:2103.139052021back to textback to text
- 135 phdthesis3D Scene Reconstruction and Completion for Autonomous Driving.Sorbonne UniversitéJuly 2021back to text
- 136 inproceedingsLMSCNet: Lightweight Multiscale 3D Semantic Completion.International Conference on 3D Vision (3DV)2020back to textback to text
- 137 articleHuman motion trajectory prediction: a survey.The International Journal of Robotics Research2020back to text
- 138 articleACDC: The Adverse Conditions Dataset with Correspondences for Semantic Driving Scene Understanding.arXiv preprint arXiv:2104.133952021back to text
- 139 inproceedingsExactly Solvable Stochastic Processes for Traffic Modelling.25th International Symposium on Computer and Information Sciences - ISCIS 2010Erol Gelenbe, Ricardo Lent, Georgia SakellariLondres, Royaume-UniSeptember 2010, 75-78URL: http://hal.inria.fr/inria-00533154back to text
- 140 inproceedingsSemantic scene completion from a single depth image.Conference on Computer Vision and Pattern Recognition (CVPR)2017back to text
- 141 inproceedingsA Collaborative Framework for High-Definition Mapping.2019 IEEE Intelligent Transportation Systems Conference (ITSC)2019, 1845-1850DOIback to text
- 142 inproceedingsScalability in perception for autonomous driving: Waymo open dataset.Conference on Computer Vision and Pattern Recognition (CVPR)2020back to textback to text
- 143 inproceedingsKpconv: Flexible and deformable convolution for point clouds.International Conference on Computer Vision (ICCV)2019back to text
- 144 inproceedingsUnsupervised Domain Adaptation in Semantic Segmentation via Orthogonal and Clustered Embeddings.Winter Conference on Applications of Computer Vision (WACV)2021back to text
- 145 inproceedingsAdvent: Adversarial entropy minimization for domain adaptation in semantic segmentation.Conference on Computer Vision and Pattern Recognition (CVPR)2019back to text
- 146 inproceedingsDADA: Depth-Aware Domain Adaptation in Semantic Segmentation.International Conference on Computer Vision (ICCV)2019back to text
- 147 articleA Reinforcement Learning Based Approach for Automated Lane Change Maneuvers.CoRR2018back to text
- 148 inproceedingsDeep network interpolation for continuous imagery effect transition.Conference on Computer Vision and Pattern Recognition (CVPR)2019back to text
- 149 miscAssistance á la conduite d'un véhicule automobile.May 2020back to text
- 150 inproceedingsStabilizing Traffic with Autonomous Vehicles.International Conference on Robotics and Automation (ICRA)2018back to text
- 151 inproceedingsFew-shot object detection and viewpoint estimation for objects in the wild.European Conference on Computer Vision (ECCV)2020back to text
- 152 inproceedingsPose from shape: Deep pose estimation for arbitrary 3D objects.British Machine Vision Conference (BMVC)2019back to text
- 153 inproceedingsWoodscape: A multi-task, multi-camera fisheye dataset for autonomous driving.International Conference on Computer Vision (ICCV)2019back to text
- 154 inproceedingsUniversal domain adaptation.Conference on Computer Vision and Pattern Recognition (CVPR)2019back to text
- 155 inproceedingsBdd100k: A diverse driving video database with scalable annotation tooling.Conference on Computer Vision and Pattern Recognition (CVPR)2020back to text
- 156 articleINTERACTION Dataset: An INTERnational, Adversarial and Cooperative moTION Dataset in Interactive Driving Scenarios with Semantic Maps.arXiv:1910.03088 [cs, eess]2019back to text
- 157 articleTNT: Target-driveN Trajectory Prediction.CoRR2020back to text
- 158 articleMaking Bertha Drive—An Autonomous Journey on a Historic Route.Intelligent Transportation Systems Magazine (ITS Magazine)2014back to text