Most software-driven systems we use in our daily life are huge hierarchical assemblies of components. This observation holds from the micro-scale (multi-core chips) to the macro-scale (data centers), and from hardware systems (telecommunication networks) to software systems (choreographies of web services). The main characteristics of these pervasive applications are size, complexity, heterogeneity, and modularity (or concurrency). Moreover, several such systems are put into active use before they are fully mastered, or have grown so much that they raise new problems hardly manageable by human operators. As these systems and applications become more essential, or even critical, their reliability, efficiency and manageability become a central concern in computer science. The main objective of SUMO is to develop theoretical tools to address such challenges, along the following axes.

Several disciplines in computer science have of course addressed some of the issues raised by large systems: formal methods (essentially for verification purposes), discrete-event systems (diagnosis, control, planning, and their distributed versions), but also concurrency theory (modelling and analysis of large concurrent systems). Practical needs have oriented these methods towards the introduction of quantitative aspects, such as time, probabilities, costs, and their combinations. This shift drastically changes the nature of the questions raised. For example, verification questions become the reachability of a state within a time bound, the average sojourn time in a state, the probability that a run of the system satisfies some property, or the existence of control strategies with a given winning probability. In this setting, exact computations are not always appropriate, as they may incur unaffordable complexity or even run into undecidability. Approximation strategies then offer a promising way around these obstacles, and are certainly also a key to handling large systems. Approaches based on discrete-event systems follow the same trend towards quantitative models. For diagnosis, one is interested in the most likely explanations for observed malfunctions, in the identification of the most informative tests to perform, or in the optimal placement of sensors. For control problems, one is of course interested in optimal control, in minimizing communications, in the robustness of the proposed controllers, in the online optimization of QoS (Quality of Service) indicators, etc.
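
To make the flavor of these quantitative questions concrete, here is a minimal sketch (our own toy illustration; the four-state chain is made up) of the most basic of them: the probability that a run of a finite Markov chain eventually reaches a target state, computed by solving a linear system over the transient states.

```python
# Minimal sketch: probability of eventually reaching a target state in a
# finite Markov chain, obtained by solving x = A x + b over transient states.
# The chain below is a made-up example, not a model from our work.
import numpy as np

P = np.array([
    [0.0, 0.5, 0.3, 0.2],
    [0.4, 0.0, 0.1, 0.5],
    [0.0, 0.0, 1.0, 0.0],   # state 2: target (absorbing)
    [0.0, 0.0, 0.0, 1.0],   # state 3: sink (absorbing, misses the target)
])
transient, target = [0, 1], 2

# x_s = sum_t P[s][t] * x_t  with  x_target = 1 and x_sink = 0
A = P[np.ix_(transient, transient)]
b = P[np.ix_(transient, [target])].ravel()
x = np.linalg.solve(np.eye(len(transient)) - A, b)
print(dict(zip(transient, x)))  # {0: 0.4375, 1: 0.275}
```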

While the above questions have already received partial answers, they remain largely unexplored in a distributed setting. We focus on structured systems, typically networks of dynamic systems with a known interaction topology, the latter being either static or dynamic; interactions can be synchronous or asynchronous. The state-space explosion raised by such systems has been addressed through two techniques. The first adopts true-concurrency models, which take advantage of parallelism to reduce the size of trajectory sets. The second looks for modular or distributed "supervision" methods, taking the shape of a network of local supervisors, one per component. While these approaches are relatively well understood, combining them with quantitative models remains a challenge (for example, there exists no proper setting assembling concurrency theory with stochastic systems). This field is largely open, both for modeling, analysis and verification purposes, and for distributed supervision techniques. The difficulties combine with the emergence of data-driven distributed systems (such as web services or data-centric systems), where the data exchanged by the various components influence both the behaviors of these components and the quantitative aspects of their reactions (e.g. QoS). Such systems call for symbolic or parametric approaches, for which a theory is still missing.

Some existing distributed systems, like telecommunication networks, data centers, or large-scale web applications, have reached sizes and complexities that reveal new management problems. One can no longer assume that the model of the managed system is static and fully known at any time and any scale. To scale management methods up to such applications, one must be able to design reliable abstractions of parts of the system, or to build parts of its model dynamically, according to the needs of the management functions to be realized. Moreover, one does not wish to define management objectives at the scale of each single component, but rather to pilot these systems through high-level policies (maximizing throughput, minimizing energy consumption, etc.). These distributed systems and management problems have connections with other approaches for the management of large structured stochastic systems, such as Bayesian networks (BN) and their variants. The similarity can actually be made formal: inference techniques for BNs rely on the concept of conditional independence, which has a counterpart for networks of dynamic systems and is at the core of techniques like distributed diagnosis, distributed optimal planning, or the synthesis of distributed controllers. The potential of this connection is largely unexplored, but it suggests that one could derive from it good approximate management methods for large distributed dynamic systems.

Since its creation in 2015, SUMO has successfully developed formal methods for large quantitative systems, in particular addressing verification, synthesis and control problems. Our current motivation is to expand this line of work by putting emphasis on new concerns, such as algorithmic efficiency, imprecision handling, and the more challenging objective of addressing incomplete or missing models. In the following we list a selection of detailed research goals, structured into four axes according to model classes: quantitative models, large systems, population models, and data-driven models. Some pursue previously obtained results; others are more prospective.

The analysis and control of quantitative models will remain at the heart of a large part of our
research activities. In particular, we have two starting
collaborative projects focusing on timed models,
namely our ANR project TickTac and our collaboration with MERCE. The
main expected outcome of TickTac is an open-source tool implementing
the latest algorithms and allowing for quick prototyping of new
algorithms. Several other topics will be explored in these collaborations, including robustness issues and game-theoretic problems, as well as the development of efficient algorithms, e.g. based on the CEGAR approach or specifically designed for subclasses of automata (e.g. automata with few clocks and/or a specific structure, as in 27).
Inspired by our collaboration with Alstom, we also aim at
developing symbolic techniques for analysing non-linear timed models.

Stochastic models are another important focus of our research. On the one hand, we want to pursue our work on the optimization of non-standard properties for Markov decision processes, beyond the traditional verification questions, exploring e.g. long-run probabilities and quantiles. We also aim at lifting our work on decisiveness from purely stochastic models 25, 26 to models combining non-determinism and stochasticity, in order to provide approximation schemes for the probability of (repeated) reachability properties in infinite-state Markov decision processes.
On the other hand, in order to effectively handle large stochastic systems, we will pursue our work on approximation techniques. We aim at deriving simpler models, enjoying or preserving specific properties, and at determining the appropriate level of abstraction for a given system. One of course needs to quantify the degree of approximation (distances) and to preserve essential features of the original system (explainability). This is a connection point between formal methods and the booming field of machine learning.

Regarding diagnosis and opacity issues, we will further explore quantitative aspects. For diagnosis, the theory needs extensions to the case of incomplete or erroneous models, and to reconfigurable systems, in order to broaden its applicability (see Sec. 3.6). There is also a need for non-binary causality analysis (e.g. of performance degradations in complex systems). For opacity, we aim at quantifying the effort attackers must invest versus how much of a secret they can guess. We also plan to synthesize robust controllers resisting sensor failures and attacks.

Part of the background of SUMO lies in the analysis and management of concurrent and modular/distributed systems, which we view as the two main approaches to address state-explosion problems. We will pursue the study of these models (including their quantitative features): verification of timed concurrent systems, robust distributed control of modular systems, control resilient to coalitions of attackers, distributed diagnosis, modular opacity analysis, distributed optimal planning, etc. Nevertheless, we have identified two new lines of effort, inspired by our application domains.

Reconfigurable systems. This line is mostly motivated by applications at the convergence of virtualization technologies and networking (Orange and Nokia PhDs). Software-defined networks, either in the core (Network Function Virtualization, SDN/NFV) or at the edge (Internet of Things, IoT), involve distributed systems that constantly change structure to adapt to traffic, failures, maintenance, upgrades, etc. Traditional verification, control and diagnosis approaches (to mention only those) assume static and known models that can be handled as a whole. This is clearly insufficient here: one needs to adapt existing results to models that (sometimes automatically) change structure, incorporate new components/users or lose some, etc. At the same time, the programming paradigms for such systems (e.g. chaos monkey) incorporate resilience mechanisms that should be reflected in our models.

Hierarchical systems. Our experience with the regulation of subway lines (Alstom) revealed that large-scale complex systems are usually described at a single level of granularity, and determining the appropriate granularity is a problem in itself. The control of such systems, with humans in the loop, cannot be expressed at this single level, as tasks become too complex and require extremely skilled staff. It is rather desirable to describe models simultaneously at different levels of granularity, and to perform control at the appropriate level: humans in charge of managing the system through high-level objectives, and computers in charge of implementing the appropriate micro-control sequences to achieve these tasks.

We want to step up our effort in the parameterized verification of systems consisting of many identical components, so-called population models. In a nutshell, our objectives can be summarized as "from Boolean to quantitative".

Inspired by our experience with the analysis of yeast populations, we aim at developing the quantitative analysis and control of population models, e.g. using Markov decision processes together with quantitative properties, and focusing on generating strategies with fast convergence.

As for broadcast networks, the challenge is to model the mobility of nodes (representing mobile ad hoc networks) in a faithful way. The resulting model should reflect, on the one hand, the placement of nodes at a given time instant and, on the other hand, the physical movement of nodes over time. In this context, we will also use game-theoretic techniques, which allow one to study cooperative and conflicting behaviors of the nodes in the network, and to synthesize correct-by-design systems in adversarial environments.

As a new application area, we target randomized distributed algorithms. Our goal is to provide probabilistic variants of threshold automata 28 to represent fault-tolerant randomized distributed algorithms, designed for instance to solve the consensus problem. Most importantly, we then aim at developing new parameterized verification techniques enabling the automated verification of the correctness of such algorithms, as well as the assessment of their performance (in particular the expected time to termination).

In this axis, we will investigate whether fluid model checking and mean-field approximation techniques apply to our problems. More generally, we aim at a fruitful cross-fertilization of these approaches with parameterized model-checking algorithms.

In this axis, we will consider data-centric models, and in particular their application to crowd-sourcing. Many data-centric models, such as Business Artifacts 29, orchestrate simple calls and answers to tasks performed by a single user. In a crowd-sourcing context, tasks are carried out by pools of users, which may result in imprecise, uncertain and (partially) incompatible information. We thus need mechanisms to reconcile and fuse the various contributions in order to produce reliable information. Another aspect to consider is higher-order answers: allowing users to return intensional answers, in the form of a sub-workflow (a coordinated set of tasks) whose execution will provide the intended value. In the framework of the ANR project Headwork, we will build on formalisms such as GAGs (guarded attribute grammars) or variants of business artifacts to propose formalisms adapted to crowd-sourcing applications, together with tools to analyze them. To address imprecision, we will study techniques to handle fuzziness in user answers, explore means to set incentives (rewards) dynamically, and define competence requirements to guide the execution of a complex workflow, so as to achieve an objective with a desired level of quality.

In collaboration with Open Agora, CESPA and the University of Yaoundé (Cameroon), we intend to implement in the GAG formalism some elements of argumentation theory (argumentation schemes, speech acts and dialogic games), in order to build a tool for conducting critical discussions and collaboratively constructing expertise. The tool would incorporate point-of-view extraction (using clustering mechanisms), amendment management, and consensus-building mechanisms.

One important lesson derives from our involvement in several application domains: most of our expertise comes into play as soon as a perfect model of the system under study is available. Then verification, control, diagnosis, test, etc. can mobilize a solid background, or suggest new algorithmic problems to address. In numerous situations, however, assuming that a model is available is simply unrealistic. This is a major bottleneck for the impact of our research. We therefore intend to address this difficulty, in particular for the following domains.

The smart-city trend aims at optimizing all functions of future cities with the help of digital technologies. We focus on the segment of urban trains, which will evolve from static, scheduled offers to reactive and eventually on-demand transportation offers. We address two challenges in this field. The first concerns the optimal design of robust subway lines. The idea is to be able to evaluate, at design time, the performance of timetables and of different regulation policies. In particular, we focus on robustness issues: how can small perturbations and incidents be accommodated by the system, how fast does it return to normal operation, and when does it become unstable? The second challenge concerns the design of new robust regulation strategies to optimize delays, recovery times, and energy consumption at the scale of a full subway line. These problems involve large-scale discrete-event systems with temporal and stochastic features, and translate into robustness assessment, stability analysis and joint numerical/combinatorial optimization problems on the trajectories of these systems.

Telecommunication-network management is a rich source of research topics for the team, and some members of SUMO have a long history of industrial contacts and transfer in this domain. Networks are typical examples of large distributed dynamic systems, and their management raises numerous problems, ranging from diagnosis (or root-cause analysis) to optimization, reconfiguration, provisioning, planning, and verification. They also bring new challenges to the community, for example on the modeling side: building or learning a network model is a complex task, in particular because such models should reflect features like layering, the multi-resolution view of components, and the description of functions, protocols and configurations, and because they should also capture dynamically-changing architectures. Beyond modeling, management algorithms are also challenged by the size of these systems and by the need to work on abstractions, partial models, open systems, etc. Networking technology is now evolving towards software-defined networks, virtualized network functions, multi-tenant systems, etc., which reinforces the need for more automation in the management of such systems.

Data centers are another example of large-scale, modular, dynamic and reconfigurable systems: they are composed of thousands of servers, on which virtual machines are activated, migrated, resized, etc. Their management covers issues like troubleshooting, reconfiguration, and optimal control, in a setting where failures are frequent and must be mitigated by the management plane. We have a solid background in the coordination of the various autonomic managers that supervise the different functions/layers of such systems (hardware, middleware, web services, etc.). Virtualization technologies now reach the domain of networking, and telecommunication operators/vendors are evolving into providers of distributed open clouds. This convergence of IT and networking strongly calls for new management paradigms, which is an opportunity for the team.

A current trend is to involve end-users in the collection and analysis of data. Examples of this trend are contributive science, crisis-management systems, and crowd-sourcing applications. All these applications are data-centric and user-driven. They are often distributed and involve complex, sometimes dynamic, workflows. In many cases, there are strong interactions between data and control flows: decisions regarding the next tasks to be launched highly depend on collected data. For instance, in an epidemic-surveillance system, the aggregation of various reported disease cases may trigger alerts. Another example is crowd-sourcing applications, where human skills are used to complete tasks that are better performed by humans than computers. In turn, this requires dealing with imprecise and sometimes unreliable answers. We address several issues related to complex workflows and data, studying declarative and dynamic models that can handle workflows, data, uncertainty, and competence management.

Once these models are mature enough, we plan to build prototypes to experiment with them on real use cases from contributive science, health-management systems, and crowd-sourcing applications. We also plan to define abstraction schemes allowing formal reasoning on these systems.

The SUMO team participated in the Extended Stay Support Scheme (ESSS) set up by TCS4F (Theoretical Computer Scientists for Future) during the international event that took place in Kassel in summer 2022. The ESSS aimed at taking advantage of the presence of researchers from distant regions (Asia, America, etc.) at these conferences to invite them for a longer stay in France or Europe. More generally, some members of the team individually take part in the TCS4F initiatives.

Since the COVID pandemic, the carbon footprint of travel related to our research activities has decreased considerably. Some members of the team no longer fly to attend conferences, and carefully choose the conferences to which they submit, taking into account not only their reputation but also their location and the possibility of attending online.

Uli Fahrenberg received the Petri Nets 2023 Outstanding Paper Award.

In 16 we present algorithms for model checking and controller synthesis of timed automata, viewing a timed automaton model as the parallel composition of a large finite-state machine and a comparatively small timed automaton, and using compositional reasoning on this composition. We use automata-learning algorithms to learn finite-automata approximations of the timed-automaton component, in order to reduce the problem at hand to finite-state model checking or finite-state controller synthesis. We present an experimental evaluation of our approach.

Runtime Enforcement (RE) is a monitoring technique aimed at correcting possibly incorrect executions of a system with respect to a set of formal requirements (properties). Here we consider the enforcement monitoring of real-time properties: executions are modelled as timed words and specifications as timed automata. Moreover, the enforcer has the ability to delay events by storing or buffering them in its internal memory (and releasing them when the property is finally satisfied), and to suppress events when no delaying is appropriate. In practice, the internal memory of an implemented enforcer is finite. In 17, we propose a new RE paradigm for timed properties where the memory of the enforcer is bounded, to address practical applications with memory constraints and timed specifications. Bounding the memory raises a number of difficulties, e.g., how to accommodate a timed event when the memory is full, such that, whatever the course of action chosen in this situation, the behaviour of the bounded enforcer does not significantly differ from that of the unbounded one. The problem of optimally discarding events when the buffer is full is significantly more difficult in a timed setting, where the progress of time affects the satisfaction or violation of a property. We define the bounded-memory RE problem for timed properties and develop a framework for regular timed properties specified as timed automata. The proposed framework is implemented in Python and its performance is evaluated; experiments show that the enforcer has a reasonable execution-time overhead.
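
To convey the store-and-release intuition only, the following is a deliberately simplified, untimed sketch of our own (not the framework of 17, which operates on timed words, where the real difficulty lies): pending events are buffered until the buffered word drives a property automaton to an accepting state, and the oldest pending event is dropped when the buffer overflows.

```python
# Simplified, untimed sketch of store-and-release enforcement with a bounded
# buffer. The property automaton and event names below are made up.
from collections import deque

class BoundedEnforcer:
    def __init__(self, delta, accepting, init, capacity):
        self.delta = delta            # dict: (state, event) -> state
        self.accepting = accepting    # set of accepting states
        self.state = init             # state reached by the emitted output
        self.buffer = deque(maxlen=capacity)  # oldest event dropped when full

    def feed(self, event):
        self.buffer.append(event)
        s = self.state                # does the whole buffer reach acceptance?
        for e in self.buffer:
            s = self.delta.get((s, e))
            if s is None:
                return []             # buffered word is stuck; keep waiting
        if s in self.accepting:
            out = list(self.buffer)   # release the stored events
            self.buffer.clear()
            self.state = s
            return out
        return []

# Property: every 'req' must be followed by 'ack' (emit only complete pairs).
delta = {(0, 'req'): 1, (1, 'ack'): 0}
enf = BoundedEnforcer(delta, accepting={0}, init=0, capacity=4)
for ev in ['req', 'ack', 'req']:
    print(ev, '->', enf.feed(ev))    # [], ['req', 'ack'], []
```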

The patent 23 describes a method and system for correcting the operation of a target computer system by using timed requirements. In particular, we show how to modify real-time requirements in order to render them consistent.

In 14, we show how to efficiently solve energy Büchi problems in finite weighted automata and in one-clock weighted timed automata. Solving the former problem is our main contribution and is handled by a modified version of Bellman-Ford interleaved with Couvreur's algorithm. The latter problem is handled via a reduction to the former relying on the corner-point abstraction. All our algorithms are freely available and implemented in a tool based on the open-source platforms TChecker and Spot.
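
As a taste of the kind of relaxation involved, the sketch below (a toy illustration of our own, under an assumed integer energy cap, not the algorithm of 14) computes the maximal energy with which each state of a small weighted automaton can be reached; 14 interleaves such Bellman-Ford-style relaxations with Couvreur's algorithm to handle the Büchi condition.

```python
# Toy Bellman-Ford-style relaxation: maximal energy with which each state of
# a finite weighted automaton is reachable, assuming an integer energy cap.
def max_energy(states, edges, init, credit, cap):
    # edges: list of (source, weight, target); energy must stay within [0, cap]
    energy = {s: None for s in states}      # None: unreachable with energy >= 0
    energy[init] = min(credit, cap)
    for _ in range(len(states) * (cap + 1)):  # enough rounds for the fixpoint
        changed = False
        for u, w, v in edges:
            if energy[u] is None:
                continue
            e = min(energy[u] + w, cap)     # weak upper bound: excess is lost
            if e >= 0 and (energy[v] is None or e > energy[v]):
                energy[v] = e
                changed = True
        if not changed:
            break
    return energy

edges = [('a', -2, 'b'), ('b', +3, 'a'), ('b', -4, 'c')]
print(max_energy(['a', 'b', 'c'], edges, init='a', credit=3, cap=5))
# {'a': 5, 'b': 3, 'c': None}: 'c' is unreachable, since leaving 'b' costs 4
# but at most 3 energy units can be accumulated in 'b' under the cap.
```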

In time Petri nets (TPNs), time and control are tightly connected: time measurement for a transition starts only when all resources needed to fire it are available. Further, upper bounds on the duration of enabledness can force transitions to fire (this is called urgency). For many systems, one wants to decouple control and time, i.e. start measuring time as soon as part of the preset of a transition is filled, and fire it after some delay, once all needed resources are available. Our paper 10 considers an extension of TPNs called waiting nets, which dissociate time measurement from control. Their semantics allows time measurement to start with incomplete presets, and can ignore urgency when upper bounds of intervals are reached but the resources needed to fire are not yet all available; firing of a transition is then allowed as soon as the missing resources become available. It is known that extending bounded TPNs with stopwatches leads to undecidability. Our extension is weaker, and we show how to compute a finite state-class graph for bounded waiting nets, yielding decidability of reachability and coverability. We then compare the expressiveness of waiting nets with that of other models w.r.t. timed-language equivalence, and show that they are strictly more expressive than TPNs. In 22 we extend time Petri nets with continuous variables, allowing for the description of objects with linear trajectories. The model is less expressive than hybrid systems, and its continuous part (time and variables) can be abstracted to a finite set of domains. For a subclass that is expressive enough to model the dynamics of metro networks, the verification of quantitative properties is decidable.

Temporal logics are extensively used for the specification of ongoing behaviors of computer systems. Two significant developments in this area are the extension of traditional temporal logics with modalities that enable the specification of ongoing strategic behaviors in multi-agent systems, and the transition of temporal logics to a quantitative setting, where different satisfaction values enable the specifier to formalize concepts such as certainty or quality. In the first class, SL (Strategy Logic) is one of the most natural and expressive logics describing strategic behaviors. In the second class, a notable logic is LTL[F], which extends LTL with quality operators. Our work combines these two developments, studying a quantitative extension of SL for reasoning about the quality of strategic behaviours.

The Connected Multi-Agent Path Finding (CMAPF) problem asks for a plan to move a group of agents in a graph while respecting a connectivity constraint. In 11, we study a generalization of CMAPF in which the graph is not entirely known in advance, but is discovered by the agents during their mission. We present a framework introducing this notion and study the problem of searching for a strategy to reach a configuration in this setting. We prove the problem to be PSPACE-complete when requiring all agents to be connected at all times, and NEXPTIME-hard in the decentralized case.
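
For illustration, the following sketch (our own; the communication graph is made up) checks the connectivity constraint for a single configuration, i.e. that the agents' positions induce a connected subgraph of the communication graph; the difficulty addressed in 11 is of course the search for a strategy over such configurations when the graph is only discovered online.

```python
# Checking the CMAPF connectivity constraint for one configuration:
# the occupied vertices must be connected in the communication graph.
from collections import deque

def connected(positions, comm):
    # positions: occupied vertices; comm: dict vertex -> set of neighbours
    positions = set(positions)
    if not positions:
        return True
    start = next(iter(positions))
    seen, todo = {start}, deque([start])
    while todo:                      # BFS restricted to occupied vertices
        u = todo.popleft()
        for v in comm.get(u, ()):
            if v in positions and v not in seen:
                seen.add(v)
                todo.append(v)
    return seen == positions

comm = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # a line 1-2-3-4
print(connected({1, 2, 3}, comm))  # True
print(connected({1, 3}, comm))     # False: agents at 1 and 3 are out of range
```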

In 13, we prove the PSPACE-hardness of the problem when the underlying graph is a subgraph of a 3D grid and connectivity is range-based. Moreover, we provide an application of the WHCA* algorithm to this connected setting.

15 introduces MOCHY, a tool designed for modeling concurrent systems with variants of stochastic, timed and hybrid Petri nets. Beyond modeling, the tool serves as a platform for fast simulation, and can be used for statistical verification of properties, controller testing, and the learning of control rules. The targeted models are variants of stochastic and timed nets where tokens can be continuous quantities depicting trajectories of moving objects. The architecture of the tool is designed to be as adaptive as possible, allowing the redefinition of object behaviors or transition firings through the refinement of a few semantic rules. The framework also allows for the integration of controllers. For any model variant, MOCHY can perform fast simulation and statistical verification, evaluate quantitative properties of a model, or learn control rules for reachability or quantitative objectives.

Distributed algorithms typically run over arbitrarily many processes and may involve unboundedly many rounds, making the automated verification of their correctness challenging. In the following papers, we developed parameterized verification techniques to assess the correctness of particular types of distributed algorithms.

In 18 we present the tool PyLTA, which can model check parameterized distributed algorithms against LTL specifications. The parameters typically include the number of processes and a bound on faulty processes, and the considered algorithms are round-based and either synchronous or asynchronous.

In 19 we consider the verification of distributed systems composed of an arbitrary number of asynchronous processes. Processes are identical finite-state machines that communicate by reading from and writing to a shared memory. Beyond the standard model with finitely many registers, we tackle round-based shared-memory systems with fresh registers at each round. In the latter model, both the number of processes and the number of registers are unbounded, making verification particularly challenging. The properties studied are generic presence reachability objectives, which subsume classical questions such as safety or synchronization by expressing the presence or absence of processes in some states. In the more general round-based setting, we establish that the parameterized verification of presence reachability properties is PSPACE-complete. Moreover, for the roundless model with finitely many registers, we prove that the complexity drops to NP-completeness, and we provide several natural restrictions that make the problem solvable in polynomial time.

Network congestion games are a simple model for reasoning about routing problems in a network. They are a very popular topic in algorithmic game theory, and a wealth of results about the existence and (in)efficiency of equilibrium strategy profiles in those games has been obtained over the last 20 years. In particular, the price of anarchy has been introduced as an important notion for measuring the inefficiency of Nash equilibria. Generic bounds on the price of anarchy have been obtained for various classes of networks, but little attention has been paid to the effective computation of this value for a given network. The invited talk 12 presents recent results on this problem in different settings.
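
The classical Pigou network (a textbook example, recalled here for illustration, not a result of the talk) already exhibits the phenomenon: one unit of traffic goes from s to t over two parallel links, one with constant latency 1, the other with latency equal to its load x. At equilibrium all traffic takes the second link, while the social optimum splits the traffic, yielding a price of anarchy of 4/3.

```python
# Pigou's example: one unit of traffic over two parallel links, with latency 1
# on the first link and latency x (its own load) on the second.
def social_cost(x):                 # x = fraction of traffic on the x-link
    return x * x + (1 - x) * 1.0    # x users at latency x, 1-x users at latency 1

nash_cost = social_cost(1.0)        # at equilibrium, all traffic on the x-link
opt_cost = min(social_cost(i / 1000) for i in range(1001))  # brute-force split
print(nash_cost / opt_cost)         # price of anarchy: ~4/3
```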

Synchronizing a (deterministic, finite-state) automaton is the problem of finding a sequence of actions to be played in the automaton in order to end up in the same state independently of the starting state. In 8, we consider synchronization under LTL constraints on the executions leading to it, extending the results of [Petra Wolf. Synchronization under dynamic constraints. FSTTCS'20] by showing that the problem is PSPACE-complete for LTL as well as for restricted fragments (involving only modality F or G), while it is NP-complete for constraints expressed using only modality X.
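
For background, the sketch below (our own illustration; it does not handle the LTL constraints of 8) implements the classical greedy synchronization algorithm, which repeatedly merges two states of the current set using a word found by breadth-first search, here run on Černý's three-state automaton.

```python
# Greedy synchronization: merge states two at a time until one remains.
from collections import deque

def synchronizing_word(states, alphabet, delta):
    # delta: dict (state, letter) -> state, for a complete deterministic automaton
    def merging_word(p, q):
        start = frozenset((p, q))
        parent = {start: None}          # BFS over sets reachable from the pair
        todo = deque([start])
        while todo:
            cur = todo.popleft()
            if len(cur) == 1:           # pair merged: rebuild the word
                word = []
                while parent[cur] is not None:
                    cur, letter = parent[cur]
                    word.append(letter)
                return word[::-1]
            for a in alphabet:
                nxt = frozenset(delta[(s, a)] for s in cur)
                if nxt not in parent:
                    parent[nxt] = (cur, a)
                    todo.append(nxt)
        return None                     # p and q can never be merged

    current, word = set(states), []
    while len(current) > 1:
        w = merging_word(*sorted(current)[:2])
        if w is None:
            return None                 # the automaton is not synchronizing
        word.extend(w)
        for a in w:                     # apply the merging word to the whole set
            current = {delta[(s, a)] for s in current}
    return word

# Cerny's 3-state automaton: 'a' rotates the states, 'b' maps state 0 to 1.
delta = {(s, 'a'): (s + 1) % 3 for s in range(3)}
delta.update({(0, 'b'): 1, (1, 'b'): 1, (2, 'b'): 2})
print(synchronizing_word(range(3), 'ab', delta))  # ['b', 'a', 'a', 'b']
```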

Several researchers of SUMO are involved in the joint research lab of Nokia Bell Labs France and Inria. We participate in the common research team SAPIENS (Smart Automated and Programmable Infrastructures for End-to-end Networks and Services), previously named "Softwarization of Everything", which involves several other Inria teams: Convecs, Diverse and Spades. SUMO focuses on the management of reconfigurable systems, both at the edge (IoT-based applications) and in the core (e.g. virtualized IMS systems). In particular, we studied control and diagnosis issues for such systems.

Several researchers of SUMO are involved in a collaboration on the verification of real-time systems with the "Information and Network Systems" team (INSv), led by David Mentré, of the "Communication & Information Systems" (CIS) division of MERCE Rennes. The members of the team at MERCE work on different aspects of formal verification. SUMO collaborates with Reiya Noguchi, a young engineer formerly with MERCE, on leave from an operational division of Mitsubishi in Japan, who was hosted by the SUMO team one day per week during his stay in Rennes. He returned to Japan in 2021, but we continue the collaboration with him and David Mentré at MERCE. This year we worked on the consistency of timed requirements and on fixing inconsistent ones.

SUMO takes part in I/O Lab, the common lab of Orange Labs and Inria, dedicated to the design and management of Software Defined Networks. Our activities concern the diagnosis of malfunctions in virtualized multi-tenant networks.

Loïc Hélouët is involved in a long-term collaboration with Alstom Transport, more precisely with its ATS (Automatic Train Supervision) division, with the objective of evaluating and improving the regulation policies of urban train systems. Antoine Thébault is currently pursuing a PhD thesis on this topic.

We participate in the activities of the CNRS UMI ReLaX, an Indo-French joint research unit dedicated to research in theoretical computer science, its applications and its interactions with mathematics. Within this setting, we host undergraduate students from Indian institutions such as the Chennai Mathematical Institute and IIT Bombay as visitors or interns.

We collaborate with Srinivas Pinisetty from Indian Institute of Technology Bhubaneswar on timed enforcement.

The aim of TickTac is to develop novel algorithms for the verification and synthesis of real-time systems using the timed automata formalism. One of the project's objectives is to develop an open-source and configurable model checker which will allow the community to compare algorithms. The algorithms and the tool will be used on a motion planning case study for robotics.

The objective of this project is to develop unified frameworks for the quantitative verification of timed, hybrid, and stochastic systems. We believe such a unification is possible because common patterns appear across these settings.

The aim of MAVeriQ is to progress towards this unification by gathering skills on timed and stochastic systems and on quantitative verification under a common roof, to jointly address open challenges in quantitative model checking and quantitative validation. One such challenge is the robustness of quantitative models, that is, their resilience to small perturbations, which is crucial for implementability. The unified methods developed in the project (such as robustness analysis and simulation techniques) will be showcased in case studies in the domain of cyber-physical systems (in particular automotive control), showing that such a system can be verified in different ways without leaving the framework.

When designing complex and critical systems (planes, autonomous vehicles, etc.), it is crucial to be able to give guarantees that the system works as intended, which is often done through comprehensive testing. The goal of project BisoUS is to provide stronger guarantees, based on mathematics, and to detect problems as early as possible: solving them is then easier and cheaper.

Unfortunately, this is a hard problem, because some design choices may not have been made yet, and some key characteristics (e.g. the speed of a CPU) are then not known precisely enough.

In project BisoUS, we develop formal methods based on model checking and synthesis, working with expressive modelling formalisms that encompass parameters, costs/rewards, and games on graphs, to meet these challenges.


Nathalie Bertrand was on the program committees of FoSSaCS 2023, NETYS 2023 and Highlights 2023.

Thierry Jéron was on the program committee of SAC-SVT 2023.

Loïc Hélouët was on the program committee of Petri Nets 2023.

Nicolas Markey was on the program committees of EUMAS'23, TIME'23, and CSL'23.

Ocan Sankur was on the program committee of FORMATS 2023, and on the artifact evaluation committee of ESOP 2023.

All members of the team reviewed a number of papers for international conferences.

Nathalie Bertrand is an editorial board member for Journal of Logical and Algebraic Methods in Programming (JLAMP) and for Theoretical Computer Science (TCS).

All members of the team reviewed a number of papers for international journals.

Nicolas Markey gave an invited talk at Formats'23 12.

Nathalie Bertrand is the co-head of GT Vérif, the French working group on verification of the GDR-IM.

Éric Fabre is the co-director of the joint research lab of Nokia Bell Labs and Inria, and a member of the Evaluation Committee of Inria.

Nicolas Markey is the head of Department "Language and Software Engineering" of IRISA.

Although the team has no faculty members, most researchers teach a significant number of hours, mostly at the Masters level.


Nathalie Bertrand was president of the joint hiring committee for three MCF positions at ISTIC (Université Rennes).

Nicolas Markey and Loïc Hélouët took part in the Chiche! Inria program.