In an increasingly networked world, the reliability of applications becomes ever more critical as the number of users of communication systems, web services, transportation, etc. grows steadily. Management of networked systems, in a very general sense of the term, is therefore a crucial task, but also a difficult one.

MExICo strives to take advantage of distribution by orchestrating cooperation between different agents that observe local subsystems and interact in a localized fashion.

The need for applying formal methods in the analysis and management of complex systems has long been recognized. It is with much less unanimity that the scientific community embraces methods based on asynchronous and distributed models. Centralized and sequential modeling still prevails.

However, we observe that crucial applications have increasing numbers of
users, and that networks providing services grow fast both in the number of
participants and in physical size and degree of spatial distribution.
Moreover, traditional isolated and proprietary software
products for local systems are no longer typical for emerging applications.

In contrast to traditional centralized and sequential machinery, for which purely functional specifications suffice, we have to account for applications being provided from diverse and non-coordinated sources. Their distribution (e.g. over the Web) must change the way we verify and manage them. In particular, one cannot ignore the impact of quantitative features such as delays or failure likelihoods on the functionality of composite services in distributed systems.

We thus identify three main characteristics of complex distributed systems that constitute research challenges:

The increasing size and the networked nature of communication systems,
controls, distributed services, etc. confront us with an ever higher degree
of parallelism between local processes. This field of application for
our work includes telecommunication systems and composite web
services. The challenge is to provide sound theoretical foundations and
efficient algorithms for management of such systems, ranging from
controller synthesis and fault diagnosis to integration and adaptation.
While these tasks have received considerable attention in the
sequential setting, managing non-sequential behavior requires
profound modifications of existing approaches, and often the development
of new approaches altogether. We see concurrency in distributed systems as
an opportunity rather than a nuisance. Our goal is to exploit
asynchronicity and distribution as an advantage. Clever use of adequate
models, in particular partial order semantics (ranging from
Mazurkiewicz traces to event structures to MSCs) actually helps in
practice. In fact, the partial order vision allows us to make causal
precedence relations explicit, and to perform diagnosis and test for the
dependency between events. This is a conceptual advantage that
interleaving-based approaches cannot match. The two key features of our
work will be (i) the exploitation of concurrency by using
asynchronous models with partial order semantics, and (ii)
distribution of the agents performing management tasks.

Systems and services exhibit non-trivial interaction between
specialized and heterogeneous components. A coordinated interplay of several
components is required; this is challenging since each of them has only a limited, partial view of the
system's configuration. We refer to this problem as distributed
synthesis or distributed control. An aggravating factor is that
the structure of a component might be semi-transparent, which requires a
form of grey box management.

Besides the logical functionalities of programs, the quantitative
aspects of component behavior and interaction play an increasingly
important role.

Since the creation of MExICo, the weight of quantitative aspects in
all parts of our activities has grown, be it in terms of the models considered
(weighted automata and logics), be it in transforming verification or diagnosis verdict
into probabilistic statements (probabilistic diagnosis, statistical model checking),
or within the recently started SystemX cooperation on supervision in
multi-modal transport systems.
This trend is certain to continue over the next couple of years, along with
the growing importance of diagnosis and control issues.

In another development, the theory and use of partial order semantics has gained momentum in the past four years, and we intend to strengthen our efforts and contacts in this domain so as to further develop and apply partial-order based deduction methods.

When no complete model of the underlying dynamic system is available, the analysis
of logs may allow one to reconstruct such a model, or at least to infer some properties of interest; this activity,
which has emerged at the international level over the past 10 years, is referred to as process mining. In this emerging field, we
have contributed to unfolding-based process discovery [CI-146] and to the study of process alignments
[CI-121, CI-96, CI-83, CI-60, CI-33].

Finally, over the past years biological challenges have come to the center of our work, in two different directions:

Process mining @ MExICo
The use of process models has increased in the last decade due to the advent of the process mining field. Process mining techniques aim at discovering, analyzing and enhancing formal representations of the real processes executed in any digital environment. These processes can only be observed through the footprints of their executions, stored in the form of event logs. An event log is a collection of traces and is the input of process mining techniques. The derivation of an accurate formalization of an underlying process opens the door to the continuous improvement and analysis of the processes within an information system.

Process models often use true concurrency to represent actions that appear in logs with different permutations.

Among the important challenges in process mining, conformance checking is a crucial one: to assess the quality of a model (automatically discovered or manually designed) in describing the observed behavior, i.e., the event log.

MExICo contributes to process mining, a field which discovers and manipulates true concurrency models and questions about their conformance to recorded event logs.
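To make the notions of event log and trace-level conformance concrete, here is a minimal sketch; the toy model, activity names, and the simple fitness measure are our own illustrative assumptions, not those of any particular process-mining tool.

```python
# A minimal sketch of trace-level conformance checking: an event log is a
# multiset of traces; a model is given here as a finite labelled transition
# system whose language we enumerate up to a length bound. The toy model and
# all names are illustrative only.
from collections import Counter

# Toy model: states 0..3, transitions (state, activity) -> state, state 2 accepting.
TRANS = {(0, "register"): 1, (1, "check"): 2, (1, "skip"): 2}
ACCEPT = {2}

def model_language(max_len=5):
    """Enumerate all accepted traces of the toy model up to max_len."""
    lang, frontier = set(), [(0, ())]
    while frontier:
        state, trace = frontier.pop()
        if state in ACCEPT:
            lang.add(trace)
        if len(trace) < max_len:
            for (s, act), t in TRANS.items():
                if s == state:
                    frontier.append((t, trace + (act,)))
    return lang

def fitness(log, lang):
    """Fraction of log traces (with multiplicities) reproducible by the model."""
    total = sum(log.values())
    fitting = sum(n for tr, n in log.items() if tr in lang)
    return fitting / total

log = Counter({("register", "check"): 60, ("register", "skip"): 30,
               ("register", "check", "check"): 10})
print(fitness(log, model_language()))  # 0.9: the double 'check' trace deviates
```

Real conformance checking goes well beyond such boolean trace containment, e.g. by aligning each deviating trace to a closest model run.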

It is well known that, whatever the intended form of analysis or control, a
global view of the system state leads to overwhelming numbers of
states and transitions, thus slowing down algorithms that need to explore
the state space. Worse yet, it often blurs the mechanics that are at work
rather than exhibiting them. Conversely, respecting concurrency relations
avoids exhaustive enumeration of interleavings. It allows us to focus on
`essential' properties of non-sequential processes, which are expressible
with causal precedence relations. These precedence relations are usually
called causal (partial) orders. Concurrency is the explicit absence of
such a precedence between actions that do not have to wait for one another.
Both causal orders and concurrency are in fact essential elements of a
specification. This is especially true when the specification is
constructed in a distributed and modular way. Making these ordering
relations explicit requires leaving the framework of state/interleaving-based
semantics. Therefore, we need to develop new dedicated algorithms
for tasks such as conformance testing, fault diagnosis, or control for
distributed discrete systems. Existing solutions for these problems often
rely on centralized sequential models which do not scale up well.
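To see the gap between the two semantics concretely, the following sketch (with toy numbers of our choosing) counts the interleavings of mutually independent sequential processes, all of which correspond to a single partial-order object:

```python
# Illustrative sketch: a partial-order (Mazurkiewicz-style) view keeps one
# object per concurrent run, while interleaving semantics enumerates all its
# linearizations. We count the linearizations of k mutually independent
# sequential processes; the numbers below are our own toy example.
from math import factorial

def linearizations(process_lengths):
    """Number of interleavings of mutually independent sequential processes
    (a multinomial coefficient)."""
    total = sum(process_lengths)
    count = factorial(total)
    for m in process_lengths:
        count //= factorial(m)
    return count

# 4 independent processes of 3 causally ordered steps each: one partial order,
# but 369600 distinct interleavings that a state-based method may explore.
print(linearizations([3, 3, 3, 3]))  # 369600
```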

Diagnosis for discrete event systems is a crucial task in automatic control. Our focus is on model-based diagnosis.

Depending on the possible observations, a discrete-event system may be diagnosable or not. Active diagnosis aims at controlling the system so as to render it diagnosable. We have established a memory-optimal diagnoser whose delay is at most twice the minimal delay, whereas the memory required to achieve the optimal delay may be considerably larger. We have also provided solutions for parametrized active diagnosis, where we automatically construct the most permissive controller respecting a given delay. Further, we introduced four variants of
diagnosability (FA, IA, FF, IF)
in (finite) probabilistic systems (pLTS), depending on whether one
considers (1) finite or infinite runs and (2) faulty or all runs. The corresponding
decision problems are PSPACE-complete. A key ingredient of the
decision procedures was a characterisation of diagnosability by the
fact that a random run almost surely lies in an open set whose
specification only depends on the qualitative behaviour of the pLTS.
For infinite pLTS, this characterisation still holds for
FF-diagnosability, and for IF- and IA-diagnosability
provided the pLTS are finitely branching. Surprisingly,
FA-diagnosability cannot be characterised in this way even
in the finitely branching case.
Further extensions are under way, in particular towards prediction and prevention of faults before they occur.
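The basic (passive) diagnosis question can be illustrated by a small enumerative sketch; the toy LTS, event names, and verdicts below are illustrative assumptions of ours, not the constructions of the work described above:

```python
# A minimal passive-diagnosis sketch: given a finite LTS whose unobservable
# events include a fault 'f', enumerate all runs consistent with an observed
# word and report a verdict. The toy system and names are illustrative only.
OBS = {"a", "b"}                       # observable events
EDGES = {0: [("a", 1), ("u", 2)],      # 'u' is unobservable but non-faulty
         1: [("b", 3)],
         2: [("f", 4)],                # 'f' is the unobservable fault
         4: [("a", 5)],
         5: [("b", 6)]}

def runs_explaining(obs, state=0, faulty=False, depth=10):
    """Yield the fault status of every run whose observable projection is obs."""
    if not obs:
        yield faulty
    if depth == 0:
        return
    for event, nxt in EDGES.get(state, []):
        if event in OBS:
            if obs and obs[0] == event:
                yield from runs_explaining(obs[1:], nxt, faulty, depth - 1)
        else:
            yield from runs_explaining(obs, nxt, faulty or event == "f", depth - 1)

def diagnose(obs):
    verdicts = set(runs_explaining(tuple(obs)))
    if verdicts == {True}:
        return "fault"
    if verdicts == {False}:
        return "no fault"
    return "ambiguous"

print(diagnose(["a", "b"]))  # ambiguous: a faulty and a fault-free run both fit
```

A system is diagnosable when, after a bounded delay, every faulty run yields the verdict "fault"; the example above is not diagnosable on this observation.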

In asynchronous partial-order based diagnosis with Petri nets, one unfolds the labelled product of a Petri net model and its observation, extracting the partially ordered runs (configurations) that explain exactly the observed events.

Diagnosis algorithms have to operate in contexts with low observability,
i.e., in systems where many events are invisible to the supervisor.
Checking observability and diagnosability for the
supervised systems is therefore a crucial and non-trivial task in its own
right. Analysis of the relational structure of occurrence nets allows us
to check whether the system exhibits sufficient visibility to allow
diagnosis. Developing efficient methods, both for diagnosability checking under concurrency and for the diagnosis itself in distributed, composite and asynchronous systems, is an important field for the team. In 2019,
a new property, manifestability, weaker than diagnosability (and in some sense dual to opacity), was studied in the context of automata and timed automata.

Distributed computation of unfoldings allows one to factor the unfolding of
the global system into smaller local unfoldings, computed by local
supervisors associated with sub-networks and communicating with one another.
Elements of a methodology for such distributed computation of unfoldings among several supervisors, underpinned by algebraic
properties of the category of Petri nets, have been developed. Generalizations, in particular
to graph grammars, remain to be done.

Computing diagnosis in a distributed way is only one aspect of a much
vaster topic, that of distributed diagnosis. In fact, it involves a
more abstract and often indirect reasoning to conclude whether or not some
given invisible fault has occurred. Combination of local scenarios is in
general not sufficient: the global system may have behaviors that do not
reveal themselves as faulty (or, dually, non-faulty) on any local
supervisor's domain.
Rather, the local
diagnosers have to join all information that is available to them
locally, and then collectively deduce further information from the
combination of their views. In particular, even the absence of
fault evidence on all peers may allow them to deduce a fault occurrence jointly.
Automating such procedures for the supervision and management of
distributed and locally monitored asynchronous systems is a long-term goal
to which MExICo hopes to contribute.

Hybrid systems constitute a model for cyber-physical systems which integrates continuous-time dynamics (modes) governed by differential equations, and discrete transitions which switch instantaneously from one mode to another. Thanks to their ease of programming, hybrid systems have been integrated to power electronics systems, and more generally in cyber-physical systems. In order to guarantee that such systems meet their specifications, classical methods consist in finitely abstracting the systems by discretization of the (infinite) state space, and deriving automatically the appropriate mode control from the specification using standard graph techniques.

Diagnosability of hybrid systems has also been studied through an abstraction / refinement process in terms of timed automata.

Assuring the correctness of concurrent systems is notoriously difficult due to the many unforeseeable ways in which the components may interact and the resulting state-space explosion. A well-established approach to alleviating this problem is to model concurrent systems as Petri nets and analyse their unfoldings, essentially an acyclic version of the Petri net whose simpler structure permits easier analysis.

However, Petri nets are inadequate to model concurrent read accesses to the same resource. Such situations often arise naturally, for instance in concurrent databases or in asynchronous circuits. The encoding tricks typically used to model these cases in Petri nets make the unfolding technique inefficient. Contextual nets, which explicitly model concurrent read accesses, address this problem. Their accurate representation of concurrency makes contextual unfoldings up to exponentially smaller in certain situations. An abstract algorithm for contextual unfoldings was first given in earlier work. We have since studied the subject from both a theoretical and a practical perspective, developing concrete, efficient data structures and algorithms and a tool (Cunf) that improves upon the existing state of the art. This work led to the PhD thesis of César Rodríguez in 2014.
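The firing rule of contextual nets can be sketched as follows; the net, the step notion, and all names are a minimal illustration of ours, unrelated to Cunf's internals:

```python
# Sketch of the firing rule for contextual nets (Petri nets with read arcs):
# a transition consumes its preset, produces its postset, and merely checks
# its context places. Two readers of the same place form a concurrent step.
# The encoding and names here are ours, not taken from any tool.
def enabled(marking, pre, ctx):
    """A transition is enabled if preset tokens and context tokens are present."""
    return all(marking.get(p, 0) >= n for p, n in pre.items()) and \
           all(marking.get(p, 0) >= 1 for p in ctx)

def fire_step(marking, transitions):
    """Fire a set of transitions as one concurrent step: their presets must be
    jointly available, while a shared context place is only read by them."""
    need = {}
    for pre, _ctx, _post in transitions:
        for p, n in pre.items():
            need[p] = need.get(p, 0) + n
    assert all(marking.get(p, 0) >= n for p, n in need.items()), "not a step"
    m = dict(marking)
    for pre, ctx, post in transitions:
        assert enabled(marking, pre, ctx)
        for p, n in pre.items():
            m[p] -= n
        for p, n in post.items():
            m[p] = m.get(p, 0) + n
    return m

# Shared database 'db' read (not consumed) by two clients r1, r2:
r1 = ({"q1": 1}, {"db"}, {"a1": 1})
r2 = ({"q2": 1}, {"db"}, {"a2": 1})
m0 = {"q1": 1, "q2": 1, "db": 1}
print(fire_step(m0, [r1, r2]))  # both readers fire in one step, 'db' untouched
```

In an ordinary-net encoding the single `db` token would be consumed and reproduced by each reader, putting the two clients in spurious conflict; the read arcs avoid exactly this.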

Contextual unfoldings deal well with two sources of state-space explosion:
concurrency and shared resources. Recently, we proposed an improved data
structure, called contextual merged processes (CMP) to deal with
a third source of state-space explosion, i.e. sequences of choices.
The work on CMP is currently at an abstract level.
In the short term, we want to put this work into practice, requiring some
theoretical groundwork, as well as programming and experimentation.

Another well-known approach to verifying concurrent systems is
partial-order reduction, exemplified by the tool SPIN.
Although it is known that both partial-order reduction and unfoldings
have their respective strengths and weaknesses, we are not aware of any
conclusive comparison between the two techniques. Spin comes
with a high-level modeling language having an explicit notion of processes,
communication channels, and variables. Indeed, the reduction techniques
implemented in Spin exploit the specific properties of these features.
On the other hand, while there exist highly efficient tools for unfoldings,
Petri nets are a relatively general low-level formalism, so these techniques
do not exploit properties of higher-level language features. Our work on contextual
unfoldings and CMPs represents a first step towards making unfoldings exploit
richer models. In the long run, we wish to raise the unfolding technique to a
suitable high-level modelling language and to develop appropriate tool support.

MExICo introduced anti-alignments as a tool for conformance checking. The idea of anti-alignments is to search, for a given model and log, a run of the model that deviates as much as possible from all observed log traces.
MExICo has also been contributing to clustering of log traces.

Perspectives for process mining in MExICo include model repair, i.e. the design and implementation of techniques to incrementally improve models so that they fit observed logs better, including when the log itself grows continuously.

Another direction is to handle models that manipulate data and real time, in order to propose more accurate representations of the log traces when events carry additional information (time stamps, identifiers, quantities, costs, etc.).


Traditional mainframe systems were proprietary and (essentially) localized;
therefore, impact of delays, unforeseen failures, etc. could be considered
under the control of the system manager. It was therefore natural to focus
entirely on functional behavior in the verification and control of systems.

With the increase in size of computing system and the growing degree of compositionality and distribution, quantitative factors enter the stage:

Time and probability are thus parameters
that management of distributed systems must
be able to handle; in addition, the cost of operations is often subject to restrictions,
or at least its minimization is desired.
The mathematical treatment of these features in
distributed systems is an important challenge,
which MExICo is addressing; the following describes our activities concerning probabilistic and
timed systems. Note that cost optimization is not a current activity but enters the picture in several intended activities.

Practical fault diagnosis requires selecting explanations
of maximal likelihood. For partial-order based diagnosis,
this leads to the question of what the
probability of a given partially ordered execution is.
With Benveniste et al., we presented a model of stochastic processes, whose trajectories are partially ordered, based on local branching in Petri net unfoldings;
an alternative and complementary model based on
Markov fields has also been developed,
which takes a different view on the semantics
and overcomes the first model's restrictions on applicability.

Both approaches abstract away from real-time progress and randomize choices in logical time. On the other hand, the relative speeds of the system's local processes (and thus, indirectly, its real-time behavior) are crucial factors determining the outcome of probabilistic choices, even if
non-determinism is absent from the system.

Distributed systems featuring non-deterministic and probabilistic aspects are usually hard to analyze and, more specifically, to optimize. Furthermore, high theoretical complexity lower bounds have been established for models like partially observed Markovian decision processes and distributed partially observed Markovian decision processes. We believe that these negative results are consequences of the choice of the models rather than of the intrinsic complexity of the problems to be solved. We thus plan to introduce new models in which the associated optimization problems can be solved more efficiently. More precisely, we start by studying connection protocols weighted by costs, and we look for online and offline strategies for optimizing the mean cost of achieving the protocol. We have been cooperating on this subject with the SUMO team at INRIA Rennes; in joint work, we strive to synthesize, for a given MDP, a controller that guarantees a specific stationary behavior, rather than, as is usually done, one that maximizes some reward.

Addressing large-scale probabilistic systems requires to face state explosion, due to both the discrete part and the probabilistic part of the model. In order to deal with such systems, different approaches have been proposed:

We want to contribute to these three axes: (1) we are looking for product-forms related to systems where synchronizations are more involved (as in Petri nets); (2) we want to adapt methods for discrete-event systems, which requires some theoretical developments in the stochastic framework; and (3) we plan to address some important limitations of statistical model checking, such as the expressiveness of the associated logic and the handling of rare events.
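As an illustration of the statistical approach, the following sketch estimates a bounded reachability probability of a toy Markov chain by Monte Carlo simulation with a Chernoff-Hoeffding sample bound; the chain and all parameters are assumptions of ours:

```python
# Statistical model checking sketch: estimate the probability that a finite
# discrete-time Markov chain reaches a goal state within a horizon, by Monte
# Carlo simulation with a Chernoff-Hoeffding sample bound. The chain and all
# names are illustrative.
import math, random

# Toy DTMC: from each state, a distribution over successor states.
CHAIN = {"init":  [("try", 1.0)],
         "try":   [("ok", 0.3), ("retry", 0.7)],
         "retry": [("try", 1.0)],
         "ok":    [("ok", 1.0)]}

def simulate(horizon, goal="ok"):
    """One random trajectory; True iff the goal is reached within the horizon."""
    state = "init"
    for _ in range(horizon):
        if state == goal:
            return True
        r, acc = random.random(), 0.0
        for nxt, p in CHAIN[state]:
            acc += p
            if r < acc:
                state = nxt
                break
    return state == goal

def estimate(horizon=20, eps=0.01, delta=0.01):
    # n >= ln(2/delta) / (2 eps^2) samples guarantee |estimate - p| <= eps
    # with probability at least 1 - delta (Chernoff-Hoeffding bound).
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    hits = sum(simulate(horizon) for _ in range(n))
    return hits / n

random.seed(0)
print(round(estimate(), 2))  # close to the true bounded reachability probability
```

The sample bound is what makes the verdict statistically rigorous; rare-event settings, mentioned above as a limitation, are precisely those where this naive bound becomes impractically large.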

Nowadays, software systems largely depend on complex timing constraints and usually consist of many interacting local components. Among them, railway crossings, traffic control units, mobile phones, computer servers, and many more safety-critical systems are subject to particular quality standards. It is therefore becoming increasingly important to look at networks of timed systems, which allow real-time systems to operate in a distributed manner.

Timed automata are a well-studied formalism to describe reactive systems that come with timing constraints. For modeling distributed real-time systems, networks of timed automata have been considered, where the local clocks of the processes usually evolve at the same rate. It is, however, not always adequate to assume that distributed components of a system obey a global time. Actually, there is generally no reason to assume that different timed systems in a network refer to the same time or evolve at the same rate. Each component is rather governed by local influences such as temperature and workload.

MExICo’s research is motivated by problems of system management in several domains, such as:

Currently, we have no active cooperation on these subjects.

We began in 2014 to examine concurrency issues in systems biology, and we are currently enlarging the scope of our research's applications in this direction. To see the context, note that in recent years a considerable shift of biologists' interest can be observed, from the mapping of static genotypes to gene expression, i.e. the processes in which genetic information is used in producing functional products. These processes are far from being uniquely determined by the gene itself, or even jointly with static properties of the environment; rather, regulation occurs throughout the expression processes, with specific mechanisms increasing or decreasing the production of various products, and thus modulating the outcome. These regulations are central in understanding cell fate (how does the cell differentiate? do mutations occur? etc.), and progress there hinges on our capacity to analyse, predict, monitor and control complex and variegated processes.

We have applied Petri net unfolding techniques for the efficient computation of attractors in a regulatory network, that is, to identify strongly connected reachability components that correspond to stable evolutions, e.g. of a cell that differentiates into a specific functionality (or mutation). This constitutes the starting point of a broader research effort using Petri net unfolding techniques in regulation. In fact, the use of ordinary Petri nets for capturing regulatory network (RN) dynamics overcomes the limitations of traditional RN models: those impose, e.g., monotonicity properties on the influence that one factor has upon another, i.e. always increasing or always decreasing, and were thus unable to cover all actual behaviours. Rather, we follow the more refined model of boolean networks of automata, where the local states of the different factors jointly determine which state transitions are possible. For these connectors, ordinary PNs constitute a first approximation, improving greatly over the literature but leaving room for improvement in terms of introducing more refined logical connectors. Future work thus involves transcending this class of PN models. Via unfoldings, one has access (provided efficient techniques are available) to all behaviours of the model, rather than over- or under-approximations as previously. This opens the way to efficiently searching in particular for determinants of the cell fate: which attractors are reachable from a given stage, what are the factors that decide in favor of one or the other attractor, etc.

Our current research focusses on cellular reprogramming on the one hand, and on distributed algorithms in wild or synthetic biological systems on the other. The latter is a distributed-algorithms view on microbiological systems, with the twin goals of modeling and analyzing existing microbiological systems as distributed systems, and of designing and implementing distributed algorithms in synthesized microbiological systems. Envisioned major long-term goals are drug production and medical treatment via synthesized bacterial colonies. We are approaching our goal of a distributed-algorithms view of microbiological systems from several directions: (i) Timing plays a crucial role in microbiological systems. As in modern VLSI circuits, dominating loading effects and noise render classical delay models infeasible. In previous work we showed limitations of current delay models and presented a class of new delay models, so-called involution channels. In [26] we showed that involution channels are still in accordance with Newtonian physics, even in the presence of noise. (ii) In [7] we analyzed metastability in circuits by a three-valued Kleene logic, presented a general technique to build circuits that can tolerate a certain degree of metastability at their inputs, and showed the presence of a computational hierarchy. Again, we expect metastability to play a crucial role in microbiological systems since, as in modern VLSI circuits, loading effects are pronounced. (iii) We studied agreement problems in highly dynamic networks without stability guarantees [28], [27]. We expect such networks to occur in bacterial cultures where bacteria communicate by producing and sensing small signal molecules like AHL. Both works also have theoretically relevant implications: [27] presents the first approximate agreement protocol in a multidimensional space with time complexity independent of the dimension, working also in the presence of Byzantine faults. In [28] we proved a tight lower bound on the convergence rates and time complexity of asymptotic and approximate agreement in dynamic and classical static fault models. (iv) We are currently working with Manish Kushwaha (INRA) and Thomas Nowak (LRI) on biological infection models for E. coli colonies and M13 phages.
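For intuition, the attractor notion used above can be sketched in its naive explicit-state form, which unfolding techniques aim to outperform on larger networks; the three-gene boolean network below is a toy example of ours:

```python
# Sketch: attractors of a small boolean network of automata, computed as the
# terminal strongly connected components of the asynchronous state graph.
# This explicit-state baseline is what unfolding-based methods improve upon;
# the 3-gene network is a toy example of ours.
from itertools import product

# Update functions: each gene's next value from the global state (x, y, z).
RULES = [lambda s: s[1],            # x copies y
         lambda s: s[0],            # y copies x
         lambda s: int(not s[2])]   # z oscillates

def successors(state):
    """Asynchronous semantics: update one unstable gene at a time."""
    succs = []
    for i, f in enumerate(RULES):
        v = f(state)
        if v != state[i]:
            succs.append(state[:i] + (v,) + state[i + 1:])
    return succs

def attractors():
    """Terminal SCCs: states from which every reachable state can reach back.
    Computed by a naive reachability fixpoint, fine for tiny state spaces."""
    states = list(product((0, 1), repeat=len(RULES)))
    reach = {s: {s} for s in states}
    changed = True
    while changed:
        changed = False
        for s in states:
            for t in list(reach[s]):
                if not reach[t] <= reach[s]:
                    reach[s] |= reach[t]
                    changed = True
            for t in successors(s):
                if t not in reach[s]:
                    reach[s].add(t)
                    changed = True
    return {frozenset(reach[s]) for s in states
            if all(s in reach[t] for t in reach[s])}

for att in sorted(attractors(), key=sorted):
    print(sorted(att))  # two attractors: x == y settles, z keeps oscillating
```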

In the context of the Escape project (PhD thesis of G.K. Aguirre Samboni, started in October 2020) we are now extending our research on causal analysis of complex biological networks to the domain of ecosystems.

The carbon footprint of our activities is generic for office work, and probably strongest in traveling. While the latter came essentially to a halt in 2020 because of the Covid pandemic, we believe that even in the future, intelligent use of online cooperation and communication can help limit the inevitable footprint of travel to the crucial activities of cooperation and networking, avoiding physical meetings when possible.

With our Project ESCAPE, we are hoping for a strong impact on ecosystem analysis and management. Further, the research on biological regulation networks has the potential for enabling e.g. evaluation and design of medical therapies in epigenetic contexts.

Members of MExICo were involved in the organisation of an international multiconference, serving as general chair, as organiser and webmaster, and as program co-chair.

MExICo has made substantial scientific progress in the domains described below.

Conformance checking is a growing discipline that aims at assisting organizations in monitoring their processes. At its core, conformance checking relies on the computation of particular artefacts which enable reasoning about the relation between observed and modeled behavior. It is widely acknowledged that the computation of these artefacts is the lion's share of conformance checking techniques. Our paper shows how important conformance artefacts like alignments, anti-alignments or multi-alignments, defined over the Levenshtein edit distance, can be efficiently computed by encoding the problem as an optimized SAT instance. From a general perspective, the work advocates a unified family of techniques that can compute conformance artefacts in the same way. The implementation of the techniques presented in the paper shows capabilities for dealing with both synthetic and real-life instances, which may open the door to a fresh way of applying conformance checking in the near future.

Processes are a crucial artefact in organizations, since they coordinate the execution of activities so that products and services are provided. The use of models to analyse the underlying processes is a well-known practice. However, due to the complexity and continuous evolution of their processes, organizations need an effective way of analysing the relation between processes and models. Conformance checking techniques assess the suitability of a process model in representing an underlying process, observed through a collection of real executions. One important task in conformance checking is to assess the precision of the model with respect to the observed executions, i.e., to characterize the extent to which the model can produce behavior unrelated to the one observed. We present the notion of anti-alignment as a concept to help unveil runs in the model that may deviate significantly from the observed behavior. Using anti-alignments, a new metric for precision is proposed. In contrast to existing metrics, anti-alignment based precision metrics satisfy most of the axioms highlighted in a recent publication. Moreover, a complexity analysis of the problem of computing anti-alignments is provided, which sheds light on the practicability of using anti-alignments to estimate precision. Experiments are provided that witness the validity of the concepts introduced.
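The anti-alignment idea can be sketched by brute force on a toy instance; real implementations use SAT encodings or optimized search, and the model, log, and length bound below are our own assumptions:

```python
# Sketch of anti-alignment computation by brute force: among all model runs up
# to a length bound, pick one maximizing the minimal edit distance to the log
# traces. Real tools encode this as a SAT/optimization problem; the model and
# log here are toy examples of ours.
def levenshtein(u, v):
    """Classic edit distance by one-row dynamic programming."""
    d = list(range(len(v) + 1))
    for i, a in enumerate(u, 1):
        prev, d[0] = d[0], i
        for j, b in enumerate(v, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1,
                                   prev + (a != b))
    return d[len(v)]

def runs(edges, accept, start=0, max_len=4):
    """Enumerate accepted runs of a small automaton-like model."""
    stack = [(start, ())]
    while stack:
        q, run = stack.pop()
        if q in accept:
            yield run
        if len(run) < max_len:
            for (s, act), t in edges.items():
                if s == q:
                    stack.append((t, run + (act,)))

def anti_alignment(edges, accept, log):
    """Model run farthest (in minimal edit distance) from all log traces."""
    return max(runs(edges, accept),
               key=lambda r: min(levenshtein(r, t) for t in log))

EDGES = {(0, "a"): 1, (1, "b"): 2, (1, "c"): 2, (2, "d"): 3}
LOG = [("a", "b", "d"), ("a", "b", "d")]
print(anti_alignment(EDGES, {3}, LOG))  # ('a', 'c', 'd'): farthest from the log
```

A run with a large minimal distance to the whole log witnesses modeled behavior that was never observed, which is exactly what a precision metric should penalize.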

Alignments are a central notion in conformance checking. They establish the best possible connection between an observed trace and a process model, exhibiting the closest model run to the trace. Computing these alignments for huge amounts of traces, coming from big logs, is a computational bottleneck. We show that, for a slightly modified version of the distance function between traces and model runs, we significantly improve the execution time of an A*-based search algorithm. We show experimentally that the alignments found with our modified distance approximate the optimal alignments for the classical distance very well. Further, by introducing a discount factor in the edit distance used for the search of anti-alignments, we obtain the first efficient algorithm to approximate them. We show that this approximation is quite accurate in practice, by comparing it with the optimal results on small instances where the exact algorithm can also compute anti-alignments. Finally, we compare the obtained precision metric with state-of-the-art metrics from the literature on real-life examples.

We (i) present an active learning algorithm for visibly pushdown grammars and (ii) show its applicability for learning surrogate models of recurrent neural networks (RNNs) trained on context-free languages. Such surrogate models may be used for verification or explainability. Our learning algorithm makes use of the proximity of visibly pushdown languages and regular tree languages and builds on an existing learning algorithm for regular tree languages. Equivalence tests between a given RNN and a hypothesis grammar rely on a mixture of A* search and random sampling. An evaluation of our approach on a set of RNNs from the literature shows good preliminary results.

Karp and Miller’s algorithm is based on an exploration of the reachability tree of a Petri net in which sequences of transitions with positive incidence are accelerated. The nodes of the Karp-Miller tree are labeled with ω-markings representing (potentially infinite) coverability sets. This set of ω-markings allows one to decide several properties of the Petri net, such as whether a marking is coverable or whether the reachability set is finite. The edges of the Karp-Miller tree are labeled by transitions, but the associated semantics is unclear, which leads to a complex correctness proof of the algorithm. We introduce three concepts: abstraction, acceleration and exploration sequence. In particular, we generalize the definition of transitions to ω-transitions in order to represent accelerations by such transitions. The notion of abstraction makes it possible to greatly simplify the correctness proof. On the other hand, for an additional cost in memory, which we have evaluated theoretically, we propose an “accelerated” variant of the Karp-Miller algorithm with an expected gain in execution time. Based on a similar idea, we have accelerated (and made complete) the minimal coverability graph construction, implemented it in a tool, and performed numerous promising benchmarks issued from realistic case studies and from a random generator of Petri nets.
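The classical construction refined by the work above can be sketched compactly; the two-place net and the data structures below are illustrative choices of ours, not those of the cited tool:

```python
# A compact Karp-Miller construction for a toy Petri net, with the classical
# omega-acceleration: when a new marking dominates an ancestor on the path,
# the strictly larger places are set to omega (coverability abstraction).
# The net below is our own example.
OMEGA = float("inf")

# Petri net: transitions as (consume, produce) vectors over 2 places.
TRANSITIONS = [((1, 0), (1, 1)),   # t1: reads p0, produces a token in p1
               ((0, 1), (0, 0))]   # t2: consumes a token from p1

def fire(marking, pre, post):
    """Fire one transition if enabled, else return None."""
    if all(m >= c for m, c in zip(marking, pre)):
        return tuple(m - c + p for m, c, p in zip(marking, pre, post))
    return None

def accelerate(marking, ancestors):
    """Replace strictly growing components by omega w.r.t. dominated ancestors."""
    for anc in ancestors:
        if all(a <= m for a, m in zip(anc, marking)) and anc != marking:
            marking = tuple(OMEGA if m > a else m
                            for m, a in zip(marking, anc))
    return marking

def karp_miller(m0):
    """Return the set of omega-markings labelling the Karp-Miller tree."""
    seen = set()
    stack = [(m0, ())]                 # (marking, ancestor path)
    while stack:
        m, path = stack.pop()
        m = accelerate(m, path)
        if m in seen:
            continue
        seen.add(m)
        for pre, post in TRANSITIONS:
            succ = fire(m, pre, post)
            if succ is not None:
                stack.append((succ, path + (m,)))
    return seen

print(karp_miller((1, 0)))  # p1 is unbounded: omega appears in its component
```

The finite set of ω-markings returned is exactly the coverability information used to answer boundedness and coverability questions.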

The problem of distributed synthesis is to automatically generate a distributed algorithm, given a target communication network and a specification of the algorithm's correct behavior. Previous work has focused on static networks with an a priori fixed message size. This approach has two shortcomings: recent work in distributed computing is shifting from static towards dynamically changing communication networks, and an important class of distributed algorithms, so-called full-information protocols, have nodes piggyback previously received messages onto current messages. In , we consider the synthesis problem for a system of two nodes communicating in rounds over a dynamic link whose message size is not bounded. Given a network model, i.e., a set of link directions, the adversary chooses, in each round of the execution, an arbitrary link from the network model, restricted only by the specification, and delivers messages according to the current link's directions. Motivated by communication buses with direct acknowledge mechanisms, we further assume that nodes are aware of which messages have been delivered. We show that the synthesis problem is decidable for a network model if and only if the network model does not contain the empty link that dismisses both nodes' messages. We then extend this characterization to sequences of communication links that may contain empty links, and show that the synthesis problem is decidable in this case if and only if the number of consecutive empty links in all possible sequences is uniformly bounded from above.
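The round structure can be illustrated with a toy two-node model. A link is a subset of the hypothetical direction names {"l2r", "r2l"}; the adversary picks one link per round, and thanks to acknowledge awareness each node can be modeled simply by the set of values it has learned so far. This is a sketch of the setting, not the synthesis procedure itself.

```python
def run_rounds(x0, x1, links):
    """Two nodes repeatedly send their best-known value over an
    adversary-chosen link; delivered values are merged by max."""
    know0, know1 = {x0}, {x1}
    for link in links:
        m0, m1 = max(know0), max(know1)   # messages sent this round
        if "l2r" in link:                 # left-to-right delivery
            know1.add(m0)
        if "r2l" in link:                 # right-to-left delivery
            know0.add(m1)
    return max(know0), max(know1)

agree = run_rounds(3, 5, [{"l2r"}, {"r2l"}])       # information flows both ways
stuck = run_rounds(3, 5, [set(), set()])           # only empty links
```

With the empty link `set()` in every round, no information ever flows, matching the intuition behind the undecidability characterization; if consecutive empty links are uniformly bounded, information eventually gets through.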

Computing via synthetically engineered bacteria is a vibrant and active field with numerous applications in bio-production, bio-sensing, and medicine. Motivated by the lack of robustness and by resource limitations inside single cells, distributed approaches with communication among bacteria have recently gained interest. In , we focus on the problem of population growth happening concurrently with, and possibly interfering with, the desired bio-computation. Specifically, we present a fast protocol in systems with continuous population growth for the majority consensus problem and prove that it correctly identifies the initial majority among two inputs with high probability if the initial difference grows with the square root of n log n, where n is the total initial population. We also present a fast protocol that correctly computes the Nand of two inputs with high probability. By combining Nand gates with the majority consensus protocol as an amplifier, arbitrary Boolean functions can be computed. Finally, we extend the protocols to several biologically relevant settings. We simulate a plausible implementation of a noisy Nand gate with engineered bacteria. In the context of continuous cultures with a constant outflow and a constant inflow of fresh media, we demonstrate that majority consensus is achieved only if the flow is slower than the maximum growth rate. Simulations suggest that flow increases consensus time over a wide parameter range. The proposed protocols help set the stage for bio-engineered distributed computation that directly addresses continuous stochastic population growth.
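The flavor of such majority protocols can be conveyed by the classical three-state approximate-majority population protocol (Angluin et al.), used here as a simplified stand-in without the population growth studied above; all parameters are illustrative.

```python
import random

def approximate_majority(n_x, n_y, steps=200_000, seed=1):
    """Three-state rules: X meeting Y blanks the responder (-> B);
    a blank responder adopts the initiator's opinion (X+B -> X+X, etc.)."""
    pop = ['X'] * n_x + ['Y'] * n_y
    rng = random.Random(seed)
    for _ in range(steps):
        i, j = rng.sample(range(len(pop)), 2)   # initiator i, responder j
        a, b = pop[i], pop[j]
        if {a, b} == {'X', 'Y'}:
            pop[j] = 'B'
        elif a in 'XY' and b == 'B':
            pop[j] = a
    return pop.count('X'), pop.count('Y'), pop.count('B')

# Initial gap 200 > sqrt(n log n) ~ 83 for n = 1000.
x, y, b = approximate_majority(600, 400)
```

With an initial gap above the sqrt(n log n) threshold, the initial majority X takes over the whole population with high probability within O(n log n) interactions.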

Having a precise and stable clock that is also fault-tolerant is a fundamental prerequisite in safety-critical real-time systems. However, combining redundant independent clock sources to form a unified fault-tolerant clock supply is non-trivial, especially when redundant clock outputs are required, e.g., for supplying the replicated nodes within a TMR architecture through a clock network that does not suffer from a single point of failure. Having these outputs fail independently while keeping them tightly synchronized is highly desirable, as it substantially eases the design of the overall architecture. In , we address exactly this challenge. Our approach extends an existing, ring-oscillator-like distributed clock generation scheme by augmenting each of its constituent nodes with a stable clock reference. We introduce the appropriately modified algorithm and illustrate its operation by simulation experiments. These experiments further demonstrate that the four clock outputs of our circuit do not share a single point of failure, have small and bounded skew, remain stabilized to one crystal source during normal operation, do not propagate glitches from one failed clock to a correct one, and exhibit only slightly extended clock cycles during a short stabilization period after a component failure. In addition, we give a rigorous formal proof of the correctness of the algorithm on an abstraction level that is close to the implementation.

Agreeing on a common value among a set of agents is a fundamental problem in distributed computing, which occurs in several variants: in contrast to exact consensus, approximate variants are studied in systems where exact agreement is not possible or not required, e.g., in man-made distributed control systems and in the analysis of natural distributed systems such as bird flocking and opinion dynamics. In , we study the time complexity of two classical agreement problems: non-terminating asymptotic consensus and terminating approximate consensus. Asymptotic consensus requires agents to repeatedly set their outputs such that the outputs converge to a common value within the convex hull of the initial values; approximate consensus requires agents to eventually stop setting their outputs, which must then lie within a predefined distance of each other. We prove tight lower bounds on the contraction ratios of asymptotic consensus algorithms subject to oblivious message adversaries, from which we deduce bounds on the time complexity of approximate consensus algorithms. In particular, the obtained bounds show optimality of the asymptotic and approximate consensus algorithms presented by Charron-Bost et al. (ICALP'16) for certain systems, including the strongest oblivious message adversary in which asymptotic and approximate consensus are solvable. As a corollary, we also obtain asymptotically tight bounds for asymptotic consensus in the classical asynchronous model with crashes. Central to the lower-bound proofs is an extended notion of valency: the set of reachable limits of an asymptotic consensus algorithm starting from a given configuration. We further relate topological properties of valencies to the solvability of exact consensus, shedding some light on the relation between these three fundamental problems in dynamic networks.
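The contraction behavior at the heart of asymptotic consensus can be sketched with a midpoint-style update under a simple oblivious message adversary: in each round the adversary picks one broadcaster, and every agent hears itself and the broadcaster (a rooted communication graph). This is an illustrative toy, not the algorithms analyzed above.

```python
import random

def midpoint_consensus(values, rounds=60, seed=0):
    """Each round the adversary picks a broadcaster r; every agent applies
    the midpoint rule to the two values it heard (its own and agent r's)."""
    x = list(values)
    rng = random.Random(seed)
    for _ in range(rounds):
        r = rng.randrange(len(x))   # adversary's choice of root
        x = [(min(v, x[r]) + max(v, x[r])) / 2 for v in x]
    return x

x = midpoint_consensus([0.0, 0.25, 0.7, 1.0])
spread = max(x) - min(x)
```

Under this adversary the spread of values halves every round (a contraction ratio of 1/2), and all outputs stay within the convex hull of the initial values, as asymptotic consensus requires.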

In , we introduce the Composable Involution Delay Model (CIDM) for fast and accurate digital simulation. It is based on the Involution Delay Model (IDM) [Függer et al., IEEE TCAD 2020], which has been shown to be the only existing candidate model for faithful glitch propagation. The IDM, however, has shortcomings that limit its applicability. Our CIDM thus reduces the characterization effort by allowing independent discretization thresholds, improves composability, and increases the modeling power by exposing canceled pulse trains at the gate interconnect. We formally show that, despite these improvements, the CIDM retains the IDM's faithfulness.
