Hycomes was created as a local team of the Inria Rennes – Bretagne Atlantique research center in 2013 and became an Inria Project-Team in 2016. The team focuses on two topics in cyber-physical systems design:

Systems industries today make extensive use of mathematical modeling tools to design computer-controlled physical systems. This class of tools addresses the modeling of physical systems with models that are simpler than usual scientific computing problems, as they rely only on Ordinary Differential Equations (ODE) and Difference Equations, not Partial Differential Equations (PDE). This family of tools first emerged in the 1980s with SystemBuild, part of MATRIXx (now distributed by National Instruments), soon followed by MathWorks' Simulink, with an impressive subsequent development.

In the early 1990s, control scientists from the University of Lund (Sweden) realized that the above approach did not support component-based modeling of physical systems with reuse 1. For instance, it was not easy to draw an electrical or hydraulic circuit by assembling component models of the various devices. The development of the Omola language by Hilding Elmqvist was a first attempt to bridge this gap by supporting some form of Differential Algebraic Equations (DAE) in the models. Modelica quickly emerged from this first attempt and became in the 2000s a major international concerted effort with the Modelica Consortium. A wider set of tools, both industrial and academic, now exists in this segment 2. In the Electronic Design Automation (EDA) sector, VHDL-AMS was developed as a standard 71 and also enables the use of differential algebraic equations. Several domain-specific languages and tools for mechanical systems or electronic circuits also support some restricted classes of differential algebraic equations. Spice is the historic and most striking instance of these domain-specific languages/tools 3. The main difference is that equations are hidden and the fixed structure of the differential algebraic equations results from the physical domain covered by these languages.

Despite the fact that these tools are now widely used by a number of engineers, they raise a number of technical difficulties. The meaning of some programs, i.e., their mathematical semantics, is indeed ambiguous. A main source of difficulty is the correct simulation of continuous-time dynamics interacting with discrete-time dynamics: how should the propagation of mode switchings be handled? How can one avoid artifacts due to the use of a global ODE solver, causing unwanted coupling between seemingly non-interacting subsystems? Also, the mixed use of an equational style for the continuous dynamics with an imperative style for the mode changes and resets is a source of difficulty when handling parallel composition. It is therefore not uncommon that tools return complex warnings for some programs, with many different suggested hints for fixing them. Yet, these “pathological” programs can still be executed, if so desired, giving surprising results — see for instance the Simulink examples in 28, 20 and 21.

Indeed this area suffers from the same difficulties that led to the development of the theory of synchronous languages as an effort to fix obscure compilation schemes for discrete time equation based languages in the 1980's. Our vision is that hybrid systems modeling tools deserve similar efforts in theory as synchronous languages did for the programming of embedded systems.

Non-Standard analysis plays a central role in our research on hybrid systems modeling 20, 28, 22, 21, 26, 3. The following text provides a brief summary of this theory and gives some hints on its usefulness in the context of hybrid systems modeling. This presentation is based on our paper 2, a chapter of Simon Bliudze's PhD thesis 35, and a recent presentation of non-standard analysis, not axiomatic in style, due to the mathematician Lindström 80.

Non-standard numbers allowed us to reconsider the semantics of hybrid systems and propose a radical alternative to the super-dense time semantics developed by Edward Lee and his team as part of the Ptolemy II project: using infinitesimals and non-standard integers, cascades of successive instants can occur in zero (standard) time. Remark that non-standard semantics provides a framework that is familiar to the computer scientist and, at the same time, efficient as a symbolic abstraction. This makes it an excellent candidate for the development of provably correct compilation schemes and type systems for hybrid systems modeling languages.

Non-standard analysis was proposed by Abraham Robinson in the 1960s to allow the explicit manipulation of “infinitesimals” in analysis 92, 58, 54. Robinson's approach is axiomatic; he proposed adding three new axioms to the basic Zermelo-Fraenkel (ZFC) framework. While the need for non-standard analysis (in addition to the usual, standard analysis) has long agitated the mathematical community, it is not our purpose to debate such aspects. The important thing for us is that non-standard analysis allows the use of the non-standard discretization of continuous dynamics “as if” it were operational.
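To give the flavor of this operational use, let $\partial > 0$ be an infinitesimal. The derivative of a standard function $x$ is the standardization of a finite difference, and the non-standard Euler scheme makes the ODE $\dot{x} = f(x)$ executable step by step:
\[
\dot{x}(t) \;=\; \mathbf{st}\!\left(\frac{x(t+\partial) - x(t)}{\partial}\right),
\qquad
x(t+\partial) \;=\; x(t) + \partial \cdot f(x(t)).
\]
This is only a sketch of the intuition: the infinitesimal discretization agrees with the continuous dynamics up to an infinitesimal error, which is what allows it to serve as a faithful operational semantics for the continuous part of a hybrid system.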

Not surprisingly, such an idea is not novel. Iwasaki et al. 73 first proposed using non-standard analysis to discuss the nature of time in hybrid systems. Bliudze and Krob 34, 35 have also used non-standard analysis as a mathematical support for defining a system theory for hybrid systems. They discuss in detail the notion of “system” and investigate computability issues. The formalization they propose closely follows that of Turing machines, with a memory tape and a control mechanism.

The Modelica language is based on Differential Algebraic Equations (DAE). The general form of a DAE is given by:
\[ F(t, x, x', x'', \dots) = 0 \]
where $x$ is a vector of unknown functions of time and $F$ is a vector of functions of these unknowns and of a finite number of their time derivatives.

The leading variables of $F$ are the derivatives $x_i^{(d_i)}$, where $d_i$ is the maximal differentiation order of variable $x_i$ occurring in $F$. The state variables of $F$ are the derivatives $x_i^{(k)}$ with $0 \leq k < d_i$. A leading variable $x_i^{(d_i)}$ is algebraic
if $d_i = 0$, i.e., if $x_i$ only occurs undifferentiated in $F$.

DAE are a strict generalization of ordinary differential
equations (ODE), in the sense that it may not be immediate
to rewrite a DAE as an explicit ODE of the form $x' = f(x, t)$.
For a square DAE of dimension $n$ (with as many equations as leading variables), the question is whether it
can locally be made explicit, i.e., whether the Jacobian matrix of $F$ with respect to its leading variables is nonsingular. The minimal number of time differentiations of all or part of the equations of $F$ that are needed to reach this situation is the differentiation
index 46 of $F$.

In practice, the problem of automatically finding a minimal
set of equations to differentiate is tackled by settling for the structural nonsingularity of the Jacobian matrix, i.e., its
almost certain nonsingularity when its nonzero entries vary over some
neighborhood. In this framework, the structural analysis
(SA) of a DAE returns, when successful, the number of times each equation has to be differentiated so that the resulting system has a structurally nonsingular Jacobian matrix with respect to its leading variables.
A renowned method for the SA of DAE is the Pantelides method;
however, Pryce's $\Sigma $-method is also introduced in what
follows, as it is a crucial tool for our work.

In 1988, Pantelides proposed what is probably the most well-known SA method for DAE 88. The main idea of his work is that the structural representation of a DAE can be condensed into a bipartite graph whose left nodes (resp. right nodes) represent the equations (resp. the variables), and in which an edge exists if and only if the variable occurs in the equation.

By detecting specific subsets of the nodes, called Minimally
Structurally Singular (MSS) subsets, the Pantelides method
iteratively differentiates part of the equations until a perfect
matching between the equations and the leading variables is found. One
can easily prove that this is a necessary and sufficient condition for
the structural nonsingularity of the system.
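This necessary and sufficient condition is easy to check algorithmically. As an illustration, here is a minimal Python sketch (the toy incidence data is hypothetical, not taken from the cited works) that tests for a perfect matching between equations and leading variables using Kuhn's augmenting-path algorithm:

```python
# Structural nonsingularity test via bipartite matching (Kuhn's algorithm).
# The incidence data below is a hypothetical toy DAE, not from the report.

def max_matching(incidence):
    """incidence: dict equation -> variables occurring in it; returns the
    size of a maximum matching and the matching itself (variable -> equation)."""
    match = {}  # variable -> equation

    def try_assign(eq, seen):
        for var in incidence[eq]:
            if var in seen:
                continue
            seen.add(var)
            # Either the variable is free, or its current equation can be rematched.
            if var not in match or try_assign(match[var], seen):
                match[var] = eq
                return True
        return False

    matched = sum(try_assign(eq, set()) for eq in incidence)
    return matched, match

# Toy system: 3 equations over leading variables {x', y', z}.
incidence = {
    "f1": {"x'", "z"},
    "f2": {"y'", "z"},
    "f3": {"x'", "y'"},
}
size, match = max_matching(incidence)
print(size == 3)  # a perfect matching exists: structurally nonsingular
```

For a DAE with $n$ equations, a matching of size $n$ certifies structural nonsingularity; the Pantelides method differentiates equations until such a matching exists.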

The main reason why the Pantelides method is not used in our work is that it cannot efficiently be adapted to multimode DAE (mDAE). As a matter of fact, the adjacency graph of an mDAE has both its nodes and edges parametrized by the subset of modes in which they are active; this, in turn, requires a parametrized Pantelides method to branch every time no mode-independent MSS is found, ultimately resulting, in the worst case, in the enumeration of all modes.

Albeit less renowned than the Pantelides method, Pryce's
$\Sigma $-method is the cornerstone of our works. It relies on the signature matrix, or $\Sigma $-matrix, of a square DAE $F$ of dimension $n$.

This matrix is given by:
\[ \Sigma = (\sigma_{ij})_{1 \leq i, j \leq n} \]
where $\sigma_{ij}$ is the maximal differentiation order of variable $x_j$ occurring in equation $f_i$, or $-\infty$ if $x_j$ does not occur in $f_i$. The method consists in solving two dual linear programming problems over this matrix.

The primal problem consists in finding a maximum-weight
perfect matching (MWPM) in the weighted adjacency
graph. This is actually an assignment problem for
which several standard algorithms exist, such as the push-relabel
algorithm 67 or the Edmonds-Karp
algorithm 60, to name only a few. However, none of
these algorithms are easily parametrizable, even for applications to
mDAE systems with a fixed number of variables.
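For tiny systems, the primal problem can even be solved by brute force, which is enough to illustrate it. The sketch below (illustrative only; it uses the classical ideal pendulum as the example DAE) enumerates all perfect matchings of the signature matrix and keeps one of maximum weight:

```python
from itertools import permutations

NEG_INF = float("-inf")
# Signature matrix of the ideal pendulum DAE (a standard textbook example):
# rows f1: x'' - x*lam = 0, f2: y'' - y*lam + g = 0, f3: x^2 + y^2 - L^2 = 0;
# columns x, y, lam; entry = highest differentiation order, -inf if absent.
sigma = [
    [2, NEG_INF, 0],
    [NEG_INF, 2, 0],
    [0, 0, NEG_INF],
]

def mwpm(sigma):
    """Brute-force maximum-weight perfect matching (fine for tiny systems)."""
    n = len(sigma)
    best, best_perm = NEG_INF, None
    for perm in permutations(range(n)):  # perm[i] = variable matched to equation i
        w = sum(sigma[i][perm[i]] for i in range(n))
        if w > best:
            best, best_perm = w, perm
    return best, best_perm

weight, matching = mwpm(sigma)
print(weight)  # 2
```

On the pendulum, the maximum weight is 2, attained e.g. by matching $f_1 \mapsto x$, $f_2 \mapsto \lambda$, $f_3 \mapsto y$; dedicated assignment algorithms such as those cited above become necessary as soon as the system grows.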

The dual problem consists in finding the component-wise minimal
solution $(c, d)$ of the associated linear constraints, where $c_i$ and $d_j$ are the equation and variable offsets. It is solved by a fixpoint
iteration (FPI) that makes use of the MWPM found as a
solution to the primal problem, described by the set of tuples $(i, j)$ of matched equations and variables.
From the results proved by Pryce in 89, it is known
that the above algorithm terminates if and only if it is provided a
MWPM, and that the values it returns are independent of the choice of
a MWPM whenever several such matchings exist. In particular, a
direct corollary is that the offsets $(c, d)$ computed by the $\Sigma $-method are uniquely defined for a given DAE.
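The fixpoint iteration for the dual problem is simple enough to be sketched in a few lines of Python. The following toy implementation (an illustration of the iteration on the classical ideal pendulum, not code from the cited works) computes the equation offsets $c$ and variable offsets $d$ from a maximum-weight perfect matching:

```python
NEG_INF = float("-inf")
# Signature matrix of the ideal pendulum (rows f1..f3, columns x, y, lam).
sigma = [[2, NEG_INF, 0], [NEG_INF, 2, 0], [0, 0, NEG_INF]]
# A maximum-weight perfect matching: equation i is matched to variable assign[i].
assign = [0, 2, 1]  # f1 -> x, f2 -> lam, f3 -> y

def dual_offsets(sigma, assign):
    """Fixpoint iteration computing the smallest offsets c (equations), d (variables)."""
    n = len(sigma)
    c = [0] * n
    while True:
        # Each variable offset is forced up by the incident equations...
        d = [max(sigma[i][j] + c[i] for i in range(n) if sigma[i][j] != NEG_INF)
             for j in range(n)]
        # ...and each equation offset is tied to its matched variable.
        new_c = [d[assign[i]] - sigma[i][assign[i]] for i in range(n)]
        if new_c == c:
            return c, d
        c = new_c

c, d = dual_offsets(sigma, assign)
print(c, d)  # [0, 0, 2] [2, 2, 0]
```

The returned offsets tell how many times each equation (resp. variable) has to be differentiated; here the constraint $x^2 + y^2 = L^2$ must be differentiated twice, as expected for this classical index-3 DAE.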

Another important result is that, if the Pantelides method succeeds
for a given DAE, then the $\Sigma $-method also succeeds for it, and both methods essentially perform the same index reduction.
Working with this method is natural for our work, since the algorithm for solving the dual problem is easily parametrizable for dealing with multimode systems, as shown in our recent paper 45.

Once structural analysis has been performed, the reduced-index system can be handed over to the subsequent stages of the compilation, up to the generation of simulation code.

System companies such as automotive and aeronautic companies are facing significant difficulties due to the exponentially rising complexity of their products, coupled with increasingly tight demands on functionality, correctness, and time-to-market. The cost of being late to market or of imperfections in the products is staggering, as witnessed by the recalls and delivery delays that many major car and airplane manufacturers had to bear in recent years. The root causes of these design problems are complex and relate to a number of issues ranging from design processes and relationships with different departments of the same company and with suppliers, to incomplete requirement specification and testing.

We believe the most promising means to address the challenges in systems engineering is to employ formal design methodologies that seamlessly and coherently combine the various viewpoints of the design space (behavior, time, energy, reliability, ...), that provide the appropriate abstractions to manage the inherent complexity, and that can provide correct-by-construction implementations. The following issues must be addressed when developing new approaches to the design of complex systems:

The challenge is to address the entire process and not to consider only local solutions of methodology, tools, and models that ease part of the design.

Contract-based design has been proposed as a new approach to
the system design problem that is rigorous and effective in dealing
with the problems and challenges described before, and that, at the
same time, does not require a radical change in the way industrial
designers carry out their task as it cuts across design flows of
different types.
Indeed, contracts can be used almost everywhere and at nearly all
stages of system design, from early requirements capture, to embedded
computing infrastructure and detailed design involving circuits and
other hardware. Intuitively, a contract captures two properties,
respectively representing the assumptions on the environment and the
guarantees of the system under these assumptions. Hence, a
contract can be defined as a pair $(A, G)$, where $A$ is the assumption on the environment and $G$ the guarantee of the system.
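The assume/guarantee reading of a contract can be illustrated over finite sets of abstract behaviors. The following Python sketch (a toy rendition for intuition, not the full theory surveyed in the cited references) implements contract saturation and the standard refinement check, where refinement weakens assumptions and strengthens guarantees:

```python
# Toy contract algebra over finite sets of behaviors (a hedged illustration;
# the universe of behaviors and the contracts below are hypothetical).

UNIVERSE = frozenset(range(8))  # all possible behaviors, as abstract tokens

def saturate(A, G):
    """A contract (A, G) is equivalent to its saturated form (A, G | ~A):
    outside its assumption, a contract promises nothing."""
    return A, G | (UNIVERSE - A)

def refines(c1, c2):
    """c1 refines c2: weaker assumptions, stronger (saturated) guarantees."""
    A1, G1 = saturate(*c1)
    A2, G2 = saturate(*c2)
    return A2 <= A1 and G1 <= G2

spec = (frozenset({0, 1, 2}), frozenset({0, 1}))
impl = (frozenset({0, 1, 2, 3}), frozenset({0, 1}))  # accepts more environments
print(refines(impl, spec))  # True
```

Here the implementation refines the specification because it accepts strictly more environments while offering the same guarantee under the specification's assumption.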

A detailed bibliography on contract and interface theories for embedded system design can be found in 5. In a nutshell, contract and interface theories fall into two main categories:

Requirements Engineering is one of the major concerns in large systems industries today, particularly so in sectors where certification prevails 93. Most requirements engineering tools offer a poor structuring of the requirements and cannot be considered as formal modeling frameworks today. They are nothing less, and nothing more, than informal structured documentation enriched with hyperlinks.

We see Contract-Based Design and Interfaces Theories as innovative tools in support of Requirements Engineering. The Software Engineering community has extensively covered several aspects of Requirements Engineering, in particular:

Behavioral models and properties, however, are not properly encompassed by the above approaches. This is the cause of a remaining gap between this phase of systems design and later phases where formal model based methods involving behavior have become prevalent. We believe that our work on contract-based design and interface theories is best suited to bridge this gap.

This project consists in exploiting the parsimony of sparse systems to accelerate their symbolic manipulation (quantifiers elimination 53, differential-algebraic reductions 94 etc.).
Let us cite two typical examples as a motivation: Boolean functions (algebraic sets over the field of Booleans) and algebraic sets over the reals.

The current algorithms suffer from a theoretical complexity that is at best exponential in the size of the input, limiting their use to instances of very modest size. The classic approach to overcome this problem is to develop or use numerical methods (with their own limits and intrinsic problems), when possible of course. We aim to explore a different avenue.

In this project, we wish to exploit the structure of sparse systems to push the symbolic approach beyond its theoretical limits. The a priori limited applicability of our methods to dense systems is compensated by the fact that, in practice, problems are very often structured (in this regard, let us content ourselves with mentioning SAT solvers, which successfully tackle industrial instances of a theoretically NP-complete problem).

The idea of exploiting structure to speed up calculations that are a priori complex is not new. It has notably been developed and successfully used in signal processing via Factor Graphs 81, where one restricts oneself to local propagation of information, guided by an abstract graph which represents the overall structure of the system. Our approach is similar: we basically seek to use expensive algorithms sparingly, on subsystems involving only a small number of variables, thus hoping to avoid the theoretical worst case. One could then legitimately wonder why it is not enough to apply what has already been done on Factor Graphs. The difficulty (and the novelty, for that matter) lies in the implementation of this idea for the problems that interest us. Let us start by emphasizing that the propagation of information has a significantly different impact depending on the operator (or quantifier) to be eliminated: a minimization or a summation does not look like a projection at all! This will obviously not prevent us from importing good ideas applicable to our problems, and vice versa.

More related to symbolic computation, to our knowledge, at least two recent attempts exist: chordal networks 51 which propose a representation of the ideals of the ring of polynomials (therefore algebraic sets), and triangular block shapes 97, initiated independently and under development in our team and which tackle Boolean functions, or, if you will, the algebraic sets over the field of Booleans. The similarity between the two approaches is striking and suggests that there is a common way of doing things that could be exploited beyond these two examples. It is this unification that interests us in the first place in this project.

We identify three research problems to explore:
T1. Unify several optimization problems on graphs as a single problem parameterized by a cost function; we coin such a problem WAP, for Weighted Adjacency Propagation.
T2. Adapt (and possibly improve) the algorithm of 96 to WAP and consequently to all instances of the single problem detailed in T1.
T3. Propose a unified and modular method consisting of: (1) an elimination algorithm, (2) a data structure and (3) an efficient algorithm to solve the problem (with an adequate cost function).

The work on chordal networks and our work on Boolean functions immediately become special cases. For example, for Boolean functions, one could use Binary Decision Diagrams (BDDs) 41 to represent each piece of the initial system. In fact, the final representation will no longer be a single monolithic BDD as is currently the case, but rather a graph of BDDs. In the same way, an algebraic set will be represented by a graph where each node is a Gröbner basis (or any other data structure used to represent systems of equations).

The structure of the system thus becomes apparent and is exploited to optimize the representation used, opening the way to a better understanding and therefore to a more efficient and better targeted manipulation. Let us remember a simple fact here: symbolic manipulation often solves the problem exactly (without approximation or compromise). Therefore, pushing the limits of applicability of these techniques so that they scale can only be welcome, and will undoubtedly have a significant impact on all the areas where they apply, a list as long as it is varied (compilation, certification, validation, synthesis, etc.).

The Hycomes team contributes to the design of mathematical modeling languages and tools, to be used for the design of cyberphysical systems. In a nutshell, two major applications can be clearly identified: (i) our work on the structural analysis of multimode DAE systems has a sizeable impact on the techniques to be used in Modelica tools; (ii) our work on the verification of dynamical systems has an impact on the design methodology for safety-critical cyberphysical systems. These two applications are detailed below.

Mathematical modeling tools are a considerable business, with major actors such as MathWorks, with Matlab/Simulink, or Wolfram, with Mathematica. However, none of these prominent tools are suitable for the engineering of large systems. The Modelica language has been designed with this objective in mind, making the best of the advantages of DAEs to support a component-based approach. Several industries in the energy sector have adopted Modelica as their main systems engineering language.

Although multimode features have been introduced in version 3.3 of the language 62, proper tool support of multimode models is still lagging behind. The reason is not a lack of interest from tool vendors and academia, but rather that multimode DAE systems pose several fundamental difficulties, such as the proper definition of a concept of solutions for multimode DAEs, the handling of mode switchings that trigger a change of system structure, or the handling of impulsive variables. Our work on multimode DAEs focuses on these crucial issues 27.

Thanks to our IsamDAE software 45, 44, a larger class of Modelica models is expected to be compiled and simulated correctly. This should enable industrial users to write cleaner and simpler multimode Modelica models of cyberphysical systems with dynamically changing structure. In the longer term, our ambition is to provide efficient code-generation techniques for the Modelica language, supporting, in full generality, multimode DAE systems with dynamically changing differentiation index, structure and dimension.

The Hycomes team also focuses on scalability problems related to the compilation and simulation of large Modelica models. Digital twins developed by industrial Modelica users in the energy sector tend to be extremely large models.

The Hycomes team is working on a new generation of algorithms for the compilation of the Modelica language that can scale up to large models. The key contributions are modular index-reduction 9 and block-triangular equation sorting algorithms that can be applied to incomplete (rectangular) DAE systems.

In addition to well-defined operational semantics for hybrid systems, one often needs to provide formal guarantees about the behavior of some critical components of the system, or at least about its main underlying logic. To do so, we are actively developing new techniques to automatically verify whether a hybrid system complies with its specifications, and/or to automatically infer the envelope within which the system behaves safely. The approaches we developed have already been successfully used to formally verify the intricate logic of the ACAS X, a mid-air collision avoidance system that advises the pilot to go upward or downward to avoid a nearby airplane; this requires mixing the continuous motion of the aircraft with the discrete decisions made to resolve the potential conflict 74. This challenging example is but an instance of the kind of systems we are targeting: autonomous smart systems designed to perform sophisticated tasks with a tricky internal logic. What is perhaps even more interesting is that such techniques can often be "reverted" to actually synthesize missing components so that some property holds, effectively helping the design of such complex systems.

The expected impact of our research is to allow both better designs and better exploitation of energy production units and distribution networks, enabling large-scale energy savings. At least, this is what we could observe in the context of the FUI ModeliScale collaborative project (2018–2021), focused on electric grids, urban heat networks and building thermal modeling.

The rationale is as follows: system engineering models are meant to assess the correctness, safety and optimality of a system under design. However, system models are still useful after the system has been put in operation. This is especially true in the energy sector, where systems have an extremely long lifespan (for instance, more than 50 years for some nuclear power plants) and are upgraded periodically, to integrate new technologies. Exactly like in software engineering, where a software and its model co-evolve throughout the lifespan of the software, a co-evolution of the system and its physical models has to be maintained. This is required in order to maintain the safety of the system, but also its optimality.

Moreover, physical models can be instrumental to the optimal exploitation of a system. A typical example are model-predictive control (MPC) techniques, where the model is simulated, during the exploitation of the system, in order to predict system trajectories up to a bounded-time horizon. Optimal control inputs can then be computed by mathematical programming methods, possibly using multiple simulation results. This has been proved to be a practical solution 64, whenever classical optimal control methods are ineffective, for instance, when the system is non-linear or discontinuous. However, this requires the generation of high-performance simulation code, capable of simulating a system much faster than real-time.
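The MPC loop described above can be sketched in a few lines. The toy below is purely illustrative (the scalar dynamics, the horizon, the cost and the grid of candidate inputs are all hypothetical): it simulates a handful of candidate inputs over a bounded horizon and applies the first input of the best candidate, in the same "simulate, then optimize" spirit.

```python
# A minimal model-predictive control sketch (illustrative toy, not from the project).

def simulate(x, u, dt, horizon):
    """Forward-Euler simulation of the toy dynamics x' = -x + u."""
    traj = [x]
    for _ in range(horizon):
        x = x + dt * (-x + u)
        traj.append(x)
    return traj

def mpc_step(x, target, candidates, dt=0.1, horizon=10):
    """Pick the candidate input whose predicted trajectory best tracks the target."""
    def cost(u):
        return sum((xi - target) ** 2 for xi in simulate(x, u, dt, horizon))
    return min(candidates, key=cost)

x, target = 0.0, 1.0
candidates = [0.0, 0.5, 1.0, 1.5, 2.0]
for _ in range(50):
    u = mpc_step(x, target, candidates)     # optimize over the bounded horizon
    x = x + 0.1 * (-x + u)                  # apply the first input, advance the plant
print(abs(x - target) < 0.1)  # the closed loop settles near the target
```

Replacing the grid search by a mathematical programming solver, and the toy dynamics by generated simulation code, gives the scheme discussed above — whence the need for simulation code that runs much faster than real time.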

The structural analysis techniques implemented in IsamDAE 45 generate a conditional block dependency graph that can be used to generate high-performance simulation code: static code can be generated for each block of equations, and a scheduling of these blocks can be computed at runtime, at each mode switching, thanks to an inexpensive topological sort algorithm. Contrary to other approaches (such as 63), no structural analysis, block-triangular decomposition, or automatic differentiation has to be performed at runtime.

The most notable result of the Hycomes team for 2023 is the design and implementation of a modular structural analysis algorithm for multimode DAE systems that can scale up to very large models.

Modeling languages and tools based on Differential Algebraic Equations (DAE) bring several specific issues that do not exist with modeling languages based on Ordinary Differential Equations. The main problem is the determination of the differentiation index and latent equations. Prior to generating simulation code and calling solvers, the compilation of a model requires a structural analysis step, which reduces the differentiation index to a level acceptable by numerical solvers.

The Modelica language, among others, allows hybrid models with multiple modes, mode-dependent dynamics and state-dependent mode switching. These Multimode DAE (mDAE) systems are much harder to deal with. The main difficulties are (i) the combinatorial explosion of the number of modes, and (ii) the correct handling of mode switchings.

The IsamDAE software aims at providing a compilation chain for mDAE-based modeling languages, making it possible to efficiently generate correct simulation code for multimode models. Novel structural analysis methods for mDAE systems were designed and implemented, based on an implicit representation of the varying structure of such systems. Several standard algorithms, such as J. Pryce's Sigma-method and the Dulmage-Mendelsohn decomposition, were adapted to the multimode case, using Binary Decision Diagrams (BDD) to represent the mode-dependent structure of an mDAE system.

IsamDAE determines, as a function of the mode, the set of latent equations, the leading variables and the state vector. This is then used to compute a conditional dependency graph (CDG) of the system, that can be used to generate simulation code with a mode-dependent scheduling of the blocks of equations. The software is also fit for generating simulation code for the hybrid dynamical system simulation tool Siconos, as well as handling the structural analysis of the multimode consistent initialization problem associated with an mDAE system.

IsamDAE (Implicit Structural Analysis of Multimode DAE systems) is a software library implementing new structural analysis methods for multimode DAE systems, based on an implicit representation of incidence graphs, matchings between equations and variables, and block decompositions. The input of the software is a variable dimension multimode DAE system consisting of a set of guarded equations and guarded variable declarations. It computes a mode-dependent structural index reduction of the multimode system and is able to produce a mode-dependent graph for the scheduling of blocks of equations in long modes, check the structural nonsingularity of the associated consistent initialization problem, or generate simulation code for the nonsmooth dynamical system simulation tool Siconos.
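The notion of guarded equations can be illustrated as follows. In this hedged Python sketch (the mode variables and equations are hypothetical, and the mode-dependent structure is enumerated explicitly, rather than represented implicitly with BDDs as in IsamDAE), each equation carries a guard over the mode, and the active incidence structure changes with the mode:

```python
# A toy multimode system: an ideal switch, modeled with guarded equations.
# Names and equations are illustrative only.

modes = [{"closed": b} for b in (False, True)]

# Each equation: (guard over the mode, variables it constrains).
guarded_eqs = [
    (lambda m: True,            ("i", "v")),  # component law, always active
    (lambda m: m["closed"],     ("v",)),      # v = 0 when the switch is closed
    (lambda m: not m["closed"], ("i",)),      # i = 0 when the switch is open
]

for mode in modes:
    active = [vs for guard, vs in guarded_eqs if guard(mode)]
    print(mode, active)
```

In each mode, exactly two equations over two variables are active, but which equations they are — and hence the incidence structure to analyze — depends on the mode; an implicit (BDD-based) representation avoids enumerating the modes as this loop does.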

IsamDAE is coded in OCaml, and uses the following packages: GuaCaml by Joan Thibault, MLBDD by Arlen Cox, Menhir by François Pottier and Yann Régis-Gianas, Pprint by François Pottier, Snowflake by Joan Thibault, XML-Light by Nicolas Cannasse and Jacques Garrigue.

New features:

* XML representations of the structure of a multimode DAE model are accepted as inputs by the IsamDAE tool, in order to enable weak coupling with tools based on existing DAE-based languages. IsamDAE distinguishes between MEL and XML inputs based on the extension of the input file (.mel versus .mdae.xml).

Bug fixes:

* A better handling of the model structure for consistent initialization prevents subtle bugs that were observed for a few models and initial events. Specific error messages are returned when initial equations involve variables that are not active in the corresponding modes.

Performance improvement:

* Better handling of sets of equations/variables labeled with propositional formulas, thanks to an adapted data structure.

Various:

* Verbosity option -v now takes as a parameter an integer ranging from 0 ("quiet") to 5 ("deep debug"). The detailed output of CoSTreD is only available in "deep debug" mode.

Complex systems (either physical or logical) are structured and sparse, that is, they are built from individual components linked together, and any component is only linked to a rather small number of other components with respect to the size of the global system.

RBTF exploits this structure by over-approximating the relations between components as a tree (called a decomposition tree in the graph literature), each node of this tree being a set of components of the initial system. Then, starting from the leaves, each sub-system is solved and the solutions are projected as new constraints on their parent nodes; this process is iterated until all sub-systems are solved. This step condenses all constraints and checks their satisfiability. We call this step the **Forward Reduction Process** (FRP).

Finally, we can propagate all the constraints back into their initial sub-systems by performing those same projections in the reverse direction. That is, each sub-system updates its set of solutions given the information from its parent, then sends the information to its children sub-systems (possibly none, if it is a leaf). We call this step the **Backward Propagation Process** (BPP).
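The FRP/BPP pair can be sketched on a two-node tree of finite-domain components. In the toy Python below (variables, domains and constraints are purely illustrative, not RBTF code), the forward pass projects a child's solutions onto the variables shared with its parent, and the backward pass propagates the reduced solutions back down:

```python
from itertools import product

def project(sols, vars_, onto):
    """Project a set of assignments (dicts over vars_) onto a subset of variables."""
    keep = [v for v in vars_ if v in onto]
    return {tuple(sol[v] for v in keep) for sol in sols}

def solutions(vars_, constraint, domain=(0, 1)):
    """Enumerate the assignments over vars_ satisfying the constraint."""
    return [dict(zip(vars_, vals)) for vals in product(domain, repeat=len(vars_))
            if constraint(dict(zip(vars_, vals)))]

# Tree: root component over {a, b}, leaf component over {b, c}; shared variable: b.
root_vars, leaf_vars = ("a", "b"), ("b", "c")
root_sols = solutions(root_vars, lambda s: s["a"] != s["b"])
leaf_sols = solutions(leaf_vars, lambda s: s["b"] == 1 and s["c"] == 0)

# Forward Reduction Process: project the leaf onto the shared variable {b}
# and use the result as an extra constraint on the root.
leaf_msg = project(leaf_sols, leaf_vars, {"b"})
root_sols = [s for s in root_sols if (s["b"],) in leaf_msg]

# Backward Propagation Process: project the reduced root back onto {b}
# and filter the leaf accordingly.
root_msg = project(root_sols, root_vars, {"b"})
leaf_sols = [s for s in leaf_sols if (s["b"],) in root_msg]

print(root_sols, leaf_sols)  # [{'a': 0, 'b': 1}] [{'b': 1, 'c': 0}]
```

The expensive operation (enumerating solutions) only ever runs on small sub-systems; only projections onto the shared variables travel along the tree.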

A key feature of the Modelica language is its object-oriented nature: components are instances of classes and they can aggregate other components, so that extremely large models can be efficiently designed as "trees of components". However, the structural analysis of Modelica models, a necessary step for generating simulation code, often relies on the flattening of this hierarchical structure, which undermines the scalability of the language and results in widely-used Modelica tools not being able to compile and simulate such large models. This software implements a new algorithm for the modular structural analysis of Modelica models. An adaptation of Pryce's Sigma-method for non-square DAE systems, along with a carefully crafted notion of component interface, makes it possible to fully exploit the object tree structure of a model. The structural analysis of a component class can be performed once and for all, only requiring the information provided by the interfaces of its child components. The resulting method alleviates the exponential computation costs that can result from model flattening; its scalability makes it ideally suited for the modeling and simulation of large cyber-physical systems.

Algorithms implemented in modularSigma are based on the Sigma-method, which reduces the DAE structural index-reduction problem to two complementary linear programs: the primal problem amounts to the computation of a maximum-weight perfect matching of the equation-variable incidence graph of the DAE, while the dual problem consists in the computation of the minimal solution of a system of difference constraints, representable as a difference bound matrix (DBM). Modularity is achieved thanks to a decomposition of both problems, using dynamic programming principles (akin to the message passing techniques often used in statistical estimation) and memoization of the intermediate results.

Given a semi-algebraic set S, that is, a Boolean combination of polynomial equations and inequalities, and a polynomial differential equation, we show that an algorithm can effectively decide whether S is a positive invariant set for the considered dynamics, that is, whether every trajectory with initial condition in S remains in S.
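A first-order intuition behind such decision procedures (a simplified sufficient condition only — the complete algorithm must also consider higher-order Lie derivatives and the Boolean structure of S) can be stated for the simplest case $S = \{x \mid p(x) \leq 0\}$ with dynamics $\dot{x} = f(x)$:
\[
\big(\forall x,\; p(x) = 0 \;\Rightarrow\; \mathcal{L}_f\, p(x) < 0\big)
\;\Longrightarrow\; S \text{ is positively invariant},
\qquad \text{where } \mathcal{L}_f\, p = \nabla p \cdot f.
\]
Since $p$ and $f$ are polynomial, such conditions are first-order sentences over the reals, and are therefore amenable to real quantifier elimination.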

We implemented two different procedures in Mathematica. Both require a backend algorithm for the elimination of real quantifiers (such as Cylindrical Algebraic Decomposition). One procedure forms a monolithic request for the entire problem. The other chops the problem into small pieces, following the Boolean structure of the input set S.

System modeling tools are key to the engineering of safe and efficient Cyber-Physical Systems (CPS). Although ODE-based languages and tools, such as Simulink 85, are widely used in industry, there are two main reasons why DAE-based modeling is better suited to the modeling of such systems: it enables modeling based on first principles of the physics; and it is physics-agnostic, consequently accommodating arbitrary combinations of physics (mechanics, electrokinetics, hydraulics, thermodynamics, chemical reactions, etc.).

The pioneering work by Hilding Elmqvist 61 led to the emergence of the Modelica community in the 1990s, and the DAE-based modeling language of the same name 17 has become a de facto standard, with its object-oriented nature enabling a component-based modeling style.
Its combined use with the port-Hamiltonian paradigm 91 results in a methodology that is instrumental to the scalable modeling of large systems, additionally ensuring that the model architecture
preserves the system architecture, in stark contrast to
ODE-based modeling 23, 25.

Consequently, DAE-based modeling requires that Modelica tools properly scale up to very large models. However, although Modelica enables the modeling of extremely large systems, its implementations 95, 66 are often not capable of compiling and simulating such large models. Scaling has been, and still is, a subject of sustained effort by the Modelica community 47, and although HPC issues belong to the landscape 37, a more specific issue is of utmost importance for the Modelica language.

In the first steps of the compilation of a Modelica model, its hierarchical structure is flattened, thanks to a recursive syntactic inlining of the objects
composing it. See 17, Section 5.6 for a complete definition of this flattening process.
The result is an unstructured DAE that can be exponentially larger than the source model. The structural analyses that are required for the generation of simulation code (namely, the index reduction of the DAE system, followed by a block-triangular form transformation of the reduced-index system) are then performed on this monolithic DAE model.
As the compilation process does not fully take advantage of the
hierarchical nature of the models it has to handle, the modeling capabilities offered by the Modelica language are undermined by performance issues on the structural analysis itself 70, 69. Additionally, model flattening poses a challenge when attempting to extend
DAE-based modeling to higher-order modeling or dynamically changing systems 38, 40, 39.
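The flattening step above can be mimicked by the following Python sketch, built on a hypothetical three-class model library: each instance contributes a fresh, prefixed copy of its class's equations, which is why a hierarchy of depth n with two children per level yields on the order of 2^n equations in the flat model.

```python
# Schematic model library (hypothetical): each class has local equations
# (plain strings here) and named children that are instances of other classes.
classes = {
    "Resistor": {"eqs": ["v = R * i"], "children": {}},
    "Branch":   {"eqs": ["i1 + i2 = i"],
                 "children": {"r1": "Resistor", "r2": "Resistor"}},
    "Circuit":  {"eqs": [],
                 "children": {"b1": "Branch", "b2": "Branch"}},
}

def flatten(cls, prefix=""):
    """Recursive syntactic inlining: every instance contributes a fresh,
    prefixed copy of its class's equations."""
    c = classes[cls]
    eqs = [prefix + e for e in c["eqs"]]
    for name, child in c["children"].items():
        eqs += flatten(child, prefix + name + ".")
    return eqs

flat = flatten("Circuit")
print(len(flat))  # 2 Branch equations + 4 Resistor equations = 6
```

The real flattening process, defined in the Modelica specification, also handles modifiers, connect equations, and inheritance; the exponential growth mechanism, however, is exactly this duplication of equations per instance.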

In 9, a new modular structural analysis algorithm is proposed that takes full advantage of the object tree structure of a DAE model.
The bedrock of this method is a novel concept of structural analysis-aware interface for components.
The essence of a component interface is to capture the information about a Modelica class that needs to be exposed in order to perform the structural analysis of a component comprising instances of this class, while hiding away irrelevant information, such as the equations and all protected features it may contain.

In order to compute a component interface, one has to be able to perform the structural analysis of the possibly non-square DAE system that this component encapsulates, and to use the interfaces of the components it aggregates in this analysis.
We base our algorithm on Pryce's Σ-method.

Putting all of this together, it is then possible to perform a modular structural analysis, in which structural analysis is performed at the class level, and the results can then be instantiated for each component of the system model, knowing its context.
Hence, structural information at the system level is derived from composing the result of component-level analysis.
Modular structural analysis yields huge gains in terms of memory usage and computational costs, as the analysis of a single large-scale DAE is replaced with that of multiple smaller subsystems.
Moreover, the analysis is performed at the class level, meaning that a single structural analysis is needed for all system components that are instances of the same class.
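This class-level sharing can be sketched with plain memoization; the analyze function below is a stand-in for the real structural analysis, not the algorithm of 9.

```python
from functools import lru_cache

analyses = 0

@lru_cache(maxsize=None)
def analyze(cls):
    """Structural analysis performed once per class (results are memoized),
    then instantiated for each component; a stand-in for the real analysis."""
    global analyses
    analyses += 1
    return f"interface({cls})"

# A system with 2000 components, but only two distinct classes:
components = [(f"pump{k}", "Pump") for k in range(1000)] + \
             [(f"valve{k}", "Valve") for k in range(1000)]
results = {name: analyze(cls) for name, cls in components}

print(analyses)  # 2: one analysis per class, not per instance
```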

A new collaboration between Hycomes and the University of Linköping (Sweden) started this year on the topic of system diagnosis based on multimode DAE systems.

Fault detection and diagnosis are important for the health monitoring of physical systems. Model-based diagnosis of single-mode, smooth systems is a well-established field, supported by a large body of literature covering various approaches such as structural methods 33, parity space techniques, and observer-based methods 72.

While single-mode systems are often described using differential algebraic equations (DAEs), the modeling of non-smooth physical systems yields switched DAEs, also known as multimode DAEs (mDAEs), which combine continuous behaviors, defined as solutions of a set of DAE systems, with discrete mode changes 98, 27. Direct application of traditional fault diagnosis methods to all possible configurations of multimode systems quickly becomes intractable, as the number of modes tends to be exponential in the size of the system. The method proposed in 75 works around this issue by coupling a mode estimation algorithm with a single-mode diagnosis methodology, akin to just-in-time compilation in computer science. Unfortunately, this approach puts the burden on solving mode estimation problems, which often turn out to be intractable for the same reason.

Structural fault detectability and isolability is a graph-based method for evaluating diagnosability properties of DAEs 65. It is based on the Dulmage-Mendelsohn (DM) decomposition, a building block of the structural analysis of equation systems. In 11, we show how its extension to multimode systems, introduced in 4, can be applied in the context of structural fault detectability and isolability of mDAEs 68. Building on our previous studies, the methods presented in this paper advance diagnostic methodologies for multimode systems, providing novel ways to study their diagnosability without enumerating their modes.
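For the single-mode case, structural detectability admits a compact illustration (a hypothetical two-sensor example, not the method of 11): a fault is structurally detectable when its equation belongs to the overdetermined part of the DM decomposition, and an equation belongs to that part exactly when some maximum matching leaves it unmatched, i.e., when removing it does not decrease the maximum matching size.

```python
def max_matching(eq_vars):
    """Maximum bipartite matching size (equations vs. variables),
    computed with augmenting paths."""
    match = {}  # variable -> equation

    def augment(e, seen):
        for v in eq_vars[e]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = e
                return True
        return False

    return sum(augment(e, set()) for e in eq_vars)

# Toy static model: one unknown x, two redundant sensor equations.
eqs = {"e1": {"x"}, "e2": {"x"}}

def detectable(e):
    # e lies in the structurally overdetermined part iff removing it
    # does not decrease the maximum matching size.
    rest = {k: v for k, v in eqs.items() if k != e}
    return max_matching(rest) == max_matching(eqs)

print(detectable("e1"), detectable("e2"))  # both sensors are redundant
```

With two sensors measuring the same quantity, either equation can be left out of a maximum matching, so a fault in either sensor produces a residual: both are structurally detectable.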

The case study used throughout this article is a model of a reconfigurable battery system, in which switching strategies make it possible to produce an AC output without relying on a central inverter 19. The model is parametrized by the number of battery cells, so that both the inherent complexity of diagnosing such systems and the scalability of our approaches can be assessed.

Graphical models in probability and statistics are a core concept in the area of probabilistic reasoning and probabilistic programming; graphical models include Bayesian networks and factor graphs. For the modeling and formal verification of probabilistic systems, probabilistic automata were introduced. A coherent suite of models consisting of Mixed Systems, Mixed Bayesian Networks, and Mixed Automata is proposed in 8. This framework extends factor graphs, Bayesian networks, and probabilistic automata with the handling of nondeterminism. Each of these models comes with a parallel composition, and we establish clear relations between the three models. We also provide a detailed comparison between Mixed Automata and Probabilistic Automata.

Hybrid systems are dynamical systems alternating between continuous-time dynamics, called modes, and nonsmooth transitions between modes.
Linear complementarity systems (LCS) form a special class of hybrid systems with an exponential number of modes and a linear differential algebraic equation in each mode. LCS are for instance used to describe mechanical and electrical systems featuring perfect contacts or ideal switches. For example, the ideal (Zener) diode is a 1-dimensional LCS with two modes: a passing mode in one direction and a blocking mode in the other direction.
While seemingly simple, little is known about the existence, and possibly the uniqueness, of continuous solutions (in the state space).
The only known sufficient condition is too strong, as it requires the existence and uniqueness of solutions for the underlying linear complementarity problem (LCP). A Q-matrix is a matrix for which the LCP admits a solution for every right-hand side vector; we proved that, for a fixed matrix, when solutions exist for almost all right-hand sides, holes (right-hand sides with no solution) can only occur at specific locations.
We then exploited this property to fully characterize Q-matrices in dimension 3.

Characterizing Q-matrices in arbitrary finite dimension is a long-standing problem with no known solution.
This property allowed us to reduce the spatial case to finitely many planar problems that we were able to enumerate and solve.
Our characterization takes the form of a program enumerating a long list of (symbolic) constraints on the entries of the matrix.
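The exponential mode structure of LCS can be illustrated by the following standard-library-only Python sketch, which solves a small LCP by enumerating its 2^n complementarity modes with exact rational arithmetic. This brute force is a toy for exposition, unrelated to the characterization above, and the example matrix is a P-matrix, for which a unique solution exists for every right-hand side.

```python
from fractions import Fraction
from itertools import product

def solve_linear(A, b):
    """Exact Gaussian elimination; returns x with A x = b, or None if singular."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return None
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def lcp(M, q):
    """Find z >= 0 with w = M z + q >= 0 and z^T w = 0, by enumerating
    the 2^n complementarity modes (for each i: z_i = 0 or w_i = 0)."""
    n = len(q)
    for mode in product([0, 1], repeat=n):
        A, b = [], []
        for i in range(n):
            if mode[i]:                    # enforce w_i = (M z + q)_i = 0
                A.append(list(M[i]))
                b.append(-q[i])
            else:                          # enforce z_i = 0
                A.append([1 if j == i else 0 for j in range(n)])
                b.append(0)
        z = solve_linear(A, b)
        if z is None or any(v < 0 for v in z):
            continue
        w = [sum(M[i][j] * z[j] for j in range(n)) + q[i] for i in range(n)]
        if all(v >= 0 for v in w):
            return z, w
    return None

sol = lcp([[2, 1], [1, 2]], [-1, -1])  # P-matrix: a solution always exists
print(sol)
```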

Research Directions: Cyber-Physical Systems, a new open-access journal published by Cambridge University Press. He is also serving on the board of the MDPI Computation journal.

Discrete Event Dynamic Systems and the IEEE Transactions on Automatic Control journals.