Section: Research Program
Structural Analysis of DAE Systems
The Modelica language is based on Differential Algebraic Equations (DAE). The general form of a DAE is given by:
$$F\left(t, x, {x}', {x}'', \cdots \right) = 0$$
where $F$ is a system of ${n}_{e}$ equations $\{{f}_{1},\cdots ,{f}_{{n}_{e}}\}$ and $x$ is a finite list of ${n}_{v}$ independent real-valued, smooth enough, functions $\{{x}_{1},\cdots ,{x}_{{n}_{v}}\}$ of the independent variable $t$. We use ${x}'$ as a shorthand for the list of first-order time derivatives of the ${x}_{j}$, $j=1,\cdots ,{n}_{v}$. Higher-order derivatives are recursively defined as usual, and ${x}^{\left(k\right)}$ denotes the list formed by the $k$th derivatives of the functions ${x}_{j}$. Each ${f}_{i}$ depends on the scalar $t$ and some of the functions ${x}_{j}$, as well as a finite number of their derivatives.
Let ${\sigma}_{i,j}$ denote the highest differentiation order of variable ${x}_{j}$ effectively appearing in equation ${f}_{i}$, or $-\infty $ if ${x}_{j}$ does not appear in ${f}_{i}$. The leading variables of $F$ are the variables in the set
$$\left\{ {x}_{j}^{\left({\sigma}_{j}\right)} \;\middle|\; {\sigma}_{j} = \underset{i}{max}\,{\sigma}_{i,j},\ j = 1,\cdots ,{n}_{v} \right\}$$
The state variables of $F$ are the variables in the set
$$\left\{ {x}_{j}^{\left(k\right)} \;\middle|\; 0 \le k < {\sigma}_{j},\ j = 1,\cdots ,{n}_{v} \right\}$$
A leading variable ${x}_{j}^{\left({\sigma}_{j}\right)}$ is said to be algebraic if ${\sigma}_{j}=0$ (in which case, neither ${x}_{j}$ nor any of its derivatives are state variables). In the sequel, $v$ and $u$ denote the leading and state variables of $F$, respectively.
DAE are a strict generalization of ordinary differential equations (ODE), in the sense that it may not be immediate to rewrite a DAE as an explicit ODE of the form $v=G\left(u\right)$. The reason is that this transformation relies on the Implicit Function Theorem, requiring that the Jacobian matrix $\frac{\partial F}{\partial v}$ have full rank. This is, in general, not the case for a DAE. Simple examples, like the two-dimensional fixed-length pendulum in Cartesian coordinates [55], exhibit this behaviour.
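For concreteness, a standard writing of this pendulum model (an assumed form; signs and constants vary across presentations), with $g$ the gravity constant, $L$ the rod length and $\lambda$ a Lagrange multiplier, is:
$$\left\{\begin{array}{l} x'' + \lambda x = 0 \\ y'' + \lambda y - g = 0 \\ x^{2} + y^{2} - L^{2} = 0 \end{array}\right.$$
Its leading variables are $x''$, $y''$ and $\lambda$; its state variables are $x$, $x'$, $y$ and $y'$; and $\lambda$ is algebraic. The Jacobian of this system with respect to its leading variables is singular, since $\lambda$ does not appear in the third equation.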
For a square DAE of dimension $n$ (i.e., we now assume ${n}_{e}={n}_{v}=n$) to be solved in the neighborhood of some $({v}^{*},{u}^{*})$, one needs to find a set of nonnegative integers $C=\{{c}_{1},\cdots ,{c}_{n}\}$ such that system
$${F}^{\left(C\right)} = \left\{ {f}_{1}^{\left({c}_{1}\right)}, \cdots, {f}_{n}^{\left({c}_{n}\right)} \right\}$$
can locally be made explicit, i.e., the Jacobian matrix of ${F}^{\left(C\right)}$ with respect to its leading variables, evaluated at $({v}^{*},{u}^{*})$, is nonsingular. The smallest possible value of ${max}_{i}{c}_{i}$ for a set $C$ that satisfies this property is the differentiation index [32] of $F$, that is, the minimal number of time differentiations of all or part of the equations ${f}_{i}$ required to get an ODE.
In practice, the problem of automatically finding a "minimal" solution $C$ to this problem quickly becomes intractable. Moreover, the differentiation index may depend on the value of $({v}^{*},{u}^{*})$. This is why, in lieu of numerical nonsingularity, one is interested in the structural nonsingularity of the Jacobian matrix, i.e., its almost certain nonsingularity when its nonzero entries vary over some neighborhood. In this framework, the structural analysis (SA) of a DAE returns, when successful, values of the ${c}_{i}$ that are independent of any given value of $({v}^{*},{u}^{*})$.
A renowned method for the SA of DAE is the Pantelides method; however, Pryce's $\Sigma$-method is also introduced in what follows, as it is a crucial tool for our work.
Pantelides method
In 1988, Pantelides proposed what is probably the best-known SA method for DAE [55]. The leading idea of his work is that the structural representation of a DAE can be condensed into a bipartite graph whose left nodes (resp. right nodes) represent the equations (resp. the variables), and in which an edge connects an equation to a variable if and only if the variable occurs in the equation.
By detecting specific subsets of the nodes, called Minimally Structurally Singular (MSS) subsets, the Pantelides method iteratively differentiates part of the equations until a perfect matching between the equations and the leading variables is found. One can easily prove that this is a necessary and sufficient condition for the structural nonsingularity of the system.
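The perfect-matching condition above can be checked with a standard bipartite matching algorithm. The sketch below uses Kuhn's augmenting-path method on a hypothetical three-equation incidence structure; it only tests structural nonsingularity and does not perform the equation differentiations of the Pantelides method itself.

```python
def perfect_matching(adj, n_eqs):
    """Kuhn's augmenting-path algorithm on a bipartite graph.

    adj maps each equation index to the list of variable indices
    occurring in it; returns a variable -> equation matching, or
    None if no perfect matching exists (structural singularity).
    """
    match = {}  # variable j -> equation i

    def try_assign(i, seen):
        for j in adj[i]:
            if j in seen:
                continue
            seen.add(j)
            # j is free, or the equation holding j can be re-assigned
            if j not in match or try_assign(match[j], seen):
                match[j] = i
                return True
        return False

    for i in range(n_eqs):
        if not try_assign(i, set()):
            return None
    return match

# Hypothetical incidence structure: f1 involves x and lambda, f2 involves
# y and lambda, f3 involves x and y (variables numbered x=0, y=1, lambda=2).
adj = {0: [0, 2], 1: [1, 2], 2: [0, 1]}
m = perfect_matching(adj, 3)
print(m is not None)  # True: the system is structurally nonsingular
```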
The main reason why the Pantelides method is not used in our work is that it cannot efficiently be adapted to multimode DAE (mDAE). As a matter of fact, the adjacency graph of an mDAE has both its nodes and edges parametrized by the subset of modes in which they are active; this, in turn, requires that a parametrized Pantelides method branch every time no mode-independent MSS is found, ultimately resulting, in the worst case, in the enumeration of modes.
Pryce's $\Sigma$-method
Albeit less renowned than the Pantelides method, Pryce's $\Sigma$-method [56] is an efficient SA method for DAE, whose equivalence to the Pantelides method has been proved by its author. This method consists in solving two successive problems, denoted primal and dual, relying on the $\Sigma$-matrix, or signature matrix, of the DAE $F$.
This matrix is given by:
$$\Sigma = {\left({\sigma}_{ij}\right)}_{1 \le i,j \le n}$$
where ${\sigma}_{ij}$ is equal to the greatest integer $k$ such that ${x}_{j}^{\left(k\right)}$ appears in ${f}_{i}$, or $-\infty $ if variable ${x}_{j}$ does not appear in ${f}_{i}$. It is the adjacency matrix of a weighted bipartite graph, with a structure similar to that of the graph considered in the Pantelides method, but whose edges are weighted by the highest differentiation orders. The $-\infty $ entries denote nonexistent edges.
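As an illustration, for the assumed pendulum form $f_1 = x'' + \lambda x$, $f_2 = y'' + \lambda y - g$, $f_3 = x^{2} + y^{2} - L^{2}$, with columns ordered $x, y, \lambda$, the signature matrix is:
$$\Sigma = \left(\begin{array}{ccc} 2 & -\infty & 0 \\ -\infty & 2 & 0 \\ 0 & 0 & -\infty \end{array}\right)$$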
The primal problem consists in finding a maximum-weight perfect matching (MWPM) in the weighted adjacency graph. This is actually an assignment problem, for which several standard algorithms exist, such as the push-relabel algorithm [44] or the Edmonds-Karp algorithm [43], to name only a few. However, none of these algorithms are easily parametrizable, even for applications to mDAE systems with a fixed number of variables.
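For illustration only, on a small system the primal problem can be solved by brute force: the sketch below enumerates all perfect matchings of the assumed pendulum signature matrix (columns ordered $x, y, \lambda$) and keeps one of maximum weight. This is exponential in $n$; real implementations use a proper assignment algorithm instead.

```python
from itertools import permutations

NEG_INF = float("-inf")

# Assumed signature matrix of the fixed-length pendulum, columns x, y, lambda:
# f1 = x'' + lambda*x, f2 = y'' + lambda*y - g, f3 = x^2 + y^2 - L^2.
sigma = [
    [2, NEG_INF, 0],
    [NEG_INF, 2, 0],
    [0, 0, NEG_INF],
]

def max_weight_perfect_matching(sigma):
    """Exponential-time MWPM: try every assignment of equations to columns."""
    n = len(sigma)
    best, best_weight = None, NEG_INF
    for cols in permutations(range(n)):  # equation i is matched to cols[i]
        weight = sum(sigma[i][cols[i]] for i in range(n))
        if weight > best_weight:
            best, best_weight = cols, weight
    return best, best_weight

matching, weight = max_weight_perfect_matching(sigma)
print(matching, weight)  # (0, 2, 1) 2, i.e. f1-x, f2-lambda, f3-y
```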
The dual problem consists in finding the componentwise minimal solution $(C,D)=(\{{c}_{1},\cdots ,{c}_{n}\},\{{d}_{1},\cdots ,{d}_{n}\})$ to a given linear programming problem, defined as the dual of the aforementioned assignment problem. This is performed by means of a fixpoint iteration (FPI) that makes use of the MWPM found as a solution to the primal problem, described by the set of tuples ${\left\{(i,{j}_{i})\right\}}_{i\in \{1,\cdots ,n\}}$:

Initialize $\{{c}_{1},\cdots ,{c}_{n}\}$ to the zero vector, then repeat the two following steps until a fixpoint is reached:

for every $j\in \{1,\cdots ,n\}$, ${d}_{j}\leftarrow \underset{i}{max}({\sigma}_{ij}+{c}_{i})$;

for every $i\in \{1,\cdots ,n\}$, ${c}_{i}\leftarrow {d}_{{j}_{i}}-{\sigma}_{i,{j}_{i}}$.
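The iteration above admits a direct transcription; the sketch below runs it on the assumed pendulum signature matrix (columns $x, y, \lambda$) with the MWPM $\{(f_1,x),(f_2,\lambda),(f_3,y)\}$.

```python
NEG_INF = float("-inf")

def dual_fixpoint(sigma, matching):
    """Pryce's dual fixpoint iteration. matching[i] is the column j_i
    assigned to equation i by a maximum-weight perfect matching."""
    n = len(sigma)
    c = [0] * n  # initialize {c_1, ..., c_n} to the zero vector
    while True:
        # d_j <- max_i (sigma_ij + c_i); -inf entries never win the max
        d = [max(sigma[i][j] + c[i] for i in range(n)) for j in range(n)]
        # c_i <- d_{j_i} - sigma_{i, j_i}
        new_c = [d[matching[i]] - sigma[i][matching[i]] for i in range(n)]
        if new_c == c:  # fixpoint reached
            return c, d
        c = new_c

# Assumed pendulum signature matrix, columns x, y, lambda.
sigma = [[2, NEG_INF, 0], [NEG_INF, 2, 0], [0, 0, NEG_INF]]
c, d = dual_fixpoint(sigma, (0, 2, 1))  # MWPM: f1-x, f2-lambda, f3-y
print(c, d)  # [0, 0, 2] [2, 2, 0]
```

The offsets say that $f_3$ must be differentiated twice, and that the leading variables of the resulting system are $x''$, $y''$ and $\lambda$, consistent with the known results for the pendulum.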
From the results proved by Pryce in [56], it is known that the above algorithm terminates if and only if it is provided with a MWPM, and that the values it returns are independent of the choice of the MWPM whenever several such matchings exist. In particular, a direct corollary is that the $\Sigma$-method succeeds as long as a perfect matching can be found between equations and variables.
Another important result is that, if the Pantelides method succeeds for a given DAE $F$, then the $\Sigma$-method also succeeds for $F$, and the values it returns for $C$ are exactly the differentiation indices for the equations returned by the Pantelides method. As for the values of the ${d}_{j}$, given by ${d}_{j}={max}_{i}({\sigma}_{ij}+{c}_{i})$, they are the differentiation indices of the leading variables in ${F}^{\left(C\right)}$.
Working with this method is natural for our work, since the algorithm for solving the dual problem is easily parametrizable for dealing with multimode systems, as shown in our recent paper [31].
Block triangular decomposition
Once structural analysis has been performed, system ${F}^{\left(C\right)}$ can be regarded, for the needs of numerical solving, as an algebraic system with unknowns ${x}_{j}^{\left({d}_{j}\right)}$, $j=1,\cdots ,n$. As such, (inter)dependencies between its equations must be taken into account in order to put it into block triangular form (BTF). Three steps are required:

the dependency graph of system ${F}^{\left(C\right)}$ is generated, by taking into account the perfect matching between equations ${f}_{i}^{\left({c}_{i}\right)}$ and unknowns ${x}_{j}^{\left({d}_{j}\right)}$;

the strongly connected components (SCC) in this graph are determined: these will be the equation blocks that have to be solved;

the block dependency graph is constructed as the condensation of the dependency graph, from the knowledge of the SCC; a BTF of system ${F}^{\left(C\right)}$ can be made explicit from this graph.
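The three steps above can be sketched as follows, on a hypothetical dependency graph (an edge $i \to k$ meaning that equation $k$ uses the unknown matched to equation $i$). Tarjan's algorithm conveniently emits the SCC in reverse topological order of the condensation, so reversing its output directly yields a block triangular order.

```python
def tarjan_scc(graph):
    """Tarjan's algorithm: strongly connected components of a directed
    graph, emitted in reverse topological order of the condensation."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:  # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# Hypothetical dependency graph between four matched equations:
# equations 0 and 1 depend on each other (one algebraic block),
# equation 2 uses the unknown of 1, equation 3 uses the unknown of 2.
graph = {0: [1], 1: [0, 2], 2: [3], 3: []}
blocks = [sorted(comp) for comp in reversed(tarjan_scc(graph))]
print(blocks)  # [[0, 1], [2], [3]]: blocks to be solved in this order
```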