Section: Research Program
Structural Analysis of DAE Systems
The Modelica language is based on Differential Algebraic Equations (DAE). The general form of a DAE is given by:
$$F\left(t, x, x', x'', \dots\right) = 0$$
where $F = (f_1, \dots, f_n)$ is a system of $n$ equations and $x = (x_1, \dots, x_m)$ is a finite list of independent real-valued, smooth enough, functions of the independent variable $t$. We use $x'$ as a shorthand for the list of first-order time derivatives of $x$, $x' = (x_1', \dots, x_m')$. High-order derivatives are recursively defined as usual, and $x^{(k)}$ denotes the list formed by the $k$-th derivatives of the functions $x_j$. Each $f_i$ depends on the scalar $t$ and some of the functions $x_j$ as well as a finite number of their derivatives.
Let $\sigma_{ij}$ denote the highest differentiation order of variable $x_j$ effectively appearing in equation $f_i$, or $-\infty$ if $x_j$ does not appear in $f_i$. The leading variables of $F$ are the variables in the set
$$\left\{ x_j^{(d_j)} \;\middle|\; d_j = \max_i \sigma_{ij},\ 1 \leq j \leq m \right\}.$$
The state variables of $F$ are the variables in the set
$$\left\{ x_j^{(\nu)} \;\middle|\; 0 \leq \nu < d_j,\ 1 \leq j \leq m \right\}.$$
A leading variable $x_j^{(d_j)}$ is said to be algebraic if $d_j = 0$ (in which case, neither $x_j$ nor any of its derivatives are state variables). In the sequel, $\mathrm{lead}(F)$ and $\mathrm{state}(F)$ denote the leading and state variables of $F$, respectively.
DAE are a strict generalization of ordinary differential equations (ODE), in the sense that it may not be immediate to rewrite a DAE as an explicit ODE of the form $x' = G(t, x)$. The reason is that this transformation relies on the Implicit Function Theorem, requiring that the Jacobian matrix of $F$ with respect to the leading variables have full rank. This is, in general, not the case for a DAE. Simple examples, like the two-dimensional fixed-length pendulum in Cartesian coordinates [55], exhibit this behaviour.
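For the record, the fixed-length pendulum DAE reads as follows in its standard formulation, with $(x, y)$ the Cartesian coordinates of the pendulum tip, $\lambda$ a multiplier proportional to the rod tension, $g$ the gravity constant and $L$ the length of the pendulum:
$$\left\{ \begin{array}{l} x'' + \lambda x = 0 \\ y'' + \lambda y + g = 0 \\ x^2 + y^2 - L^2 = 0 \end{array} \right.$$
The Jacobian of this system with respect to the candidate leading variables $(x'', y'', \lambda)$ has an identically zero third row, hence is singular everywhere: the constraint equation must be differentiated before the system can be made explicit.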
For a square DAE $F$ of dimension $n$ (i.e., we now assume $m = n$) to be solved in the neighborhood of some point $p$, one needs to find a set of non-negative integers $c_1, \dots, c_n$ such that system
$$F_c = \left( f_1^{(c_1)}, \dots, f_n^{(c_n)} \right)$$
can locally be made explicit, i.e., the Jacobian matrix of $F_c$ with respect to its leading variables, evaluated at $p$, is nonsingular. The smallest possible value of $\max_i c_i$ for a set $\{c_i\}$ that satisfies this property is the differentiation index [32] of $F$, that is, the minimal number of time differentiations of all or part of the equations required to get an ODE.
In practice, the problem of automatically finding a "minimal" solution to this problem quickly becomes intractable. Moreover, the differentiation index may depend on the point at which the system is evaluated. This is why, in lieu of numerical nonsingularity, one is interested in the structural nonsingularity of the Jacobian matrix, i.e., its almost certain nonsingularity when its nonzero entries vary over some neighborhood. In this framework, the structural analysis (SA) of a DAE returns, when successful, values of the $c_i$ that are independent of the point of evaluation.
A renowned method for the SA of DAE is the Pantelides method; however, Pryce's $\Sigma$-method is also introduced in what follows, as it is a crucial tool for our work.
Pantelides method
In 1988, Pantelides proposed what is probably the most well-known SA method for DAE [55]. The leading idea of his work is that the structural representation of a DAE can be condensed into a bipartite graph whose left nodes (resp. right nodes) represent the equations (resp. the variables), and in which an edge exists if and only if the variable occurs in the equation.
By detecting specific subsets of the nodes, called Minimally Structurally Singular (MSS) subsets, the Pantelides method iteratively differentiates part of the equations until a perfect matching between the equations and the leading variables is found. One can easily prove that this is a necessary and sufficient condition for the structural nonsingularity of the system.
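As an illustration of this condition (not of the Pantelides algorithm itself), the existence of such a perfect matching can be checked with a standard augmenting-path bipartite matching. The sketch below, in Python, encodes the adjacency sets of the fixed-length pendulum, with variables numbered $0, 1, 2$ for $x, y, \lambda$:

```python
# Check the structural nonsingularity condition: does a perfect matching
# exist between equations and variables? Standard augmenting-path matching.
def perfect_matching(adj, n_eqs, n_vars):
    """adj[i] = set of variable indices occurring in equation i."""
    match_of_var = {}  # variable -> equation currently matched to it

    def augment(i, seen):
        for j in adj[i]:
            if j in seen:
                continue
            seen.add(j)
            # j is free, or its current equation can be re-matched elsewhere
            if j not in match_of_var or augment(match_of_var[j], seen):
                match_of_var[j] = i
                return True
        return False

    matched = sum(augment(i, set()) for i in range(n_eqs))
    return match_of_var if matched == n_eqs == n_vars else None

# Pendulum in Cartesian coordinates: equations f1, f2, f3 over x, y, lambda.
adj = {0: {0, 2}, 1: {1, 2}, 2: {0, 1}}
print(perfect_matching(adj, 3, 3))  # a perfect matching exists
```

A structurally singular system (e.g., two equations involving only the same single variable) makes the function return `None` instead of a matching.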
The main reason why the Pantelides method is not used in our work is that it cannot efficiently be adapted to multimode DAE (mDAE). As a matter of fact, the adjacency graph of an mDAE has both its nodes and edges parametrized by the subset of modes in which they are active; this, in turn, requires that a parametrized Pantelides method branch every time no mode-independent MSS is found, ultimately resulting, in the worst case, in the enumeration of modes.
Pryce's $\Sigma$-method
Albeit less renowned than the Pantelides method, Pryce's $\Sigma$-method [56] is an efficient SA method for DAE, whose equivalence to the Pantelides method has been proved by its author. This method consists in solving two successive problems, denoted primal and dual, relying on the $\Sigma$-matrix, or signature matrix, of the DAE $F$.
This matrix is given by:
$$\Sigma = \left( \sigma_{ij} \right)_{1 \leq i, j \leq n}$$
where $\sigma_{ij}$ is equal to the greatest integer $k$ such that $x_j^{(k)}$ appears in $f_i$, or $-\infty$ if variable $x_j$ does not appear in $f_i$. It is the adjacency matrix of a weighted bipartite graph, with structure similar to the graph considered in the Pantelides method, but whose edges are weighted by the highest differentiation orders. The $-\infty$ entries denote non-existent edges.
The primal problem consists in finding a maximum-weight perfect matching (MWPM) in the weighted adjacency graph. This is actually an assignment problem, for which several standard algorithms exist, such as the push-relabel algorithm [44] or the Edmonds-Karp algorithm [43], to name only a few. However, none of these algorithms are easily parametrizable, even for applications to mDAE systems with a fixed number of variables.
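For systems as small as the examples considered here, the primal problem can also be sketched by brute force over all permutations (the dedicated algorithms cited above are of course preferable in practice). The signature matrix below is that of the fixed-length pendulum, with variables ordered as $(x, y, \lambda)$:

```python
from itertools import permutations

NINF = float("-inf")

# Signature matrix of the pendulum: rows = equations, columns = (x, y, lambda).
sigma = [
    [2,    NINF, 0],    # f1 = x'' + lambda*x
    [NINF, 2,    0],    # f2 = y'' + lambda*y + g
    [0,    0,    NINF], # f3 = x^2 + y^2 - L^2
]

def mwpm(sigma):
    """Maximum-weight perfect matching by exhaustive search (small n only)."""
    n = len(sigma)
    best, best_weight = None, NINF
    for perm in permutations(range(n)):  # perm[i] = variable matched to equation i
        weight = sum(sigma[i][perm[i]] for i in range(n))
        if weight > best_weight:
            best, best_weight = perm, weight
    return best, best_weight

print(mwpm(sigma))  # weight 2, obtained e.g. by f1 -> x, f2 -> lambda, f3 -> y
```

Note that two MWPM exist here ($f_1 \mapsto x, f_2 \mapsto \lambda, f_3 \mapsto y$ and $f_1 \mapsto \lambda, f_2 \mapsto y, f_3 \mapsto x$), both of weight 2; this is harmless for the dual problem, as recalled below.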
The dual problem consists in finding the component-wise minimal solution $(c, d)$ to a given linear programming problem, defined as the dual of the aforementioned assignment problem. This is performed by means of a fixpoint iteration (FPI) that makes use of the MWPM found as a solution to the primal problem, described by the set of tuples $\{ (i, j_i) \mid 1 \leq i \leq n \}$:
$$\begin{array}{l}
c_i \leftarrow 0 \text{ for all } i; \\
\textbf{repeat} \\
\quad d_j \leftarrow \max_i \left( \sigma_{ij} + c_i \right) \text{ for all } j; \\
\quad c_i \leftarrow d_{j_i} - \sigma_{i j_i} \text{ for all } i; \\
\textbf{until } c \text{ and } d \text{ are unchanged.}
\end{array}$$
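A minimal sketch of this fixpoint iteration in Python, run on the signature matrix of the fixed-length pendulum with the MWPM $f_1 \mapsto x$, $f_2 \mapsto \lambda$, $f_3 \mapsto y$:

```python
NINF = float("-inf")

# Signature matrix of the pendulum, variables ordered as (x, y, lambda).
sigma = [
    [2,    NINF, 0],
    [NINF, 2,    0],
    [0,    0,    NINF],
]
matching = [(0, 0), (1, 2), (2, 1)]  # f1 -> x, f2 -> lambda, f3 -> y

def pryce_dual(sigma, matching):
    """Fixpoint iteration for the dual problem, given a MWPM."""
    n = len(sigma)
    c = [0] * n
    while True:
        # d_j <- max_i (sigma_ij + c_i)
        d = [max(sigma[i][j] + c[i] for i in range(n)) for j in range(n)]
        # c_i <- d_{j_i} - sigma_{i j_i} on the matched edges
        new_c = [d[j] - sigma[i][j] for i, j in sorted(matching)]
        if new_c == c:
            return c, d
        c = new_c

print(pryce_dual(sigma, matching))  # c = [0, 0, 2], d = [2, 2, 0]
```

The result $c = (0, 0, 2)$, $d = (2, 2, 0)$ states that the constraint $x^2 + y^2 - L^2 = 0$ must be differentiated twice, and that $x'', y''$ and $\lambda$ are the leading variables, $\lambda$ being algebraic; this agrees with the known index-3 analysis of the pendulum.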
From the results proved by Pryce in [56], it is known that the above algorithm terminates if and only if it is provided with a MWPM, and that the values $(c, d)$ it returns are independent of the choice of the MWPM whenever several such matchings exist. In particular, a direct corollary is that the $\Sigma$-method succeeds as long as a perfect matching can be found between equations and variables.
Another important result is that, if the Pantelides method succeeds for a given DAE $F$, then the $\Sigma$-method also succeeds for $F$, and the values it returns for the $c_i$ are exactly the differentiation indices for the equations that are returned by the Pantelides method. As for the values of the $d_j$, being given by $d_j = \max_i \left( \sigma_{ij} + c_i \right)$, they are the differentiation indices of the leading variables in $F_c$.
Working with this method is natural for our work, since the algorithm for solving the dual problem is easily parametrizable for dealing with multimode systems, as shown in our recent paper [31].
Block triangular decomposition
Once structural analysis has been performed, system $F_c$ can be regarded, for the needs of numerical solving, as an algebraic system with unknowns $x_j^{(d_j)}$, $1 \leq j \leq n$. As such, (inter)dependencies between its equations must be taken into account in order to put it into block triangular form (BTF). Three steps are required:
- the dependency graph of system $F_c$ is generated, by taking into account the perfect matching between equations and unknowns;
- the strongly connected components (SCC) in this graph are determined: these will be the equation blocks that have to be solved;
- the block dependency graph is constructed as the condensation of the dependency graph, from the knowledge of the SCC; a BTF of system $F_c$ can be made explicit from this graph.
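The last two steps can be sketched as follows. The dependency graph used here is hypothetical (four blocks named `f1` to `f4`, where an edge $f \to g$ means that $f$ needs the unknowns solved in $g$), and Tarjan's algorithm is one standard way of computing the SCC; it emits them in reverse topological order of the condensation, i.e., dependencies first, which is precisely a block triangular ordering:

```python
# SCC computation and block ordering with recursive Tarjan
# (fine for small systems; an iterative version avoids deep recursion).
def tarjan_scc(graph):
    """graph: dict node -> list of successors. Returns the list of SCCs,
    emitted in reverse topological order of the condensation."""
    index, low, on_stack = {}, {}, set()
    stack, sccs = [], []
    counter = 0

    def strongconnect(v):
        nonlocal counter
        index[v] = low[v] = counter
        counter += 1
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:     # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop(); on_stack.discard(w); scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# Hypothetical dependency graph: f1 and f2 are mutually dependent,
# f3 depends on the block {f1, f2} and on f4.
deps = {"f1": ["f2"], "f2": ["f1"], "f3": ["f1", "f4"], "f4": []}
print(tarjan_scc(deps))  # [['f2', 'f1'], ['f4'], ['f3']]
```

The output order is a valid solving order for the blocks: the coupled block $\{f_1, f_2\}$ and the block $\{f_4\}$ are solved before $\{f_3\}$, which depends on both.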