

Section: Research Program

Structural Analysis of DAE Systems

The Modelica language is based on Differential Algebraic Equations (DAE). The general form of a DAE is given by:

F(t, x, x', x'', …) = 0    (1)

where F is a system of ne equations {f1, …, fne} and x is a finite list of nv independent, real-valued, sufficiently smooth functions {x1, …, xnv} of the independent variable t. We use x' as a shorthand for the list of first-order time derivatives of the xj, j = 1, …, nv. Higher-order derivatives are defined recursively as usual, and x(k) denotes the list formed by the k-th derivatives of the functions xj. Each fi depends on the scalar t and on some of the functions xj, as well as on a finite number of their derivatives.

Let σi,j denote the highest differentiation order of variable xj effectively appearing in equation fi, or −∞ if xj does not appear in fi. The leading variables of F are the variables in the set

{ xj(σj) | σj = maxi σi,j }

The state variables of F are the variables in the set

{ xj(νj) | 0 ≤ νj < maxi σi,j }

A leading variable xj(σj) is said to be algebraic if σj=0 (in which case, neither xj nor any of its derivatives are state variables). In the sequel, v and u denote the leading and state variables of F, respectively.

DAE are a strict generalization of ordinary differential equations (ODE), in the sense that it may not be immediate to rewrite a DAE as an explicit ODE of the form v=G(u). The reason is that this transformation relies on the Implicit Function Theorem, requiring that the Jacobian matrix Fv have full rank. This is, in general, not the case for a DAE. Simple examples, like the two-dimensional fixed-length pendulum in Cartesian coordinates  [55], exhibit this behaviour.
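
As an illustration, a standard textbook form of this pendulum model (written here with unit mass, length L and gravity g; the exact formulation and sign conventions in [55] may differ) is:

x'' + λ x = 0    (f1)
y'' + λ y − g = 0    (f2)
x² + y² − L² = 0    (f3)

Its leading variables are v = (x'', y'', λ) and its state variables are u = (x, x', y, y'). The Jacobian of F with respect to v has a zero row for f3, which involves none of the leading variables; it is therefore singular, and the system cannot be rewritten as an explicit ODE v = G(u) without differentiating equations.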

For a square DAE of dimension n (i.e., we now assume ne = nv = n) to be solved in the neighborhood of some (v*,u*), one needs to find a set of non-negative integers C = {c1, …, cn} such that system

F(C) = { f1(c1), …, fn(cn) }

can locally be made explicit, i.e., the Jacobian matrix of F(C) with respect to its leading variables, evaluated at (v*,u*), is nonsingular. The smallest possible value of maxi ci for a set C that satisfies this property is the differentiation index  [32] of F, that is, the minimal number of time differentiations of all or part of the equations fi required to get an ODE.
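
Continuing with the illustrative pendulum above, the set C = {0, 0, 2} works: f1 and f2 are kept unchanged, while the constraint f3 is differentiated twice, yielding

f3'' : 2 ((x')² + x x'' + (y')² + y y'') = 0

The Jacobian of F(C) = {f1, f2, f3''} with respect to (x'', y'', λ) is

( 1    0    x )
( 0    1    y )
( 2x   2y   0 )

whose determinant −2(x² + y²) = −2L² is nonzero, so F(C) can locally be made explicit; two differentiations of the constraint suffice here.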

In practice, the problem of automatically finding a "minimal" solution C to this problem quickly becomes intractable. Moreover, the differentiation index may depend on the value of (v*,u*). This is why, in lieu of numerical nonsingularity, one is interested in the structural nonsingularity of the Jacobian matrix, i.e., its almost certain nonsingularity when its nonzero entries vary over some neighborhood. In this framework, the structural analysis (SA) of a DAE returns, when successful, values of the ci that are independent of the particular value of (v*,u*).

A renowned method for the SA of DAE is the Pantelides method; however, Pryce's Σ-method is also introduced in what follows, as it is a crucial tool for our work.

Pantelides method

In 1988, Pantelides proposed what is probably the most well-known SA method for DAE  [55]. The leading idea of his work is that the structural representation of a DAE can be condensed into a bipartite graph whose left nodes (resp. right nodes) represent the equations (resp. the variables), and in which an edge connects an equation node to a variable node if and only if the variable occurs in the equation.

By detecting specific subsets of the nodes, called Minimally Structurally Singular (MSS) subsets, the Pantelides method iteratively differentiates part of the equations until a perfect matching between the equations and the leading variables is found. One can easily prove that this is a necessary and sufficient condition for the structural nonsingularity of the system.
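
To fix ideas, the following sketch (hypothetical names, not the implementation used in our tools) encodes such a bipartite structure as variable occurrence sets and tests for a perfect matching with a simple augmenting-path procedure; in the Pantelides method itself, the matching of interest is between equations and leading variables, and equations are differentiated until such a matching exists.

# Occurrence structure of a small DAE: for each equation, the set of variables
# appearing in it (here, the illustrative pendulum: f1 involves x and lam, etc.).
incidence = {
    "f1": {"x", "lam"},
    "f2": {"y", "lam"},
    "f3": {"x", "y"},
}

def perfect_matching(incidence):
    """Return a matching {equation: variable} covering every equation, or None."""
    match_var = {}  # variable -> equation it is currently matched to

    def try_assign(eq, visited):
        # Classical augmenting-path step of Kuhn's matching algorithm.
        for var in incidence[eq]:
            if var in visited:
                continue
            visited.add(var)
            # var is free, or its current equation can be re-matched elsewhere
            if var not in match_var or try_assign(match_var[var], visited):
                match_var[var] = eq
                return True
        return False

    for eq in incidence:
        if not try_assign(eq, set()):
            return None  # structurally singular: this equation cannot be matched
    return {eq: var for var, eq in match_var.items()}

print(perfect_matching(incidence))  # e.g. {'f1': 'lam', 'f2': 'y', 'f3': 'x'}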

The main reason why the Pantelides method is not used in our work is that it cannot efficiently be adapted to multimode DAE (mDAE). As a matter of fact, the adjacency graph of a mDAE has both its nodes and edges parametrized by the subset of modes in which they are active; this, in turn, requires a parametrized Pantelides method to branch every time no mode-independent MSS is found, ultimately resulting, in the worst case, in the enumeration of modes.

Pryce's Σ-method

Albeit less renowned than the Pantelides method, Pryce's Σ-method  [56] is an efficient SA method for DAE, whose equivalence to the Pantelides method has been proved by its author. This method consists in solving two successive problems, denoted by primal and dual, relying on the Σ-matrix, or signature matrix, of the DAE F.

This matrix is given by:

Σ = (σij), 1 ≤ i, j ≤ n    (2)

where σij is equal to the greatest integer k such that xj(k) appears in fi, or −∞ if variable xj does not appear in fi. It is the adjacency matrix of a weighted bipartite graph, with structure similar to the graph considered in the Pantelides method, but whose edges are weighted by the highest differentiation orders. The −∞ entries denote non-existent edges.
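
For the illustrative pendulum above, the signature matrix (rows f1, f2, f3; columns x, y, λ) would read:

Σ = ( 2    −∞    0 )
    ( −∞    2    0 )
    ( 0     0   −∞ )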

The primal problem consists in finding a maximum-weight perfect matching (MWPM) in the weighted adjacency graph. This is actually an assignment problem, for which several standard solution algorithms exist, such as the push-relabel algorithm  [44] or the Edmonds-Karp algorithm  [43], to name only a few. However, none of these algorithms is easily parametrizable, even for applications to mDAE systems with a fixed number of variables.
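
A minimal sketch of this primal problem, using the illustrative pendulum signature matrix above and brute-force enumeration over permutations (a real implementation would rely on one of the assignment algorithms just mentioned):

from itertools import permutations

NEG_INF = float("-inf")

# Signature matrix of the illustrative pendulum: rows f1..f3, columns x, y, lam.
sigma = [
    [2, NEG_INF, 0],
    [NEG_INF, 2, 0],
    [0, 0, NEG_INF],
]

def max_weight_perfect_matching(sigma):
    """Brute-force MWPM: try every assignment of equation i to variable p[i]."""
    n = len(sigma)
    best_weight, best = NEG_INF, None
    for p in permutations(range(n)):
        weight = sum(sigma[i][p[i]] for i in range(n))  # -inf if any edge is absent
        if weight > best_weight:
            best_weight, best = weight, p
    return best_weight, best  # best is None when no perfect matching exists

print(max_weight_perfect_matching(sigma))
# (2, (0, 2, 1)): f1-x, f2-lam, f3-y, one of the two MWPM of weight 2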

The dual problem consists in finding the component-wise minimal solution (C,D) = ({c1, …, cn}, {d1, …, dn}) to a given linear programming problem, defined as the dual of the aforementioned assignment problem. This is performed by means of a fixpoint iteration (FPI) that makes use of the MWPM found as a solution to the primal problem, described by the set of tuples {(i, ji)}, i ∈ {1, …, n} (a sketch in code is given after the steps below):

  1. Initialize {c1, …, cn} to the zero vector.

  2. For every j ∈ {1, …, n},

    dj ← maxi (σij + ci)
  3. For every i ∈ {1, …, n},

    ci ← dji − σi,ji
  4. Repeat Steps 2 and 3 until convergence is reached.
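
A minimal Python sketch of this iteration, reusing the illustrative pendulum signature matrix and one MWPM (the data and names are hypothetical, chosen only for illustration):

NEG_INF = float("-inf")

# Signature matrix of the illustrative pendulum (rows f1..f3, columns x, y, lam),
# together with one MWPM: equation i is matched to variable matching[i].
sigma = [
    [2, NEG_INF, 0],
    [NEG_INF, 2, 0],
    [0, 0, NEG_INF],
]
matching = [0, 2, 1]  # f1-x, f2-lam, f3-y (weight 2)

def dual_fixpoint(sigma, matching):
    """Fixpoint iteration for the dual problem, given a maximum-weight perfect matching."""
    n = len(sigma)
    c = [0] * n  # Step 1
    while True:
        # Step 2: d_j <- max_i (sigma_ij + c_i); absent (-inf) entries are skipped,
        # assuming every variable occurs in at least one equation.
        d = [max(sigma[i][j] + c[i] for i in range(n) if sigma[i][j] != NEG_INF)
             for j in range(n)]
        # Step 3: c_i <- d_{j_i} - sigma_{i, j_i}
        new_c = [d[matching[i]] - sigma[i][matching[i]] for i in range(n)]
        if new_c == c:  # Step 4: stop at convergence
            return c, d
        c = new_c

print(dual_fixpoint(sigma, matching))  # expected: ([0, 0, 2], [2, 2, 0])

On this example the iteration converges to C = {0, 0, 2} and D = {2, 2, 0}: the constraint equation has to be differentiated twice, and x and y eventually appear through their second derivatives.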

From the results proved by Pryce in  [56], it is known that the above algorithm terminates if and only if it is provided with a MWPM, and that the values it returns are independent of the choice of a MWPM whenever several such matchings exist. In particular, a direct corollary is that the Σ-method succeeds as long as a perfect matching can be found between equations and variables.

Another important result is that, if the Pantelides method succeeds for a given DAE F, then the Σ-method also succeeds for F and the values it returns for C are exactly the differentiation indices for the equations that are returned by the Pantelides method. As for the values of the dj, being given by dj=maxi(σij+ci), they are the differentiation indices of the leading variables in F(C).

Working with this method is natural for our work, since the algorithm for solving the dual problem is easily parametrizable for dealing with multimode systems, as shown in our recent paper  [31].

Block triangular decomposition

Once structural analysis has been performed, system F(C) can be regarded, for the purposes of numerical solving, as an algebraic system with unknowns xj(dj), j = 1, …, n. As such, (inter)dependencies between its equations must be taken into account in order to put it into block triangular form (BTF). Three steps are required (a sketch in code follows this list):

  1. the dependency graph of system F(C) is generated, by taking into account the perfect matching between equations fi(ci) and unknowns xj(dj);

  2. the strongly connected components (SCC) in this graph are determined: these will be the equation blocks that have to be solved;

  3. the block dependency graph is constructed as the condensation of the dependency graph, from the knowledge of the SCC; a BTF of system F(C) can be made explicit from this graph.
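
A minimal sketch of these three steps, assuming the dependency structure is already given as a plain adjacency mapping (the names and the example graph are hypothetical):

# Dependency graph of F^(C): an edge e1 -> e2 means that solving equation e1
# requires the value of the unknown matched with equation e2.
# Hypothetical 4-equation example with one cycle {e1, e2}.
deps = {
    "e1": ["e2"],
    "e2": ["e1", "e3"],
    "e3": ["e4"],
    "e4": [],
}

def strongly_connected_components(graph):
    """Tarjan's algorithm: return the list of SCCs of the dependency graph."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs = [], []
    counter = 0

    def visit(v):
        nonlocal counter
        index[v] = lowlink[v] = counter
        counter += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph[v]:
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:  # v is the root of an SCC: pop it off the stack
            scc = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

# Step 2: the SCCs are the equation blocks to be solved.
# Step 3: Tarjan's emission order already lists each block after every block it
# depends on, i.e., it directly yields a block triangular form of the system.
print(strongly_connected_components(deps))  # e.g. [{'e4'}, {'e3'}, {'e1', 'e2'}]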