The purpose of LiveSPICE is to simulate circuits for audio signals in real time. Simulating a circuit requires solving for the relationship between the current state of the circuit and a new input signal, to produce the new state of the circuit. This post describes the algorithm LiveSPICE uses to evaluate this relationship quickly enough to run in real time. My hope is that this post fills a gap I found in the literature on solving this kind of system. It took me several months to converge on this solution (numerical analysis pun!); maybe this approach is supposed to be obvious, but it certainly was not obvious to me.
Problem statement
To be more specific, the precise problem this post addresses is solving the system of equations produced by analyzing a circuit via modified nodal analysis (MNA) or similar techniques. The result of such an analysis is a differential algebraic equation (DAE) describing the behavior of the circuit, of the following form:
$$ \mathbf{A y}'(t) + \mathbf{B y}(t) + \mathbf{F}(\mathbf{y}(t)) = \mathbf{c}(t) \tag{1} $$
Note that this representation has split the algebraic part of the system into linear $ \mathbf{B y}(t) $
and explicitly non-linear $ \mathbf{F}(\mathbf{y}(t)) $
parts. This allows the system to be statically analyzed further than is possible with a single term representing both linear and non-linear algebraic equations.
Also note that this representation requires that $ \mathbf{A} $
is constant w.r.t. $ t $
, i.e. the differential terms of the system are linear. This is the case when working with the standard circuit components.
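As a concrete illustration (this example is mine, not one produced by LiveSPICE's analysis), consider a simple diode clipper: an input source $ V_{in}(t) $ driving a node with voltage $ v(t) $ through a resistor $ R $, with a capacitor $ C $ and a diode from that node to ground. Kirchhoff's current law at the node gives
$$ C v'(t) + \frac{1}{R} v(t) + I_S\left(e^{v(t)/V_T} - 1\right) = \frac{V_{in}(t)}{R} $$
which is already in the form of (1), with $ \mathbf{A} = [C] $, $ \mathbf{B} = [1/R] $, $ \mathbf{F}(v) = [I_S(e^{v/V_T} - 1)] $, and $ \mathbf{c}(t) = [V_{in}(t)/R] $. A full MNA formulation of the same circuit would also introduce purely algebraic equations (for example, one pinning the source node voltage to $ V_{in}(t) $) whose rows in $ \mathbf{A} $ are all zero, which is why $ \mathbf{A} $ is generally singular, as discussed below.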
The other part of the problem is that we need to find these solutions very quickly. Our overall strategy to make this fast is to break the problem into two pieces: work that can be done once at analysis-time, and work that must be done at run-time for each sample we process. Our goal is to minimize the amount of work done at run-time, even if that means moving expensive work to analysis-time.
Discretizing the system
The goal of a numerical simulation of the system is to determine values for $ \mathbf{y}(t) $
satisfying (1) at some regular sampling of $ t $
. The first step in solving this problem is to eliminate the differential equations. To do this, we apply a numerical integration technique, such as the backward differentiation formulas (LiveSPICE uses BDF2) or the trapezoidal method:
$$ y(t) = y(t-h) + \frac{h}{2}\left(y'(t-h) + y'(t)\right) $$
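Solving this for the current derivative gives the equivalent form
$$ y'(t) = \frac{2}{h}\left(y(t) - y(t-h)\right) - y'(t-h) $$
which is what will actually be substituted into the differential equations below.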
This introduces a relationship between the state of the circuit at the previous timestep $ y(t-h) $
and the current timestep $ y(t) $
. To use this directly, we have to solve (1) for $ \mathbf{y}'(t) $
. It is tempting to simply solve (1):
$$ \mathbf{y}'(t) = -\mathbf{A}^{-1}\left(\mathbf{B y}(t) + \mathbf{F}(\mathbf{y}(t)) - \mathbf{c}(t)\right) \tag{2} $$
However, this solution is not valid, because $ \mathbf{A} $
is not invertible; only some of the equations in (1) are differential. Even worse, the non-zero rows of $ \mathbf{A} $
(the rows corresponding to differential equations) may not be independent (there may be more differential equations than unknown differentials). To work around this problem, consider this representation of (1):
$$ \left[\begin{array}{ccc}\mathbf{A}&\mathbf{B}&\mathbf{F}(\mathbf{y}(t))\end{array}\right] \left[\begin{array}{c}\mathbf{y}'(t)\\\mathbf{y}(t)\\1\end{array}\right]=\mathbf{c}(t) $$
Row reduction of the system above results in a new system of the form:
$$ \left[\begin{array}{ccc}\mathbf{A}_1&\mathbf{B}_1&\mathbf{F}_1(\mathbf{y}(t))\\\mathbf{0}&\mathbf{B}_2&\mathbf{F}_2(\mathbf{y}(t))\end{array}\right] \left[\begin{array}{c}\mathbf{y}'(t)\\\mathbf{y}(t)\\1\end{array}\right]=\left[\begin{array}{c}\mathbf{c}_1(t) \\\ \mathbf{c}_2(t)\end{array}\right] $$
Note that in the above, the number of equations remains the same; we've just split them into two groups. The first group gives us an ordinary differential equation:
$$ \mathbf{A}_1\mathbf{y}'(t)+\mathbf{B}_1\mathbf{y}(t)+\mathbf{F}_1(\mathbf{y}(t))=\mathbf{c}_1(t) $$
where $ \mathbf{A}_1 $
has full row rank, meaning we can solve for $ \mathbf{y}'(t) $
similarly to (2), and apply the trapezoidal rule (or another numerical integration scheme). We also get an algebraic equation from the second group:
$$ \mathbf{B}_2\mathbf{y}(t)+\mathbf{F}_2(\mathbf{y}(t))=\mathbf{c}_2(t) $$
Combining the above algebraic equations with the equations resulting from numerically integrating $ \mathbf{y}'(t) $
yields a fully determined set of algebraic equations that can be solved for $ \mathbf{y}(t) $
.
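To spell out that last step for the trapezoidal rule: substituting the rearranged rule $ y'(t) = \frac{2}{h}(y(t) - y(t-h)) - y'(t-h) $ into the first group gives
$$ \left(\frac{2}{h}\mathbf{A}_1 + \mathbf{B}_1\right)\mathbf{y}(t) + \mathbf{F}_1(\mathbf{y}(t)) = \mathbf{c}_1(t) + \mathbf{A}_1\left(\frac{2}{h}\mathbf{y}(t-h) + \mathbf{y}'(t-h)\right) $$
where everything on the right-hand side is known from the previous timestep. Stacking these rows with $ \mathbf{B}_2\mathbf{y}(t) + \mathbf{F}_2(\mathbf{y}(t)) = \mathbf{c}_2(t) $ gives the fully determined algebraic system.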
Solving the system
At this point, we have a fully algebraic system of potentially non-linear equations, rewritten from the above two sets of equations, with the known quantities (the input $ \mathbf{c}(t) $ and the state at the previous timestep) absorbed as constants:
$$ \mathbf{Dy}(t)+\mathbf{H}(\mathbf{y}(t))=\mathbf{0} \tag{3} $$
to which we can apply a non-linear solver such as Newton’s method. Begin the iteration with initial guess $ \mathbf{y}_0=\mathbf{y}(t-h) $
(the state of the circuit at the previous timestep), and iterate using $ \mathbf{y}_{n+1}=\mathbf{y}_n+\Delta\mathbf{y}_n $
until convergence, where $ \Delta\mathbf{y}_n $
is found by solving
$$ \mathbf{J} \Delta\mathbf{y}_n + \mathbf{D}\mathbf{y}_n+\mathbf{H}(\mathbf{y}_n)=\mathbf{0} $$
where $ \mathbf{J} $
is the Jacobian of the system in (3). For particular values of $ \mathbf{y}_n $
, this is a well-defined system of linear equations that can easily be solved with any of the standard methods.
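As a rough sketch of this iteration (written here in Python with NumPy, not the compiled code LiveSPICE actually generates; `H` and `jac_H` are hypothetical callables standing in for the non-linear part of (3) and its Jacobian):

```python
import numpy as np

def newton_solve(D, H, jac_H, y_prev, tol=1e-9, max_iter=50):
    """Solve D @ y + H(y) = 0 by Newton's method.

    D:      constant matrix of the linear terms in (3)
    H:      callable returning the non-linear (and constant) terms
    jac_H:  callable returning dH/dy evaluated at y
    y_prev: circuit state at the previous timestep, used as the initial guess
    """
    y = y_prev.copy()
    for _ in range(max_iter):
        # Jacobian of the full system (3): J = D + dH/dy.
        J = D + jac_H(y)
        # Solve J * dy = -(D y + H(y)) for the Newton step dy.
        dy = np.linalg.solve(J, -(D @ y + H(y)))
        y = y + dy
        if np.linalg.norm(dy) < tol:
            break
    return y
```

In a real-time setting the iteration count has to be bounded, so in practice `max_iter` is kept small and the last iterate is accepted even if the tolerance is not met.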
Optimizing Newton’s method for real time performance
To run the simulation in real time for non-trivial circuits requires significant optimization of the Newton's method iteration described in the previous section. The most effective way to reduce the cost of Newton's method is to reduce the dimensionality of the problem. Our goal is to solve as much of the system as possible at analysis-time, to minimize the dimensionality of the system that must be solved at run-time during simulation.
To this end, observe that most of the equations in a system derived from a typical circuit are linear, and this is evident symbolically at analysis-time. We can rearrange the system so that the linear solutions are found at analysis-time (as symbolic expressions of the non-linear solutions), leaving a lower rank system to be solved during simulation. To do this, rearrange the columns of the Jacobian matrix $ \mathbf{J} $
(and the corresponding rows of $ \mathbf{y}(t) $
) such that the entries of $ \mathbf{J} $
that contain non-linear terms appear in the last columns (note that $ \mathbf{A} $ and $ \mathbf{B} $ below denote the constant and non-linear column blocks of $ \mathbf{J} $, not the matrices from (1)):
$$ \mathbf{J}=\left[\begin{array}{cc}\mathbf{A}&\mathbf{B}(\mathbf{y}(t))\end{array}\right] $$
Row reduction gives:
$$ \mathbf{J}_{opt}=\left[\begin{array}{cc}\mathbf{A}_1&\mathbf{B}_1(\mathbf{y}(t))\\\mathbf{0}&\mathbf{B}_2(\mathbf{y}(t))\end{array}\right] $$
To restate the observation more precisely, note that only the solutions for $ \Delta\mathbf{y}_n $
described by $ \mathbf{B}_2 $
need to be solved for numerically during simulation. $ \mathbf{A}_1 $
can be reduced to upper triangular form at analysis-time, so the corresponding solutions for $ \Delta\mathbf{y}_n $
can be directly solved for in terms of the non-linear solutions via back-substitution.
This optimization works by moving variables from an iterative solver with steps that are $ O(n^3) $
(Newton’s method) to a closed-form solution that is $ O(n^2) $
(back-substitution).
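As a numeric sketch of what a single Newton step looks like after this rearrangement (LiveSPICE actually does the back-substitution symbolically at analysis-time and compiles the result; this NumPy/SciPy version only illustrates the structure, with the block names taken from $ \mathbf{J}_{opt} $ above and `r1`, `r2` denoting the correspondingly row-reduced right-hand side of the Newton step):

```python
import numpy as np
from scipy.linalg import solve_triangular

def newton_step_partitioned(A1, B1, B2, r1, r2):
    """One Newton step on the row-reduced system

        [ A1  B1 ] [ dy_lin ]   [ r1 ]
        [ 0   B2 ] [ dy_non ] = [ r2 ]

    A1 is constant and already upper triangular (reduced once at
    analysis-time); B1, B2, r1 and r2 depend on the current iterate.
    Only the small B2 block needs a full numerical solve per step.
    """
    # Solve the small non-linear block numerically: O(m^3) for m non-linear variables.
    dy_non = np.linalg.solve(B2, r2)
    # Back-substitute through the triangular A1 block: O(n^2) for the linear variables.
    dy_lin = solve_triangular(A1, r1 - B1 @ dy_non)
    return np.concatenate([dy_lin, dy_non])
```

The payoff is exactly the complexity shift described above: the expensive dense solve now only involves the non-linear variables, which are usually a small fraction of the system.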
To quickly illustrate the potential of this optimization, I observed a 33x speedup on a particular amplifier circuit, with practically identical numerical results. On my machine, this brings the performance from well below real time to nearly 7 times faster than real time. In other words, without this optimization, the simulation could not be run in real time.
Numerical stability
The naive, run-time-only approach to evaluating Newton's method can use numerically robust techniques. For example, if Gaussian elimination is used to solve each iteration step, partial or full pivoting can be used. We cannot be as careful when rearranging the system of equations during analysis for performance, because selecting the largest pivots and selecting pivots corresponding to linear variables are competing goals. In addition, the candidate pivots are often symbolic expressions whose magnitudes we cannot compare. As a result, applying the rank reduction optimization to the Newton's method system at analysis-time may cause Gaussian elimination to use a pivot that would not have been the maximum pivot found at run-time, resulting in a less numerically stable solution. While this is a concern, I have not yet found a test circuit that produces a numerically unstable simulation when it should be stable. Unfortunately, I don't have much more to say about this issue.
Conclusion
This is a relatively simple technique that works well for general circuits containing the usual set of components, and produces highly efficient numerical solutions that can be run in real time at audio sampling rates. It is probably also useful for other system simulation problems. I hope someone out there found this useful! Please don't hesitate to let me know if you find a mistake or if something is unclear. The implementation of this method in LiveSPICE can be found in TransientSolution.cs.