# Newton-Raphson Power Flow

The Newton-Raphson algorithm is the most commonly used iterative method to solve the power flow problem.

## Derivation of the Newton-Raphson Method using Taylor Series

A nonlinear function that is differentiable can be approximated by a polynomial using a Taylor series. Let f(x) be a function of a single variable x. It is approximated about a point $x_0$ using a Taylor series as follows:

$f(x) \approx f(x_0) + (x - x_0) \frac{\partial f(x_0)}{\partial x} + \frac{1}{2}(x - x_0)^{2} \frac{\partial^{2} f(x_0)}{\partial x^{2}} + \cdots \,$ ... Equ. (1)

The function f(x) is said to be linearly approximated when only the first two terms are retained; if the second-order derivative term is also included, the approximation is said to be quadratic.

Figure 1. Comparison of the original function and its linear approximation

Example 1: Determine the linear approximation of the following function at a given point using a Taylor series. Plot the original function and its linear approximation over the interval $-1 \leq x \leq 6$ and compare the result.

$f(x) = 4x^{2} + 3 \,$, $x_0 = 4 \,$

The function and its derivative at a given point are as follows:

$f(x_0) = 67 \,$
$\frac{\partial f(x_0)}{\partial x} = 8x_0 = 32 \,$

The linear Taylor series approximation is:

$f(x) \approx 67 + (x - 4) \times 32 \approx 32x - 61 \,$

In Figure 1, the original function and its linear approximation are plotted. It can be seen that the approximation is good near $x_0$; however, as we move away from the approximation point, the original function and the linear approximation begin to diverge. The result can be improved by adding higher-order derivative terms. Try formulating the quadratic approximation and comparing it with the original function.
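The comparison in Example 1 can be reproduced with a short script. This is a minimal sketch; the sample points are arbitrary:

```python
# f(x) = 4x^2 + 3 and its linear Taylor approximation about x0 = 4,
# i.e. f(x) ~ f(x0) + (x - x0) f'(x0) = 32x - 61.
def f(x):
    return 4 * x**2 + 3

def f_linear(x, x0=4.0):
    return f(x0) + (x - x0) * 8 * x0   # f'(x) = 8x

# The approximation is exact at x0 and degrades away from it.
for x in [-1.0, 2.0, 4.0, 6.0]:
    print(f"x = {x:5.1f}   f(x) = {f(x):7.2f}   linear = {f_linear(x):7.2f}")
```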

In the above example, we learned how to approximate a non-linear function and evaluate it near a given point. The same linear Taylor approximation can also be used in reverse: to solve a non-linear equation, i.e. to determine the value of x at which f(x) takes a given value. In Equation 1, if we neglect the second- and higher-order derivative terms, the equation can be written as:

$f(x) = f(x_0) + (x - x_0) \frac{\partial f(x_0)}{\partial x} \,$
$f(x) - f(x_0) = (x - x_0) \frac{\partial f(x_0)}{\partial x} \,$
$x = x_0 + \left[ \frac{\partial f(x_0)}{\partial x} \right]^{-1} \left[ f(x) - f(x_0) \right] \,$ ... Equ. (2)

We have seen from Example 1 that the linear approximation is accurate only near $x_0$; similarly, the solution x cannot be estimated correctly if the initial guess $x_0$ is far from the actual solution. Therefore, in order to estimate the result close to the actual solution, after estimating x from the initial guess $x_0$, the function is linearised again about the new estimate and the solution is estimated again. This process is repeated until the difference between the current and previous estimates is lower than a predefined tolerance. This iterative scheme is called the Newton-Raphson method, and Equation 3 is commonly known as the Newton-Raphson equation.

$x_{k+1} = x_{k} + \left[ \frac{\partial f(x_k)}{\partial x} \right]^{-1} \left[ f(x) - f(x_k) \right] \,$ ... Equ. (3)

Here, k = 0, 1, 2, … is the iteration number.

Example 2: Determine the value x at which f(x) = 103 using Newton-Raphson method. Use the same function defined in Example 1 with initial guess of $x_0$ = -1.

The function and its first derivative are as follows:

$f(x) = 4x^{2} + 3 = 103 \,$
$\frac{\partial f(x)}{\partial x} = 8x \,$

Thus, the Newton-Raphson equation becomes:

$x_{k+1} = x_{k} + \frac{103 - 4x_{k}^{2} - 3}{8 x_{k}} \,$

Starting from k = 0, we estimate the solution $x_{k+1}$ at each iteration until the error term $|x_{k+1} - x_k| \lt 0.00001 \,$.

The initial guess and the first seven iterates are as follows:

$x_0 = -1 \,$
$x_1 = -13 \,$
$x_2 = -7.46154 \,$
$x_3 = -5.40603 \,$
$x_4 = -5.01525 \,$
$x_5 = -5.00002 \,$
$x_6 = -5.00000 \,$
$x_7 = -5.00000 \,$

It is noticeable that the iterates have settled at x = -5.00 by iteration k = 6, and the difference between successive iterates then falls below the tolerance, so the iteration stops. The solution can be verified by substituting x = -5.00 back into the function: $f(-5) = 4(25) + 3 = 103$.
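Example 2 can be sketched in a few lines of code; the function and variable names below are illustrative:

```python
# Newton-Raphson iteration (Equation 3) for f(x) = 4x^2 + 3 = 103.
def newton_target(x0, target=103.0, tol=1e-5, max_iter=50):
    x = x0
    for k in range(max_iter):
        f = 4 * x**2 + 3               # current function value
        dfdx = 8 * x                   # first derivative
        x_next = x + (target - f) / dfdx
        if abs(x_next - x) < tol:      # stop when successive iterates agree
            return x_next, k + 1
        x = x_next
    return x, max_iter

x, iters = newton_target(-1.0)
print(x, iters)   # converges to -5.0
```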

Generally, non-linear equations have more than one solution, but the Newton-Raphson method provides only the one solution that is close to the starting point. In order to find another solution, the initial value must be changed to a different value.

In the same example, if we change the initial value to $x_0 = 1.00$, the solution will converge in the same number of iterations but from the opposite direction, yielding the second solution, i.e. x = 5.0. The Newton-Raphson method converges quickly when the starting point is close to the solution. However, in some circumstances, the method is known to diverge or to provide no indication of whether a solution exists. In the example above, if we instead set f(x) = 0, the Newton-Raphson method will not converge, since $4x^{2} + 3 = 0$ has no real solution.

Traditionally, the Newton-Raphson equation is formulated to find the root of a function, i.e. solving for x when $f(x) = 0 \,$. Hence the Newton-Raphson iteration in Equation (3) reduces to:

$x_{k+1} = x_{k} - \left[ \frac{\partial f(x_k)}{\partial x} \right]^{-1} f(x_k) \,$ ... Equ. (4)
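The root-finding form in Equation (4) can be sketched as a generic routine, demonstrated here on $f(x) = x^{2} - 2$ (an arbitrary test function, not from the text):

```python
# Classic root-finding Newton-Raphson (Equation 4): solve f(x) = 0.
def newton_root(f, dfdx, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / dfdx(x)
        x -= step                     # x_{k+1} = x_k - f(x_k)/f'(x_k)
        if abs(step) < tol:
            break
    return x

# Example: the positive root of x^2 - 2, i.e. sqrt(2).
root = newton_root(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0)
print(root)
```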

## Multi-dimensional Newton-Raphson Method

In the derivation above, the Newton-Raphson method is used to find the root of a single equation $f(x) \,$. In order to simultaneously solve the roots for a set of n equations $\boldsymbol{F}(\boldsymbol{x})$, the Newton-Raphson iteration in Equation (4) can be extended to the following matrix form:

$\boldsymbol{x}_{k+1} = \boldsymbol{x}_{k} - [J]^{-1} \boldsymbol{F}(\boldsymbol{x}_{k}) \,$ ... Equ. (5)

where $\boldsymbol{x}$ is an n x 1 vector of roots to the set of functions $\boldsymbol{F}$

$[J]=\begin{bmatrix} \frac{\partial F_1}{\partial x_1} & \cdots & \frac{\partial F_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial F_n}{\partial x_1} & \cdots & \frac{\partial F_n}{\partial x_n} \end{bmatrix}$ is the n x n Jacobian matrix
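A minimal numerical sketch of Equation (5) on a made-up two-equation system (the test functions and starting point are assumptions for illustration only):

```python
import numpy as np

# Multi-dimensional Newton-Raphson (Equation 5) on a 2x2 test system:
#   F1(x, y) = x^2 + y^2 - 4 = 0
#   F2(x, y) = x - y = 0          (solution near the start: x = y = sqrt(2))
def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x - y])

def jacobian(v):
    x, y = v
    return np.array([[2.0 * x, 2.0 * y],
                     [1.0, -1.0]])

v = np.array([1.0, 1.0])                        # initial guess
for _ in range(20):
    dv = np.linalg.solve(jacobian(v), -F(v))    # solve J dv = -F(x_k)
    v = v + dv
    if np.max(np.abs(dv)) < 1e-10:
        break
print(v)   # both components converge to sqrt(2) ~ 1.41421356
```

Note that the Jacobian is never inverted explicitly; each step solves a linear system instead, which is cheaper and numerically safer.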

## Application to the Power Flow Problem

When applied to the power flow problem, the Newton-Raphson algorithm is used to solve for a vector of bus voltage magnitudes and angles, i.e.

$\boldsymbol{x} = \begin{bmatrix}\boldsymbol{\delta} \\ \boldsymbol{|V|} \end{bmatrix}$

Things to note:

(1) The voltage magnitude and angle at the swing bus are specified (i.e. known a priori) and are therefore not part of the solution vector $\boldsymbol{x}$.
(2) Bus voltage magnitudes at PV buses are specified and therefore not part of $\boldsymbol{|V|}$. Only the bus voltage magnitudes of PQ buses are included.
(3) $\boldsymbol{\delta}$ includes all PQ and PV buses.

Therefore, the vector $\boldsymbol{x}$ is an m x 1 vector with:

$m = 2n_{PQ} + n_{PV} \,$

where $n_{PQ} \,$ is the number of PQ buses

$n_{PV} \,$ is the number of PV buses

Note that buses in the network must be ordered such that the PQ buses are first, followed by the PV buses.
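The ordering convention and the vector size $m = 2n_{PQ} + n_{PV}$ can be illustrated with a hypothetical helper (the bus-type labels "PQ", "PV" and "SW" are assumptions for illustration, not a standard API):

```python
# Hypothetical helper: order buses PQ-first and size the solution vector.
def order_and_size(bus_types):
    pq = [i for i, t in enumerate(bus_types) if t == "PQ"]
    pv = [i for i, t in enumerate(bus_types) if t == "PV"]
    # angles for all PQ and PV buses, magnitudes for PQ buses only
    m = 2 * len(pq) + len(pv)
    return pq + pv, m

# 4-bus example: bus 0 is the swing bus and is excluded from x entirely.
order, m = order_and_size(["SW", "PQ", "PQ", "PV"])
print(order, m)   # [1, 2, 3] 5
```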

### Basic Algorithm

The basic Newton-Raphson iteration is as follows:

$\begin{bmatrix}\boldsymbol{\delta}_{k+1} \\ \boldsymbol{|V|}_{k+1} \end{bmatrix} = \begin{bmatrix}\boldsymbol{\delta}_{k} \\ \boldsymbol{|V|}_{k} \end{bmatrix} +[J]^{-1} \begin{bmatrix}\Delta \boldsymbol{P}_{k} \\ \Delta \boldsymbol{Q}_{k} \end{bmatrix}$

where $\boldsymbol{\delta}_{k}$ is the vector of bus voltage angles at the k-th iteration

$\boldsymbol{|V|}_{k}$ is the vector of bus voltage magnitudes at the k-th iteration
$\Delta \boldsymbol{P}_{k} = \boldsymbol{P}_{spec} - \boldsymbol{P}_{calc}(\boldsymbol{\delta}_{k},\boldsymbol{|V|}_{k})$ is the vector of mismatches between the specified and calculated bus active power injections (with calculated injections computed using bus voltage magnitudes and angles at the k-th iteration)
$\Delta \boldsymbol{Q}_{k} = \boldsymbol{Q}_{spec} - \boldsymbol{Q}_{calc}(\boldsymbol{\delta}_{k},\boldsymbol{|V|}_{k})$ is the vector of mismatches between the specified and calculated bus reactive power injections (with calculated injections computed using bus voltage magnitudes and angles at the k-th iteration)
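The mismatch vectors can be computed directly from the bus admittance matrix. The sketch below uses a toy 2-bus system; the admittance values, specified injections and flat-start voltages are made up for illustration:

```python
import numpy as np

# Complex power injections at every bus: S = V * conj(Ybus @ V)
def calc_injections(Ybus, V):
    S = V * np.conj(Ybus @ V)
    return S.real, S.imag

Ybus = np.array([[ 10 - 30j, -10 + 30j],
                 [-10 + 30j,  10 - 30j]])     # a single line between 2 buses
V = np.array([1.0 + 0j, 1.0 + 0j])            # flat start (1 pu, 0 deg)
P_spec = np.array([0.0, -0.5])                # e.g. 0.5 pu load at bus 2
Q_spec = np.array([0.0, -0.2])

P_calc, Q_calc = calc_injections(Ybus, V)
dP = P_spec - P_calc                          # active power mismatch
dQ = Q_spec - Q_calc                          # reactive power mismatch
print(dP, dQ)   # at flat start the calculated injections here are zero
```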

### Jacobian Matrix

By convention, the Jacobian matrix is set up as a partitioned matrix of the form:

$[J] = \left[ \begin{matrix} \frac{\partial \boldsymbol{P}}{\partial \boldsymbol{\delta}} & | & \frac{\partial \boldsymbol{P}}{\partial \boldsymbol{V}} \\ ---- & | & ---- \\ \frac{\partial \boldsymbol{Q}}{\partial \boldsymbol{\delta}} & | & \frac{\partial \boldsymbol{Q}}{\partial \boldsymbol{V}} \end{matrix} \right] \,$

Where each term is a sub-matrix of the form (using $\frac{\partial \boldsymbol{P}}{\partial \boldsymbol{\delta}}$ as an example):

$\frac{\partial \boldsymbol{P}}{\partial \boldsymbol{\delta}} = \left[ \begin{matrix} \frac{\partial P_{1}}{\partial \delta_{1}} & \frac{\partial P_{1}}{\partial \delta_{2}} & \cdots & \frac{\partial P_{1}}{\partial \delta_{n}} \\ \frac{\partial P_{2}}{\partial \delta_{1}} & \ddots & & \vdots \\ \vdots & & \ddots & \vdots \\ \frac{\partial P_{m}}{\partial \delta_{1}} & \cdots & \cdots & \frac{\partial P_{m}}{\partial \delta_{n}} \end{matrix} \right] \,$

The terms in the sub-matrices are real numbers calculated by the following expressions, where the bus admittance matrix is expressed in rectangular form ($Y_{ik} = G_{ik} + j B_{ik}$) and the bus voltages in polar form ($V_i = |V_i| \angle \delta_i$):

(i) Sub-matrix $\frac{\partial \boldsymbol{P}}{\partial \boldsymbol{\delta}}$:

$\frac{\partial P_{i}}{\partial \delta_{i}} = \sum_{k=1, k \neq i}^{n} |V_{i}| |V_{k}| \left[ B_{ik} \cos(\delta_{i} - \delta_{k}) - G_{ik} \sin(\delta_{i} - \delta_{k}) \right] = -Q_{i} - B_{ii} V_{i}^{2}$
$\frac{\partial P_{i}}{\partial \delta_{k}} = |V_{i}| |V_{k}| \left[ G_{ik} \sin(\delta_{i} - \delta_{k}) - B_{ik} \cos(\delta_{i} - \delta_{k}) \right]$

(ii) Sub-matrix $\frac{\partial \boldsymbol{Q}}{\partial \boldsymbol{\delta}}$:

$\frac{\partial Q_{i}}{\partial \delta_{i}} = \sum_{k=1, k \neq i}^{n} |V_{i}| |V_{k}| \left[ G_{ik} \cos(\delta_{i} - \delta_{k}) + B_{ik} \sin(\delta_{i} - \delta_{k}) \right] = P_{i} - G_{ii} V_{i}^{2}$
$\frac{\partial Q_{i}}{\partial \delta_{k}} = - |V_{i}| |V_{k}| \left[ G_{ik} \cos(\delta_{i} - \delta_{k}) + B_{ik} \sin(\delta_{i} - \delta_{k}) \right]$

(iii) Sub-matrix $\frac{\partial \boldsymbol{P}}{\partial \boldsymbol{|V|}}$:

$V_{i} \frac{\partial P_{i}}{\partial |V_{i}|} = \sum_{k=1, k \neq i}^{n} |V_{i}| |V_{k}| \left[ G_{ik} \cos(\delta_{i} - \delta_{k}) + B_{ik} \sin(\delta_{i} - \delta_{k}) \right] + 2 G_{ii} |V_{i}|^{2} = P_{i} + G_{ii} V_{i}^{2}$
$V_{k} \frac{\partial P_{i}}{\partial |V_{k}|} = |V_{i}| |V_{k}| \left[ G_{ik} \cos(\delta_{i} - \delta_{k}) + B_{ik} \sin(\delta_{i} - \delta_{k}) \right]$

(iv) Sub-matrix $\frac{\partial \boldsymbol{Q}}{\partial \boldsymbol{|V|}}$:

$V_{i} \frac{\partial Q_{i}}{\partial |V_{i}|} = \sum_{k=1, k \neq i}^{n} |V_{i}| |V_{k}| \left[ G_{ik} \sin(\delta_{i} - \delta_{k}) - B_{ik} \cos(\delta_{i} - \delta_{k}) \right] - 2 B_{ii} |V_{i}|^{2} = Q_{i} - B_{ii} V_{i}^{2}$
$V_{k} \frac{\partial Q_{i}}{\partial |V_{k}|} = |V_{i}| |V_{k}| \left[ G_{ik} \sin(\delta_{i} - \delta_{k}) - B_{ik} \cos(\delta_{i} - \delta_{k}) \right]$
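The expressions above are easy to get wrong by a sign, so it is worth verifying them numerically. The sketch below builds the $\frac{\partial \boldsymbol{P}}{\partial \boldsymbol{\delta}}$ sub-matrix from randomly generated test data (not a real network) and checks it against finite differences:

```python
import numpy as np

# Arbitrary symmetric test data standing in for G, B, |V| and delta.
rng = np.random.default_rng(0)
n = 3
G = rng.normal(size=(n, n)); G = (G + G.T) / 2
B = rng.normal(size=(n, n)); B = (B + B.T) / 2
V = 1.0 + 0.1 * rng.random(n)
delta = 0.1 * rng.normal(size=n)

def P(delta):
    # P_i = sum_k |Vi||Vk| (G_ik cos(d_ik) + B_ik sin(d_ik))
    d = delta[:, None] - delta[None, :]
    return (V[:, None] * V[None, :] * (G * np.cos(d) + B * np.sin(d))).sum(axis=1)

def dP_ddelta(delta):
    d = delta[:, None] - delta[None, :]
    # Off-diagonal: |Vi||Vk| (G_ik sin(d_ik) - B_ik cos(d_ik))
    J = V[:, None] * V[None, :] * (G * np.sin(d) - B * np.cos(d))
    np.fill_diagonal(J, 0.0)
    # Diagonal: sum over k != i of |Vi||Vk| (B cos - G sin) = -row sum
    np.fill_diagonal(J, -J.sum(axis=1))
    return J

J = dP_ddelta(delta)
eps = 1e-6
for k in range(n):
    dd = delta.copy(); dd[k] += eps
    assert np.allclose(J[:, k], (P(dd) - P(delta)) / eps, atol=1e-5)
print("analytic dP/ddelta matches finite differences")
```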

### Ill-conditioning

The power flow problem is said to be ill-conditioned if the Jacobian matrix is ill-conditioned. This is because in the Newton-Raphson algorithm, each iteration has the following linear form:

$[J] \Delta \boldsymbol{x} = \Delta \boldsymbol{S} \,$

where $[J] \,$ is the power flow Jacobian matrix

$\Delta \boldsymbol{x} = \begin{bmatrix}\boldsymbol{\delta}_{k+1} \\ \boldsymbol{|V|}_{k+1} \end{bmatrix} - \begin{bmatrix}\boldsymbol{\delta}_{k} \\ \boldsymbol{|V|}_{k} \end{bmatrix} \,$ is the bus voltage (magnitude and angle) correction vector
$\Delta \boldsymbol{S} = \begin{bmatrix}\Delta \boldsymbol{P}_{k} \\ \Delta \boldsymbol{Q}_{k} \end{bmatrix} \,$ is the active and reactive power mismatch vector

Therefore, if the Jacobian matrix is ill-conditioned, the solution to the power flow iteration can become wildly unstable or divergent.

The most common characteristics that lead to ill-conditioned power flow problems are as follows:

• Heavily loaded power system (i.e. voltage stability problem where system has reached nose point of PV curve)
• Lines with high R/X ratios
• Large system with many radial lines
• Poor selection of the slack bus (e.g. in a weakly supported part of the network)
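Ill-conditioning can be quantified by the condition number of the Jacobian; a large value means the linearised power flow step amplifies small errors in the mismatches. The matrices below are illustrative, not from a real network:

```python
import numpy as np

J_good = np.array([[10.0, -9.0], [-9.0, 10.0]])
J_bad  = np.array([[10.0, -9.999], [-9.999, 10.0]])   # nearly singular

print(np.linalg.cond(J_good))   # modest condition number (~19)
print(np.linalg.cond(J_bad))    # very large: ill-conditioned
```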

### Numerical Techniques to Improve Computation Time

Each iteration of the Newton-Raphson algorithm requires basic matrix operations to be performed, i.e. matrix arithmetic, multiplication and inversion. For large networks, the use of efficient numerical techniques can significantly improve computation times:

• Sparsity techniques: in most practical networks, the bus admittance matrix is sparse, and the Jacobian matrix has the same level of sparsity. The use of sparse matrix techniques reduces the memory storage requirements of an $n \times n \,$ admittance matrix from $n^{2} \,$ entries to $3n + 6b \,$, where b is the number of branches in the network.
Suppose a network has on average 2 branches for every bus; then the storage requirement for such a network is $15n \,$ entries. For a 200-bus network, this is $n^{2} = 40,000 \,$ entries for the full matrix versus $15n = 3,000 \,$ entries for the sparse representation.
Efficient methods for solving linear equations involving sparse matrices also exist, which further improve computation times.
• LU Decomposition: sometimes called triangular factorisation or LU factorisation, LU decomposition is an efficient technique for inverting the Jacobian matrix by factoring the Jacobian matrix into lower (L) and upper (U) triangular matrices and then solving the equations via forward/backward substitutions. This technique reduces the number of full matrix multiplication operations when compared to conventional matrix inversion techniques.
• Optimal ordering: the rows of the Jacobian matrix can be ordered so as to minimise the number of non-zero entries in the upper triangular matrix during LU decomposition. The ordering only needs to be done once and is generally set up so that the rows with the least number of branches are worked on first during the triangular factorisation process.
• Less frequent update of the Jacobian matrix: in the default formulation, the power flow Jacobian matrix is recalculated at every iteration. However, as the algorithm gets close to the solution, the Jacobian matrix does not change significantly, and computation time can be saved by updating it less frequently, e.g. every second or third iteration.
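To illustrate the LU decomposition idea, here is a minimal dense Doolittle factorisation with forward/backward substitution. It is a sketch only: production solvers add pivoting and exploit sparsity, and the 2 x 2 matrix and right-hand side are made up:

```python
import numpy as np

def lu_decompose(A):
    # Doolittle factorisation A = L U, with unit diagonal on L (no pivoting).
    n = len(A)
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                 # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):     # backward substitution: U x = y
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

J = np.array([[4.0, 1.0], [1.0, 3.0]])
rhs = np.array([1.0, 2.0])
L, U = lu_decompose(J)
dx = lu_solve(L, U, rhs)
print(dx)   # same answer as np.linalg.solve(J, rhs)
```

Once factored, the same L and U can be reused for several right-hand sides, which is exactly what makes less frequent Jacobian updates attractive.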

### Techniques to Improve Convergence

The Newton-Raphson algorithm can converge to an unintended solution, or diverge, within a few iterations, depending on factors such as the level of ill-conditioning, the quality of the initial estimates, etc. Some techniques for improving convergence are listed below:

• Damping factors: the step length of each iteration can be controlled by a damping factor $h_n \,$ (with $0 \lt h_n \leq 1$), which limits the size of $\Delta \boldsymbol{\delta} \,$ and $\Delta \boldsymbol{|V|} \,$ at each iteration, i.e.
$\Delta \boldsymbol{x} = h_{n} [J]^{-1} \Delta \boldsymbol{S} \,$
• Use of previous solutions: by default, a "flat start" estimate is used for the initial bus voltages (i.e. 1 + j0 pu). This may be far from the actual solution and may cause the Newton-Raphson algorithm to diverge. In network augmentations and what-if scenarios, a successful solution often already exists for a base network and convergence difficulties arise for the expanded system. In such cases, the previously solved bus voltages could be used as initial estimates in lieu of a flat start.
• Use of DC load flow solution: the DC load flow is a direct linear method (i.e. non-iterative) for estimating active power flows in a network by making several simplifying assumptions (e.g. all voltage magnitudes are 1pu). Since the DC load flow always solves (provided the susceptance matrix used in the linearisation is invertible), the solution can be used as the initial estimates for the voltage angles in the Newton-Raphson algorithm.
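The effect of a damping factor can be sketched on the scalar Example 2. The value $h = 0.5$ below is an arbitrary assumption; in practice the damping factor is often chosen adaptively:

```python
# Damped Newton-Raphson step for f(x) = 4x^2 + 3 = 103: the correction is
# scaled by h to limit the step length and avoid overshooting.
def damped_newton(x0, h, target=103.0, tol=1e-8, max_iter=200):
    x = x0
    for k in range(max_iter):
        step = (target - (4 * x**2 + 3)) / (8 * x)
        x += h * step
        if abs(step) < tol:
            return x, k + 1
    return x, max_iter

x_full, n_full = damped_newton(-1.0, h=1.0)   # undamped: large first step
x_damp, n_damp = damped_newton(-1.0, h=0.5)   # damped: smaller, safer steps
print(x_full, x_damp)   # both converge to -5.0
```

Damping trades convergence speed (quadratic becomes roughly linear) for robustness when the starting point is poor.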

### Characteristics of the Newton-Raphson Algorithm

The Newton-Raphson algorithm is without doubt the most widely used method for solving power flows because of some key favourable characteristics:

• Convergence properties and accuracy: the Newton-Raphson algorithm exhibits quadratic convergence, leading to highly accurate solutions for most practical systems within 5 iterations.
• Speed: the numerical techniques (sparsity, optimal ordering, LU factorisation, etc) have made the Newton-Raphson algorithm computationally efficient and competitive against other algorithms even for large systems.
• Reliability: the algorithm is very reliable and able to solve even heavily loaded systems operating at close to voltage collapse.
• Flexibility: the basic algorithm can be readily extended to cater for tap-changing transformers, FACTS and other reactive support devices, DC systems, phase shifting transformers, etc.