
Implicit Algorithms

Stability conditions are often expressed in terms of the CFL number, and stability analysis of explicit algorithms usually requires us to keep the CFL number small:

\left| \frac{a \Delta t}{\Delta x} \right| \leq 1

For a given grid, this effectively sets a limit on the time step: it can be no larger than the grid spacing divided by the propagation speed, \Delta t \leq \Delta x / |a| . As an example, say the problem of interest is flow over an airfoil. The device is 1 meter long, and we want a grid resolution such that we can resolve features that are 1 mm in size. With a characteristic speed on the order of the sound speed,

c_s \approx 1000\ \mathrm{m/s}

\Delta t \leq \Delta x / |a| \approx 10^{-6}\ \mathrm{s}

The time step is limited to less than a microsecond! If the system has any appreciable size, we are probably going to need a lot of compute time to simulate any substantial time scale.

If there are boundary layers that need to be resolved, even finer grid resolution is required. The limit on Δt \Delta t comes from the minimum grid resolution, so non-uniform grid spacings can even further limit the possible time step size.
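A rough sketch of the arithmetic (the grid size and speed below are just the assumed numbers from the example above):

```python
import numpy as np

# CFL-limited time step for an explicit scheme: dt <= dx / |a|.
# Assumed values from the example above: a 1 m domain resolved
# to 1 mm, propagation speed ~1000 m/s.
a = 1000.0                          # propagation speed [m/s]
x = np.linspace(0.0, 1.0, 1001)     # uniform grid, dx = 1 mm

# On a non-uniform grid the limit comes from the smallest cell,
# so take the minimum spacing.
dx_min = np.min(np.diff(x))
dt_max = dx_min / abs(a)

print(dt_max)                       # on the order of 1e-6 s
```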

Such strict constraints on the time step can make explicit finite difference methods infeasible. This motivates implicit methods, in which we construct the solution using data that is not yet available

u^{n+1} = f(u^n, u^{n+1})

Implicit FTCS (Backward Euler)

As an example, let's devise an implicit algorithm based on FTCS where we evaluate the spatial difference at time  n+1

\frac{u_j ^{n+1} - u_j ^n}{\Delta t} + a\frac{ u_{j+1} ^{n+1} - u_{j-1} ^{n+1}}{2 \Delta x} = 0

\rightarrow u_j ^{n+1} = u_j ^n - \frac{a \Delta t}{\Delta x} \left( \frac{ u_{j+1} ^{n+1} - u_{j-1} ^{n+1}}{2} \right)

Accuracy: O(\Delta t, \Delta x^2)

von Neumann stability analysis:

\varepsilon_j ^{n+1} = \varepsilon_j ^n - \frac{ a \Delta t}{\Delta x} \left( \frac{ \varepsilon_{j+1} ^{n+1} - \varepsilon_{j-1} ^{n+1} }{2} \right)

\varepsilon_j ^n = V^n e^{i k j \Delta x}

\frac{ V^{n+1}}{V^n} \left[ 1 + \frac{ a \Delta t}{2\Delta x} \left( e^{i k \Delta x} - e^{- i k \Delta x} \right) \right] = 1

\rightarrow \left| \frac{V^{n+1}}{V^n} \right| = G = \sqrt{ \frac{1}{1 + \frac{a^2 \Delta t^2}{\Delta x^2} \sin ^2(k \Delta x)}}
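A quick numerical check of this result (a small sketch; the particular CFL values are arbitrary): evaluate  G  over a range of CFL numbers  a \Delta t / \Delta x  and wavenumbers and confirm it never exceeds 1.

```python
import numpy as np

# Amplification factor of implicit FTCS (backward Euler) for advection:
# G = 1 / sqrt(1 + C^2 sin^2(k dx)),  with C = a dt / dx.
C = np.array([0.1, 1.0, 10.0, 100.0])[:, None]   # CFL numbers, incl. >> 1
kdx = np.linspace(0.0, np.pi, 200)[None, :]      # k dx over one period

G = 1.0 / np.sqrt(1.0 + C**2 * np.sin(kdx)**2)

print(G.max())   # 1.0: no mode is ever amplified, even at CFL = 100
```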

Since  G \leq 1  for every wavenumber regardless of  \Delta t , the backward Euler method is unconditionally stable! But how do we actually solve it?

We can write backward Euler as a matrix equation  \overline{A} \overline{x} = \overline{b}  where

  • \overline{A} = operator matrix
  • \overline{x} = solution vector ( u^{n+1} )
  • \overline{b} = inhomogeneity vector ( u^n )

To get to the matrix expression, rewrite backward Euler by putting all of the terms at future times on the left-hand side (in order of increasing  j  index), and all of the terms at known times on the right-hand side.

- \frac{ a \Delta t}{2 \Delta x} u_{j-1} ^{n+1} + u_j ^{n+1} + \frac{a \Delta t}{2 \Delta x} u^{n+1} _{j+1} = u_j ^n

\overline{A} \overline{x} = \overline{b}

\begin{bmatrix} 1 & \left( \frac{a \Delta t}{2 \Delta x} \right) & 0 & 0 & \ldots & \ldots & \ldots \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \ldots & 0 & - \left( \frac{a \Delta t}{2 \Delta x} \right) & 1 & \left( \frac{a \Delta t}{2 \Delta x} \right) & 0 & \ldots \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ \ldots &\ldots & \ldots & \ldots & 0 & - \left( \frac{a \Delta t}{2 \Delta x} \right) & 1 \end{bmatrix} \begin{bmatrix} u_1 ^{n+1} \\ \ldots \\ u_{j-1} ^{n+1} \\ u_j ^{n+1} \\ u_{j+1} ^{n+1} \\ \ldots \\ u_J ^{n+1} \end{bmatrix} = \begin{bmatrix} u_1 ^{n} \\ \ldots \\ u_{j-1} ^{n} \\ u_j ^{n} \\ u_{j+1} ^{n} \\ \ldots \\ u_J ^{n} \end{bmatrix}

In matrix form, it's easy to see how we can get to  \overline{x}  from  \overline{b}

\rightarrow \overline{x} = \overline{A}^{-1} \overline{b}

In this case the operator matrix is tridiagonal, so the system can be solved efficiently (e.g., with the Thomas algorithm, in  O(J)  operations) without ever forming the full inverse.
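A sketch of how such a step looks in practice, assuming a uniform grid and the simple boundary rows shown in the matrix above. The Thomas algorithm (forward elimination plus back substitution) stands in for the matrix inverse:

```python
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system via the Thomas algorithm."""
    n = len(diag)
    d = diag.astype(float).copy()
    b = rhs.astype(float).copy()
    for i in range(1, n):               # forward elimination
        w = lower[i - 1] / d[i - 1]
        d[i] -= w * upper[i - 1]
        b[i] -= w * b[i - 1]
    sol = np.empty(n)
    sol[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):      # back substitution
        sol[i] = (b[i] - upper[i] * sol[i + 1]) / d[i]
    return sol

def backward_euler_step(u, a, dt, dx):
    """One backward Euler step for u_t + a u_x = 0: solve A u^{n+1} = u^n."""
    J = len(u)
    c = a * dt / (2.0 * dx)
    lower = np.full(J - 1, -c)          # sub-diagonal: -a dt / (2 dx)
    diag = np.ones(J)                   # main diagonal: 1
    upper = np.full(J - 1, c)           # super-diagonal: +a dt / (2 dx)
    return thomas(lower, diag, upper, u)

# Usage: advect a Gaussian one step at CFL = 5, far beyond the
# explicit limit, yet the implicit solve stays well-behaved.
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
u0 = np.exp(-200.0 * (x - 0.5) ** 2)
u1 = backward_euler_step(u0, a=1.0, dt=5 * dx, dx=dx)
```

Libraries such as SciPy provide banded solvers (`scipy.linalg.solve_banded`) that do the same thing with optimized LAPACK routines.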

Theta Algorithm

We can get a more accurate scheme if we blend backward Euler with FTCS using a weighting factor  \Theta

u_j ^{n+1} + \Theta \frac{ a \Delta t}{\Delta x} \left( \frac{ u_{j+1} ^{n+1} - u_{j-1} ^{n+1}}{2} \right) = u_j ^n - (1 - \Theta) \frac{ a \Delta t}{\Delta x} \left( \frac{ u_{j+1}^n - u_{j-1} ^n}{2} \right)

We call this the  \Theta -method. If we write it in terms of our difference operators,

\Delta _t u^n +a (1 - \Theta) \delta _x u^n + a \Theta \delta _x u^{n+1} = 0

As we'll see, we need  \Theta \in [0,1]  for stability.

  • \Theta = 1 : is just Backward Euler
  • \Theta = 0 : we've just got FTCS
  • \Theta = 1/2 : this is called Crank-Nicolson

We find that by using  \Theta \neq 1 , we can improve the performance (condition number of the operator matrix) by making the operator matrix more diagonally dominant. It can also reduce numerical diffusion (possibly making it better for shock capture), with the flip side being an increase in dispersion. It can also increase the temporal accuracy: for  \Theta = 1/2 , the accuracy is  O(\Delta t^2, \Delta x^2) .
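The diffusion/dispersion trade-off can be made concrete with a small sketch (the CFL number below is an arbitrary choice). Repeating the von Neumann analysis for the  \Theta -method gives  G = \frac{1 - i(1-\Theta) C \sin(k \Delta x)}{1 + i \Theta C \sin(k \Delta x)}  with  C = a \Delta t / \Delta x :

```python
import numpy as np

# Theta-method amplification factor for advection:
# G = (1 - i(1-Theta) C sin(k dx)) / (1 + i Theta C sin(k dx)).
C = 2.0                              # CFL number (arbitrary choice)
kdx = np.linspace(0.0, np.pi, 100)
s = C * np.sin(kdx)

def amp(theta):
    return np.abs((1.0 - 1j * (1.0 - theta) * s) / (1.0 + 1j * theta * s))

# Backward Euler (Theta = 1) damps modes: |G| <= 1 everywhere, with
# strong damping of short wavelengths (numerical diffusion).
print(np.all(amp(1.0) <= 1.0))       # True

# Crank-Nicolson (Theta = 1/2) preserves amplitude exactly: |G| = 1
# for every mode, so no numerical diffusion; only phase (dispersion)
# errors remain.
print(np.allclose(amp(0.5), 1.0))    # True
```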