
21.1 Finite Difference Algorithms

Definitions

By definition, finite differencing is a method for approximating a partial differential equation, which we cannot solve directly, with a system of algebraic equations, which we can.

Notation to simplify our representations:

  • Superscripts: We use superscripts to denote steps in the time domain, $t^n = n \Delta t$. Here $n = 0, \ldots, N$ is the step index and $\Delta t = T/N$ is the time step.
  • Subscripts: $x_j = j \Delta x$. Here $j = 0, \ldots, J$ is the step index and $\Delta x = L/J$ is the spatial step.
  • Together: $u_j^n = u(t^n, x_j)$
  • Explicit algorithms: Use data that is already known at the present time to advance the solution to the next time step. They are easier and faster to implement, but they introduce stability constraints.

    u_j^{n+1} = f(u_j^n, u_{j+1}^n, u_{j+2}^n, \ldots)

  • Implicit algorithms: Use data from the next time step when advancing the solution. This leads to a system of equations that must be solved simultaneously (see the sketch after this list).

    u_j^{n+1} = f(u_j^n, u_{j+1}^{n+1}, u_{j-1}^{n+1}, \ldots)
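To make the contrast concrete, here is a minimal Python sketch of a single implicit step (backward Euler in time, upwind in space) for the linear advection model used throughout this section. The function name, the uniform grid, and the fixed inflow value are illustrative assumptions; the point is that the level-$(n+1)$ unknowns are coupled, so each step requires a linear solve (here a lower-bidiagonal system, handled by forward substitution).

```python
import numpy as np

# One implicit (backward Euler, upwind) step for u_t + c u_x = 0:
#   (1 + sigma) u_j^{n+1} - sigma u_{j-1}^{n+1} = u_j^n,  sigma = c dt / dx.
# The system is lower bidiagonal, so forward substitution solves it exactly.
def implicit_upwind_step(u, sigma, inflow=0.0):
    u_new = np.empty_like(u)
    u_new[0] = inflow                 # boundary condition u(t, x=0), assumed known
    for j in range(1, len(u)):
        u_new[j] = (u[j] + sigma * u_new[j - 1]) / (1.0 + sigma)
    return u_new
```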

One look at the typical definition of the derivative suggests an algebraic approximation

\dv{f}{x} = \lim_{\Delta x \rightarrow 0} \frac{f(x + \Delta x) - f(x)}{\Delta x} \quad \rightarrow \quad \dv{f}{x} \approx \frac{f(x + \Delta x) - f(x)}{\Delta x}

Applying this to our model equation, the linear advection equation,

\pdv{u}{t} + c \pdv{u}{x} = 0

The first-order forward difference approximation for the temporal derivative is

\pdv{u}{t} \approx \frac{u_j^{n+1} - u_j^n}{\Delta t} \equiv \Delta_t u

And the first-order backward approximation for $\pdv{u}{x}$ is

\pdv{u}{x} \approx \frac{u_j^n - u_{j-1}^n}{\Delta x} \equiv \nabla_x u

These are sometimes called Euler differencing, since they are first-order. Plugging in our approximations, we arrive at the "Forward Euler Algorithm"

\frac{u_j^{n+1} - u_j^n}{\Delta t} + c \frac{u_j^n - u_{j-1}^n}{\Delta x} = 0

This is an explicit scheme: solve for $u_j^{n+1}$:

u_j^{n+1} = u_j^n - \frac{c \Delta t}{\Delta x} \left( u_j^n - u_{j-1}^n \right)

The multiplier out front, $\frac{c \Delta t}{\Delta x}$, is very important for stability, so we give it a name:

\frac{c \Delta t}{\Delta x} = \text{Courant number (or CFL number)}

For the problem to be mathematically well-posed, we must know the initial conditions and boundary conditions:

u(t = 0, x) \qquad u(t, x = 0)
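Putting the pieces together, below is a minimal Python sketch of the explicit algorithm. The Gaussian initial profile and the zero inflow boundary are assumptions made for illustration; the update line is exactly the formula above.

```python
import numpy as np

# Forward Euler (explicit upwind) for u_t + c u_x = 0 on x in [0, L].
L, J = 1.0, 100                        # domain length, number of spatial steps
c, sigma = 1.0, 0.5                    # advection speed, Courant number c*dt/dx
dx = L / J
dt = sigma * dx / c                    # time step implied by the Courant number
x = np.linspace(0.0, L, J + 1)
u = np.exp(-200.0 * (x - 0.25) ** 2)   # initial condition u(t=0, x), assumed

for _ in range(100):
    # u_j^{n+1} = u_j^n - sigma * (u_j^n - u_{j-1}^n), vectorized over j >= 1
    u[1:] = u[1:] - sigma * (u[1:] - u[:-1])
    u[0] = 0.0                         # boundary condition u(t, x=0), assumed
```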

Accuracy

We can be more precise about the error in our difference approximation: the error will be on the order of $\Delta x$:

\dv{f}{x} \approx \frac{f(x + \Delta x) - f(x)}{\Delta x}

\rightarrow \quad \dv{f}{x} = \frac{f(x + \Delta x) - f(x)}{\Delta x} + O(\Delta x)

As it turns out, we can improve the accuracy of the finite difference operators by using centered differences:

\left. \pdv{u}{x} \right|_j = \frac{u_{j+1}^n - u_{j-1}^n}{2 \Delta x} + O(\Delta x^2)

So the forward-time, centered-space (FTCS) discretization of the PDE becomes

\frac{u_j^{n+1} - u_j^n}{\Delta t} + c \frac{u_{j+1}^n - u_{j-1}^n}{2 \Delta x} = 0

This is more accurate than the Euler method in space. We can write the accuracy as $O(\Delta t, \Delta x^2)$, or say that it is "first-order accurate in time and second-order accurate in space."
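We can check these orders numerically: the sketch below applies both difference operators to a smooth test function (my choice) and shrinks the spacing; the one-sided error falls by about 10x per decade of $\Delta x$ and the centered error by about 100x.

```python
import numpy as np

# Compare the one-sided O(dx) and centered O(dx^2) approximations of
# the derivative of sin(x) at x = 1 as the spacing shrinks.
exact = np.cos(1.0)
for h in (1e-1, 1e-2, 1e-3):
    fwd = (np.sin(1.0 + h) - np.sin(1.0)) / h              # forward difference
    ctr = (np.sin(1.0 + h) - np.sin(1.0 - h)) / (2.0 * h)  # centered difference
    print(f"dx={h:.0e}  forward={abs(fwd - exact):.2e}  centered={abs(ctr - exact):.2e}")
```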

How do we get accuracy estimates? The accuracy is determined by a Taylor series expansion of the finite-difference operators:

u_j^{n+1} = u_j^n + \pdv{u}{t} \Delta t + \frac{1}{2} \pdv[2]{u}{t} \Delta t^2 + \frac{1}{6} \pdv[3]{u}{t} \Delta t^3 + \ldots

u_{j+1}^n = u_j^n + \pdv{u}{x} \Delta x + \frac{1}{2} \pdv[2]{u}{x} \Delta x^2 + \frac{1}{6} \pdv[3]{u}{x} \Delta x^3 + \ldots

u_{j-1}^n = u_j^n - \pdv{u}{x} \Delta x + \frac{1}{2} \pdv[2]{u}{x} \Delta x^2 - \frac{1}{6} \pdv[3]{u}{x} \Delta x^3 + \ldots

Substituting into the Forward Euler algorithm:

u_j^n + \pdv{u}{t} \Delta t + \frac{1}{2} \pdv[2]{u}{t} \Delta t^2 + \frac{1}{6} \pdv[3]{u}{t} \Delta t^3 + \ldots = u_j^n - \frac{c \Delta t}{\Delta x} \left( u_j^n - \left[ u_j^n - \pdv{u}{x} \Delta x + \frac{1}{2} \pdv[2]{u}{x} \Delta x^2 \mp \ldots \right] \right)

After simplifying and re-arranging/canceling like terms, we get

\pdv{u}{t} + c \pdv{u}{x} = - \frac{1}{2} \pdv[2]{u}{t} \Delta t - \frac{1}{6} \pdv[3]{u}{t} \Delta t^2 - \ldots + \frac{c}{2} \pdv[2]{u}{x} \Delta x - \frac{c}{6} \pdv[3]{u}{x} \Delta x^2 + \ldots

The left-hand side is the PDE we're trying to solve, so everything on the right-hand side is the error introduced by our approximation. The solution we're going to get is actually the solution to the modified PDE, with all of the error terms included. Reading off the lowest-order terms in $\Delta t$ and $\Delta x$, we see that the algorithm is first-order accurate in time and space.
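This Taylor bookkeeping can be automated. The sketch below (a symbolic check using sympy, with symbols of my choosing) expands the Forward Euler residual in $\Delta t$ and $\Delta x$; the $O(1)$ part is the original PDE, and the leading corrections reproduce the modified-PDE error terms above.

```python
import sympy as sp

# Expand the Forward Euler residual in dt and dx. The O(1) part is
# u_t + c u_x; the leading corrections are (dt/2) u_tt - (c dx/2) u_xx,
# matching the modified PDE derived above (after moving them across).
t, x, dt, dx, c = sp.symbols('t x dt dx c', positive=True)
u = sp.Function('u')
residual = (u(t + dt, x) - u(t, x)) / dt + c * (u(t, x) - u(t, x - dx)) / dx
expanded = residual.series(dt, 0, 2).removeO().series(dx, 0, 2).removeO()
print(sp.expand(expanded.doit()))
```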

Algorithm Requirements

For an algorithm to work, it requires the following properties:

  1. Consistency: Whatever finite difference operator $\delta_x u$ we use, applied to our solution, it has to approximate the derivative as the spacing goes to zero

    \delta_x u \rightarrow \pdv{u}{x} \quad \text{as} \quad \Delta x \rightarrow 0

  2. Stability: The solution must remain bounded, so that for a given initial condition the errors do not grow without bound as the time-stepping proceeds

    |u_j^n| < \infty \quad \text{as} \quad n \rightarrow \infty

  3. Accuracy: The error is set by the order $m$ of the finite difference operator

    \pdv{u}{x} - \delta_x u = O(\Delta x^m)

  4. Convergence: The numerical solution must approach the exact solution as the grid spacing goes to zero

    u_j \rightarrow u \quad \text{as} \quad \Delta x \rightarrow 0

A useful theorem is Lax's Theorem: an algorithm that is consistent and stable will converge. This means we only have to prove consistency and stability (and quantify the accuracy); convergence then follows.

Consistency

Let's take our Forward Euler algorithm as an example. From analyzing the accuracy, we derived the modified PDE and noted the leading-order error terms: Forward Euler gave $O(\Delta t, \Delta x)$. As $\Delta t, \Delta x \rightarrow 0$, we recover the original PDE. Therefore, Forward Euler is consistent.

Likewise, FTCS is also consistent: its accuracy was $O(\Delta t, \Delta x^2)$, so taking $\Delta t, \Delta x \rightarrow 0$ again recovers the original PDE.

Stability

The way to perform the stability analysis is to use Fourier transforms to convert from a discrete spatial domain to a continuous frequency domain. Broadly speaking, the mathematical tools we have at our disposal to analyze stability only apply to continuous functions - you can't take a derivative of a set of discrete points. By transforming to a continuous domain, we can use properties of our basis functions to examine the growth of errors over time. This process is called Von Neumann Stability Analysis.

Example: Stability analysis of FTCS

u_j^{n+1} = u_j^n - \frac{c \Delta t}{2 \Delta x} \left( u_{j+1}^n - u_{j-1}^n \right)

We define the error by subtracting the exact solution $\overline{u}$; because the scheme is linear, the error satisfies the same difference equation:

\varepsilon_j^n = u_j^n - \overline{u}_j^n

\varepsilon_j^{n+1} = \varepsilon_j^n - \frac{c \Delta t}{2 \Delta x} \left( \varepsilon_{j+1}^n - \varepsilon_{j-1}^n \right)

To study the evolution of the errors, apply a Fourier transform to get to frequency space:

\varepsilon_j^n = V^n e^{i k_x x_j} = V^n e^{i k_x (j \Delta x)}

V^n = \text{wave amplitude}

k_x = \text{wavenumber} = \frac{2 \pi}{\lambda_x}

Substituting,

V^{n+1} e^{i k_x j \Delta x} = V^n e^{i k_x j \Delta x} - \frac{c \Delta t}{2 \Delta x} \left( V^n e^{i k_x (j+1) \Delta x} - V^n e^{i k_x (j-1) \Delta x} \right)

\rightarrow \quad \frac{V^{n+1}}{V^n} = 1 - \frac{c \Delta t}{2 \Delta x} \left( e^{i k_x \Delta x} - e^{-i k_x \Delta x} \right)

The ratio $V^{n+1}/V^n$ tells us how the amplitudes of the errors will evolve in time. We define the amplification factor $G$

G = \left| \frac{V^{n+1}}{V^n} \right|

For stability, we need the errors not to grow over time, so the Von Neumann stability criterion is:

G \leq 1 \quad \forall \, k_x

Continuing with our example

\frac{V^{n+1}}{V^n} = 1 - \frac{c \Delta t}{2 \Delta x} \left[ \cos(k_x \Delta x) + i \sin(k_x \Delta x) - \cos(-k_x \Delta x) - i \sin(-k_x \Delta x) \right]

= 1 - i \frac{c \Delta t}{\Delta x} \sin(k_x \Delta x)

To get the amplification factor, we need the norm

G = \left[ \left( 1 - i \frac{c \Delta t}{\Delta x} \sin(k_x \Delta x) \right) \left( 1 + i \frac{c \Delta t}{\Delta x} \sin(k_x \Delta x) \right) \right]^{1/2}

= \sqrt{1 + \left( \frac{c \Delta t}{\Delta x} \right)^2 \sin^2(k_x \Delta x)}

But that is greater than 1 for every mode with $\sin(k_x \Delta x) \neq 0$! So FTCS is unconditionally unstable.
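A quick numerical scan (Courant number of my choosing) confirms this: evaluating $G$ over the resolvable wavenumbers shows that no mode is damped, and every mode with $\sin(k_x \Delta x) \neq 0$ grows.

```python
import numpy as np

# FTCS amplification factor G = |1 - i*sigma*sin(theta)|, theta = k_x*dx.
sigma = 0.5                                   # Courant number; any value > 0 behaves the same
theta = np.linspace(0.0, np.pi, 181)          # resolvable wavenumbers
G = np.abs(1.0 - 1j * sigma * np.sin(theta))
print(G.min(), G.max())                       # min is 1.0 (theta = 0, pi); the rest exceed 1
```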

Performing the same analysis for Forward Euler, we get a stability condition:

0 \leq G^2 = 1 + 2 \frac{c \Delta t}{\Delta x} \left( \frac{c \Delta t}{\Delta x} - 1 \right) \left( 1 - \cos(k_x \Delta x) \right) \leq 1

Figure 20.4

We can see that the CFL number must be

0 \leq \frac{c \Delta t}{\Delta x} \leq 1
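We can verify the limit numerically with the expression for $G^2$ above, scanning a few Courant numbers (values of my choosing) against all resolvable wavenumbers; the bound $0 \leq G^2 \leq 1$ holds for $0 \leq c\Delta t / \Delta x \leq 1$ and fails beyond it.

```python
import numpy as np

# G^2 = 1 + 2*s*(s - 1)*(1 - cos(theta)) for Forward Euler, s = c*dt/dx.
theta = np.linspace(0.0, np.pi, 181)
for s in (0.25, 0.5, 1.0, 1.1):
    G2 = 1.0 + 2.0 * s * (s - 1.0) * (1.0 - np.cos(theta))
    stable = bool(G2.min() >= 0.0 and G2.max() <= 1.0)
    print(f"CFL = {s}: stable = {stable}")    # True up through 1.0, False at 1.1
```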

Now consider the possible values of $k_x \Delta x = \frac{2\pi}{\lambda_x} \Delta x$. The minimum of $k_x \Delta x$ occurs at $\lambda_x = \infty$: a uniform error throughout the domain. This only occurs if the error is everywhere zero, since the boundary conditions are presumed accurate. In practice, the maximum wavelength is equal to twice the length of the domain

\lambda_{max} = 2L

(k_x \Delta x)_{min} = \frac{2\pi}{2L} \Delta x = \frac{\pi}{J}

The maximum of $k_x \Delta x$ would occur at $\lambda_x \rightarrow 0$, but this is not possible on a grid with a finite number of points. The Nyquist limit tells us that $\lambda_{min} = 2 \Delta x$. That means that

(k_x \Delta x)_{max} = \frac{2\pi}{2 \Delta x} \Delta x = \pi

When a stability condition is violated, what do we expect to see? The error growth is largest for the largest value of $k_x \Delta x$, which corresponds to a wavelength of twice the grid spacing. In practice, that looks like a point-to-point sawtooth oscillation that blows up.

Convergence

We want to show that the numerical solution approaches the exact solution to the original PDE.

Lax's equivalence theorem states:

If an algorithm is consistent and stability requirements are satisfied, the numerical solution will converge to the solution of the original PDE.

This is why consistency and stability are so important: if you have both, then you have convergence, which tells you that the solution you compute is actually a solution of the problem you set out to solve.

Accuracy

The solution exists at discrete locations for multiple variables, e.g. $\rho, p, u, v, \ldots$, so we want a convenient measure of solution accuracy.

The general p-norm is defined as

\text{p-norm} = L_p = \left[ \sum_{j=1}^J |z_j|^p \right]^{1/p}

In practice, the most common norms that we use are the 1-, 2-, and $\infty$-norms.

L_1 = \sum_j |z_j| \quad \text{(average)}

L_2 = \left[ \sum_j |z_j|^2 \right]^{1/2} \quad \text{(variance)}

L_\infty = \max_j |z_j| \quad \text{(max-norm)}
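For reference, here are the three norms in Python for an arbitrary error vector (the sample values are made up):

```python
import numpy as np

z = np.array([0.1, -0.3, 0.2, -0.05])   # sample error vector (made up)
L1   = np.sum(np.abs(z))                # 1-norm
L2   = np.sqrt(np.sum(z ** 2))          # 2-norm
Linf = np.max(np.abs(z))                # infinity (max) norm
print(L1, L2, Linf)                     # same as np.linalg.norm(z, ord) for ord = 1, 2, np.inf
```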

In practice, the 2-norm is the most rigorous definition of the error. Specifically, we define the error norm in the form

\text{Error norm} = \left[ \sum_{j=1}^J \Delta x \, \varepsilon_j^2 \right]^{1/2} = \sqrt{\Delta x} \, \| \varepsilon \|_2

\varepsilon_j = u_j - \overline{u}_j
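As a sketch of how this norm is used, the Python below runs the explicit upwind scheme from earlier at two resolutions and evaluates $\sqrt{\Delta x}\,\|\varepsilon\|_2$ against the exact advected profile. The pulse, final time, and boundary handling are illustrative assumptions; halving $\Delta x$ at a fixed Courant number roughly halves the error, consistent with first-order accuracy.

```python
import numpy as np

def upwind_error(J, sigma=0.5, c=1.0, T=0.25, L=1.0):
    """Advect a Gaussian with the upwind scheme; return sqrt(dx)*||eps||_2."""
    dx = L / J
    dt = sigma * dx / c
    steps = int(round(T / dt))
    x = np.linspace(0.0, L, J + 1)
    exact = lambda s: np.exp(-200.0 * (s - 0.3) ** 2)  # u(t, x) = u0(x - c*t)
    u = exact(x)
    for _ in range(steps):
        u[1:] -= sigma * (u[1:] - u[:-1])              # explicit upwind update
        u[0] = 0.0                                     # inflow boundary, assumed
    eps = u - exact(x - c * steps * dt)                # pointwise error at t = T
    return np.sqrt(dx) * np.linalg.norm(eps)           # the error norm above

print(upwind_error(100), upwind_error(200))            # ratio ~ 2: first order
```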

Sources of Error

Where do errors come from? In order to reduce our model's error, we must first understand its sources.

Truncation errors - Result from the terms of the Taylor series that are neglected by the difference approximation. They can be reduced by using higher-order accurate difference operators.

Round-off errors - Result from limited machine accuracy (the precision of the floating-point representation). They can be reduced by storing values at higher precision or by performing fewer calculations.

Bugs - Don't have bugs. ha. ha. ha. But really, we can minimize programming errors through practices like good planning, good comments, etc.