
# Explicit Finite Difference Algorithms

## Explicit Algorithms

### Forward Euler (FTFS)

n+1 --------|--------*--------|--------

n   --------|--------*--------*--------
            j-1      j        j+1


<p id='katexcfc6f92425fa92c6a6db4ab5fe2555eb'>katexcfc6f92425fa92c6a6db4ab5fe2555eb</p>

The accuracy is <span id='katex29f550b1c52bbc78b4a8e647035b080d'>katex29f550b1c52bbc78b4a8e647035b080d</span>. Previously we saw that the von Neumann stability analysis shows

<p id='katex4638d6994b6c0e44942466ad0de921b4'>katex4638d6994b6c0e44942466ad0de921b4</p>

<p id='katexd6f481218a6ed30094fa82cc5b4ffeec'>katexd6f481218a6ed30094fa82cc5b4ffeec</p>
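As a concrete sketch, here is one FTFS update on a periodic grid in NumPy. The update formula in the code assumes the standard forward-time, forward-space form of the scheme; `ftfs_step` and its arguments are illustrative names, not from the notes.

```python
import numpy as np

def ftfs_step(u, a, dt, dx):
    """One forward-time, forward-space (FTFS) update for u_t + a u_x = 0.

    Assumes the standard form u_j^{n+1} = u_j^n - (a dt/dx)(u_{j+1}^n - u_j^n)
    on a periodic grid; np.roll(u, -1) supplies u_{j+1}.
    """
    return u - (a * dt / dx) * (np.roll(u, -1) - u)
```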

### Forward-Time Centered Space (FTCS)

<p id='katexcba1d121fcfc4c557b4f87d218f64845'>katexcba1d121fcfc4c557b4f87d218f64845</p>

We also found that this is unconditionally unstable.

### Leap-Frog

<p id='katex7e039042e49b9c3645edbb6eb231181f'>katex7e039042e49b9c3645edbb6eb231181f</p>


n+1 --------|--------*--------|--------


n   --------*--------|--------*--------


n-1 --------|--------*--------|--------
            j-1      j        j+1

Accuracy: $O(\Delta t^2, \Delta x^2)$

Stability: It is marginally stable ($G = 1$) if $\left| \frac{a \Delta t}{\Delta x} \right| \leq 1$

It is also time-reversible, so we call it a symplectic integrator. This is a good thing! A time-reversible integrator will conserve energy.

One big problem with Leap-Frog is the starting problem. How do we compute the very first iteration to get $n=1$ when we don't have information about $n=-1$? We need to "start" the algorithm by calculating the first step using a different method.
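A sketch of Leap-Frog with a starter step, assuming the standard Leap-Frog update for linear advection. Here a single Lax step (one of several reasonable choices) supplies the $n=1$ level; the function name and signature are illustrative.

```python
import numpy as np

def leapfrog_advection(u0, c, nsteps):
    """Leap-frog for u_t + a u_x = 0 on a periodic grid, c = a dt/dx.

    Standard update: u_j^{n+1} = u_j^{n-1} - c (u_{j+1}^n - u_{j-1}^n).
    Leap-frog needs two previous time levels, so the first step
    (the "starting problem") is taken with a single Lax step.
    """
    u_prev = np.asarray(u0, dtype=float)
    up, um = np.roll(u_prev, -1), np.roll(u_prev, 1)
    u = 0.5 * (up + um) - 0.5 * c * (up - um)  # Lax starter -> n = 1
    for _ in range(nsteps - 1):
        u_next = u_prev - c * (np.roll(u, -1) - np.roll(u, 1))
        u_prev, u = u, u_next
    return u
```

Both the Lax starter and the Leap-Frog update conserve the discrete sum of `u` exactly on a periodic grid, which makes a convenient sanity check.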

### Lax Algorithm

```math
u_j ^{n+1} = \frac{u_{j+1} ^n + u_{j-1} ^n}{2} - \frac{a \Delta t}{\Delta x} \left( \frac{ u_{j+1} ^n - u_{j-1} ^n}{2} \right)
```

Accuracy: $O(\Delta t, \Delta x^2/\Delta t)$

Stability: $\left| \frac{a \Delta t}{\Delta x} \right| \leq 1$

It may not be consistent if $\Delta t \rightarrow 0$ faster than $\Delta x^2 \rightarrow 0$.

We can re-write the Lax algorithm as

```math
u_j ^{n+1} = u_j ^n - \frac{ a \Delta t}{\Delta x} \left( \frac{ u_{j+1} ^n - u_{j-1} ^n}{2} \right) + \frac{ u_{j+1} ^n - 2 u_j ^n + u_{j-1} ^n}{2}
```

Check out the last term - it's a difference operator for the second derivative. So what PDE do we approximate?

```math
\pdv{u}{t} + a \pdv{u}{x} = \frac{\Delta x^2}{2 \Delta t} \pdv{^2 u}{x^2}
```

So we've added a second-derivative term to the linear advection equation. A second-derivative term acts as an artificial viscosity, with viscosity coefficient $\Delta x^2 / (2 \Delta t)$.
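The equivalence of the two forms of the Lax update is easy to verify numerically. A sketch (periodic grid; `c` stands for the Courant number $a \Delta t / \Delta x$; function names are illustrative):

```python
import numpy as np

def lax_step(u, c):
    """Lax update: u_j^{n+1} = (u_{j+1}+u_{j-1})/2 - (c/2)(u_{j+1}-u_{j-1})."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return 0.5 * (up + um) - 0.5 * c * (up - um)

def lax_step_rewritten(u, c):
    """Same update written as FTCS plus the term (u_{j+1} - 2 u_j + u_{j-1})/2."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * c * (up - um) + 0.5 * (up - 2.0 * u + um)
```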

### Lax-Wendroff Algorithm

As we just saw, adding an artificial viscosity term can help to stabilize an unstable algorithm. If we take a Taylor series expansion

```math
u_j ^{n+1} = u_j ^n + \Delta t\, u_t + \frac{ \Delta t ^2}{2} u_{tt} + \frac{\Delta t^3}{6} u_{ttt} + \ldots
```

and use our governing equation to replace the time derivatives with spatial derivatives

```math
u_t = - a u_x \qquad u_{tt} = a^2 u_{xx}
```

Then we get

```math
u_{j} ^{n+1} = u_j ^n - a \Delta t\, u_x + \frac{a^2 \Delta t^2}{2} u_{xx} - \ldots
```

Substituting the centered finite-difference operators

```math
\pdv{u}{x} \approx \frac{u_{j+1} ^n - u_{j-1} ^n}{2 \Delta x} \qquad \pdv{^2 u}{x^2} \approx \frac{u_{j+1} ^n - 2 u_j ^n + u_{j-1} ^n}{\Delta x^2}
```

```math
u_j ^{n+1} = u_j ^n - \frac{ a \Delta t}{2 \Delta x} \left( u_{j+1} ^n - u_{j-1} ^n \right) + \frac{ a^2 \Delta t^2}{2 \Delta x ^2} \left( u_{j+1} ^n - 2 u_j ^n + u_{j-1} ^n \right)
```

Checking back up the page, we see the FTCS embedded directly in the algorithm, and we've got the same form of the second-derivative term but with a different multiplier: instead of a $1$ we've got $\frac{a^2 \Delta t^2}{\Delta x^2}$.

What PDE are we approximating?

```math
\pdv{u}{t} + a \pdv{u}{x} = \frac{a^2 \Delta t}{2} \pdv{^2u}{x^2}
```

Again we've added artificial viscosity, but now the viscosity coefficient $a^2 \Delta t / 2$ depends only on $\Delta t$ and goes to zero as $\Delta t \rightarrow 0$.

We find that the Lax-Wendroff algorithm is $O(\Delta t^2, \Delta x^2)$.

n+1 --------|--------*--------|--------

n   --------*--------*--------*--------
            j-1      j        j+1
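As a sketch, the one-step Lax-Wendroff update in NumPy (periodic grid; `c` is the Courant number $a \Delta t / \Delta x$; names are illustrative). A handy check: at $c = 1$ the scheme reduces to an exact one-cell shift.

```python
import numpy as np

def lax_wendroff_step(u, c):
    """One-step Lax-Wendroff for u_t + a u_x = 0 on a periodic grid.

    u_j^{n+1} = u_j - (c/2)(u_{j+1} - u_{j-1}) + (c^2/2)(u_{j+1} - 2 u_j + u_{j-1})
    """
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2.0 * u + um)
```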


Another advantage of Lax-Wendroff over the Leap-Frog algorithm is that it avoids the starting problem. We don't need any information from step <span id='katexa553465fa7dd567c5c4bb3947dff9249'>katexa553465fa7dd567c5c4bb3947dff9249</span>, so it doesn't need to be started. How can that happen?

Write Lax-Wendroff as a two-step (predictor-corrector) method. Say we're going to use a Lax algorithm to advance to a midpoint:


n+1    --------|-----------------*-----------------|--------


n+1/2  -----------------*-----------------*-----------------
                        j-1/2             j+1/2

n      --------|-----------------*-----------------|--------
               j-1               j                 j+1

```math
u_{j+\frac{1}{2}} ^{n + \frac{1}{2}} = \frac{u^n _{j+1} + u_j ^n}{2} - \frac{a \Delta t}{\Delta x} \left( \frac{ u_{j+1} ^n - u_j ^n}{2} \right)
```

```math
u_j ^{n+1} = u_j ^n - \frac{a \Delta t}{\Delta x} \left( u_{j+1/2} ^{n+1/2} - u_{j-1/2} ^{n+1/2} \right)
```
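For linear advection, this two-step (Richtmyer) form reproduces the one-step Lax-Wendroff update exactly, which is easy to confirm numerically. A sketch on a periodic grid (`c` is the Courant number; names are illustrative):

```python
import numpy as np

def lax_wendroff_two_step(u, c):
    """Predictor-corrector form of Lax-Wendroff, c = a dt/dx, periodic grid.

    Predictor: a Lax half-step to the midpoints j+1/2.
    Corrector: a centered full step using the midpoint values.
    """
    up = np.roll(u, -1)                            # u_{j+1}^n
    u_half = 0.5 * (up + u) - 0.5 * c * (up - u)   # u_{j+1/2}^{n+1/2}
    return u - c * (u_half - np.roll(u_half, 1))   # roll gives u_{j-1/2}^{n+1/2}
```

Expanding the corrector in terms of the predictor recovers the one-step formula term by term, so no intermediate time level ever needs to be stored across steps.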

Second-order accuracy, stability for $\left| a \Delta t / \Delta x \right| \leq 1$, and no starting problem: that's why Lax-Wendroff is the most popular explicit algorithm for linear PDEs.