Explicit Finite Difference Algorithms #

Explicit Algorithms #

Forward Euler (FTFS) #

n+1 --------|--------*--------|--------


n   --------|--------*--------*--------
            j-1      j        j+1
\[u_j ^{n+1} = u_j ^n - \frac{a \Delta t}{\Delta x} (u_{j+1} ^n - u_j ^n)\]

The accuracy is \( O(\Delta t, \Delta x) \). Previously we saw that the von Neumann stability analysis shows

\[a > 0 \rightarrow \text{ Unconditionally unstable}\] \[a < 0 \rightarrow \left| \frac{ a \Delta t}{\Delta x} \right| \leq 1 \quad \text{ for stability }\]
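For concreteness, here is a minimal sketch of the FTFS update in NumPy (not part of the original notes); it assumes a periodic domain so that `np.roll` supplies the \( j+1 \) neighbor, and `ftfs_step` is just an illustrative name.

```python
import numpy as np

def ftfs_step(u, a, dt, dx):
    """One forward-time, forward-space (FTFS) update:
    u_j^{n+1} = u_j^n - (a dt / dx) (u_{j+1}^n - u_j^n).
    Assumes a periodic domain, so np.roll(u, -1) supplies u_{j+1}^n.
    """
    nu = a * dt / dx                       # Courant number
    return u - nu * (np.roll(u, -1) - u)   # forward difference in space

# Example: a < 0 is the (conditionally) stable case; here |a| dt / dx = 0.5
x = np.linspace(0.0, 1.0, 101)
u = np.exp(-100.0 * (x - 0.5) ** 2)        # Gaussian initial condition
for _ in range(50):
    u = ftfs_step(u, a=-1.0, dt=0.005, dx=x[1] - x[0])
```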

Forward-Time Centered Space (FTCS) #

\[u_j ^{n+1} = u_j ^n - \frac{a \Delta t}{\Delta x} \frac{u_{j+1} ^n - u_{j-1} ^n}{2}\]

We also found that this is unconditionally unstable.
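A sketch of the FTCS update under the same periodic-domain assumption, for experimentation only, since any sustained integration with it will blow up:

```python
import numpy as np

def ftcs_step(u, a, dt, dx):
    """One forward-time, centered-space (FTCS) update (unconditionally unstable):
    u_j^{n+1} = u_j^n - (a dt / dx) (u_{j+1}^n - u_{j-1}^n) / 2.
    Periodic domain assumed via np.roll.
    """
    nu = a * dt / dx
    return u - 0.5 * nu * (np.roll(u, -1) - np.roll(u, 1))
```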

Leap-Frog #

\[u_{j}^{n+1} = u_j ^{n-1} - \frac{a \Delta t}{\Delta x} (u_{j+1} ^n - u_{j-1} ^n)\]

n+1 --------|--------*--------|--------


n   --------*--------|--------*--------


n-1 --------|--------*--------|--------
            j-1      j        j+1

Accuracy: \( O(\Delta t^2, \Delta x^2) \)

Stability: It is marginally stable (\( |G| = 1 \)) if \( \left| \frac{ a \Delta t}{\Delta x} \right| \leq 1 \)

It is also time-reversible, so we call it a symplectic integrator. This is a good thing! A time-reversible integrator will conserve energy.

One big problem with Leap-Frog is the starting problem. How do we compute the very first iteration to get \( n=1 \) when we don’t have information about \( n=-1 \)? We need to “start” the algorithm by calculating the first step using a different method.
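As a sketch of one workaround (an assumption here, not prescribed above): take the first step with a one-level method, then switch to Leap-Frog. A single FTCS step is a simple choice for the starter, since one step of it introduces only an \( O(\Delta t) \) error. Periodic boundaries via `np.roll` are assumed.

```python
import numpy as np

def leapfrog(u0, a, dt, dx, nsteps):
    """Leap-Frog integration of u_t + a u_x = 0 on a periodic grid.
    The first step is taken with FTCS to supply the missing n-1 level.
    """
    nu = a * dt / dx
    u_old = u0
    # Start-up step (n = 0 -> n = 1) with a one-level scheme
    u = u0 - 0.5 * nu * (np.roll(u0, -1) - np.roll(u0, 1))
    for _ in range(nsteps - 1):
        # Leap-Frog: u^{n+1} = u^{n-1} - nu (u_{j+1}^n - u_{j-1}^n)
        u_new = u_old - nu * (np.roll(u, -1) - np.roll(u, 1))
        u_old, u = u, u_new
    return u
```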

Lax Algorithm #

\[u_j ^{n+1} = \frac{u_{j+1} ^n + u_{j-1} ^n}{2} - \frac{a \Delta t}{\Delta x} \left( \frac{ u_{j+1} ^n - u_{j-1} ^n}{2} \right)\]

Accuracy: \( O(\Delta t, \Delta x^2/\Delta t) \)

Stability: \( \left| \frac{ a \Delta t}{\Delta x} \right| \leq 1 \)

It may not be consistent if \( \Delta t \rightarrow 0 \) faster than \( \Delta x^2 \rightarrow 0 \), since the \( \Delta x^2 / \Delta t \) term in the truncation error then blows up.
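A sketch of the Lax update in the same setting (periodic domain via `np.roll`):

```python
import numpy as np

def lax_step(u, a, dt, dx):
    """One Lax update: replace u_j^n by the neighbor average, then apply
    the centered difference. Periodic domain assumed via np.roll.
    """
    nu = a * dt / dx
    up, um = np.roll(u, -1), np.roll(u, 1)   # u_{j+1}^n, u_{j-1}^n
    return 0.5 * (up + um) - 0.5 * nu * (up - um)
```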

We can re-write the Lax algorithm as

\[u_j ^{n+1} = u_j ^n - \frac{ a \Delta t}{\Delta x} \left( \frac{ u_{j+1} ^n - u_{j-1} ^n}{2} \right) + \frac{ u_{j+1} ^n - 2 u_j ^n + u_{j-1} ^n}{2}\]

Check out the last term - it’s a difference operator for the second derivative. So what PDE do we approximate?
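Moving \( u_j^n \) to the left and dividing through by \( \Delta t \) makes each piece recognizable as a difference approximation to a derivative:

\[\frac{u_j ^{n+1} - u_j ^n}{\Delta t} + a \frac{u_{j+1} ^n - u_{j-1} ^n}{2 \Delta x} = \frac{\Delta x^2}{2 \Delta t} \, \frac{u_{j+1} ^n - 2 u_j ^n + u_{j-1} ^n}{\Delta x^2}\]

So the PDE we are actually approximating is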

\[\pdv{u}{t} + a \pdv{u}{x} = \frac{\Delta x^2}{2 \Delta t} \pdv{^2 u}{x^2}\]

So we’ve added a second-derivative term to the linear advection equation. A second-derivative term acts as an artificial viscosity term, with viscosity \( \Delta x^2 / (2 \Delta t) \).

Lax-Wendroff Algorithm #

As we just saw, adding an artificial viscosity term can help to stabilize an unstable algorithm. If we take a Taylor series expansion

\[u_j ^{n+1} = u_j ^n + \Delta t u_t + \frac{ \Delta t ^2}{2} u_{tt} + \frac{\Delta t^3}{6} u_{ttt} + \ldots\]

and use our governing equation to replace some of the time derivatives with spatial derivatives

\[u_t = - a u_x \] \[u_{tt} = -a (u_t)_x = a^2 u_{xx}\]

Then we get

\[u_{j} ^{n+1} = u_j ^n - a \Delta t\, u_x + \frac{a^2 \Delta t^2}{2} u_{xx} - \ldots\]

Now substitute centered finite difference operators for the spatial derivatives into the expansion

\[\pdv{u}{x} \approx \frac{u_{j+1} ^n - u_{j-1} ^n}{2 \Delta x} \qquad \pdv{^2 u}{x^2} \approx \frac{u_{j+1} ^n - 2 u_j ^n + u_{j-1} ^n}{\Delta x^2}\] \[\rightarrow u_j ^{n+1} = u_j ^n - \frac{ a \Delta t}{2 \Delta x} ( u_{j+1} ^n - u_{j-1} ^n) + \frac{ a^2 \Delta t^2}{2 \Delta x ^2}(u_{j+1} ^n - 2 u_j ^n + u_{j-1} ^n)\]

Checking back up the page, we see the FTCS scheme embedded directly in the algorithm, and we’ve got the same form of the second-difference term as in the Lax rewrite, but with a different multiplier: instead of a \( 1 \) we’ve got \( \frac{ a^2 \Delta t^2}{\Delta x^2} \).

What PDE are we approximating?

\[\pdv{u}{t} + a \pdv{u}{x} = \frac{a^2 \Delta t}{2} \pdv{^2u}{x^2}\]

Again we’ve added artificial viscosity, but now the viscosity, \( a^2 \Delta t / 2 \), depends only on \( \Delta t \) and goes to zero as \( \Delta t \rightarrow 0 \).

We find that the Lax-Wendroff algorithm is \( O(\Delta t^2, \Delta x^2) \)



n+1 --------|--------*--------|--------


n   --------*--------*--------*--------
            j-1      j        j+1
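A sketch of the single-step Lax-Wendroff update (periodic domain assumed via `np.roll`):

```python
import numpy as np

def lax_wendroff_step(u, a, dt, dx):
    """One single-step Lax-Wendroff update: FTCS plus the (a dt / dx)^2 / 2
    second-difference term. Periodic domain assumed via np.roll.
    """
    nu = a * dt / dx
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)
```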

Another advantage of Lax-Wendroff over the Leap-Frog algorithm is that it avoids the starting problem. We don’t need any information from step \( n-1 \), so it doesn’t need to be started. How can that happen?

We can write Lax-Wendroff as a 2-step (predictor-corrector) method. Say we’re going to use a Lax algorithm to advance to the midpoints at the half time level:



n+1    --------|-----------------*-----------------|--------


n+1/2  -----------------*-----------------*-----------------
                        j-1/2             j+1/2

n      --------|-----------------*-----------------|--------
               j-1               j                 j+1
\[u_{j+\frac{1}{2}} ^{n + \frac{1}{2}} = \frac{u^n _{j+1} + u_j ^n}{2} - \frac{a \Delta t}{\Delta x} \left( \frac{ u_{j+1} ^n - u_j ^n}{2} \right)\] \[u_j ^{n+1} = u_j ^n - \frac{a \Delta t}{\Delta x} \left( u_{j+1/2} ^{n+1/2} - u_{j-1/2} ^{n+1/2} \right)\]
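Here is a sketch of the two-step form on a periodic grid; the indexing convention (entry \( j \) of `u_half` holds \( u_{j+1/2}^{n+1/2} \)) is an implementation choice, not something fixed by the notes.

```python
import numpy as np

def lax_wendroff_two_step(u, a, dt, dx):
    """Two-step (predictor-corrector) Lax-Wendroff update on a periodic grid."""
    nu = a * dt / dx
    up = np.roll(u, -1)                       # u_{j+1}^n
    # Predictor: Lax step to the midpoints at the half time level;
    # u_half[j] holds u_{j+1/2}^{n+1/2}
    u_half = 0.5 * (up + u) - 0.5 * nu * (up - u)
    # Corrector: u_j^{n+1} = u_j^n - nu (u_{j+1/2}^{n+1/2} - u_{j-1/2}^{n+1/2})
    return u - nu * (u_half - np.roll(u_half, 1))
```

Expanding the corrector shows it reproduces the single-step formula above, which is why no extra starting information is needed.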

This combination of second-order accuracy, a single starting time level, and a simple two-step implementation is why Lax-Wendroff is one of the most popular explicit algorithms for linear PDEs.