By definition, finite differencing is a method that approximates a partial differential equation we cannot solve analytically by a system of algebraic equations that we can.
Notation to simplify our representations:
Superscripts: We use superscripts to denote steps in the time domain, t^n = nΔt. Here n = 0, 1, …, N is the step index and Δt = T/N is the time step.
Subscripts: x_j = jΔx. Here j = 0, 1, …, J is the step index and Δx = L/J is the spatial step.
Together: u_j^n = u(t^n, x_j)
Explicit algorithms: Use data that is already known at the present time to advance the solution to the next time step. They are easier and faster to implement, but they introduce stability constraints.
u_j^{n+1} = f(u_j^n, u_{j+1}^n, u_{j+2}^n, …)
Implicit algorithms: Use data from the next time step when advancing the solution. This leads to a system of equations that must be solved simultaneously.
u_j^{n+1} = f(u_j^n, u_{j+1}^{n+1}, u_{j−1}^{n+1}, …)
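For a concrete sketch (all names here are illustrative): applying backward-in-time, upwind-in-space differencing to an advection equation ∂u/∂t + c ∂u/∂x = 0 gives (1 + C) u_j^{n+1} − C u_{j−1}^{n+1} = u_j^n with C = cΔt/Δx. The unknowns are coupled, but for this one-sided stencil the system is bidiagonal and can be solved with a single forward sweep:

```python
import numpy as np

# Sketch of one implicit (backward-time, upwind-space) step for
# du/dt + c du/dx = 0 with c > 0. The update
# (1 + C) u_j^{n+1} - C u_{j-1}^{n+1} = u_j^n couples the unknowns,
# but the bidiagonal system is solvable with one forward sweep.
# Function and variable names are illustrative, not from the notes.

def implicit_upwind_step(u, C, inflow):
    """Advance one step; C = c*dt/dx, inflow = boundary value u(t, x=0)."""
    u_new = np.empty_like(u)
    u_new[0] = inflow
    for j in range(1, len(u)):
        u_new[j] = (u[j] + C * u_new[j - 1]) / (1.0 + C)
    return u_new

u0 = np.where(np.arange(10) < 5, 1.0, 0.0)    # step-shaped initial condition
u1 = implicit_upwind_step(u0, C=0.5, inflow=1.0)
```

A more general implicit stencil (e.g. centered in space) would produce a tridiagonal system, solved in practice with a tridiagonal solver rather than a simple sweep.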
One look at the limit definition of the derivative suggests an algebraic approximation.
Applying to our model equation, the linear advection equation
∂u/∂t + c ∂u/∂x = 0
The first-order forward difference approximation for the temporal derivative is
∂u/∂t ≈ (u_j^{n+1} − u_j^n)/Δt ≡ Δ_t u
And the first-order backward approximation for ∂x∂u is
∂u/∂x ≈ (u_j^n − u_{j−1}^n)/Δx ≡ ∇_x u
These are sometimes called Euler differencing, since they are first-order. Plugging in our approximations, we arrive at the "Forward Euler Algorithm"
(u_j^{n+1} − u_j^n)/Δt + c (u_j^n − u_{j−1}^n)/Δx = 0
This is an explicit scheme: solve for u_j^{n+1}
u_j^{n+1} = u_j^n − (cΔt/Δx)(u_j^n − u_{j−1}^n)
The multiplier out front, cΔt/Δx, is very important for stability, so we give it a name:
cΔt/Δx = Courant number (or CFL number)
For the problem to be mathematically well-posed, we must know the initial condition and the boundary condition:
u(t=0, x) and u(t, x=0)
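The pieces above can be assembled into a minimal working sketch of the Forward Euler (upwind) algorithm. The grid sizes, speed, and Gaussian initial condition below are illustrative choices, not from the notes:

```python
import numpy as np

# Minimal sketch of the explicit Forward Euler (upwind) algorithm for
# du/dt + c du/dx = 0. Grid sizes, c, and the initial pulse are
# illustrative.
L, J = 1.0, 50          # domain length, number of spatial steps -> dx = L/J
T, N = 0.5, 50          # final time, number of time steps -> dt = T/N
c = 1.0                 # advection speed
dx, dt = L / J, T / N
C = c * dt / dx         # Courant (CFL) number; here 0.5

x = np.linspace(0.0, L, J + 1)
u = np.exp(-100.0 * (x - 0.3) ** 2)   # initial condition u(t=0, x)

for n in range(N):
    u[1:] -= C * (u[1:] - u[:-1])     # u_j^{n+1} = u_j^n - C (u_j^n - u_{j-1}^n)
    u[0] = 0.0                        # boundary condition u(t, x=0)
# The pulse has advected from x = 0.3 to roughly x = 0.3 + c*T = 0.8.
```

Note that the update for j = 1..J uses only known values from step n, which is what makes the scheme explicit.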
Accuracy
We can be more precise about the error in our difference approximation: the error will be on the order of Δx:
df/dx ≈ (f(x + Δx) − f(x))/Δx
→ df/dx = (f(x + Δx) − f(x))/Δx + O(Δx)
As it turns out, we can improve the accuracy of the finite difference operators by using centered differences:
∂u/∂x |_j = (u_{j+1}^n − u_{j−1}^n)/(2Δx) + O(Δx²)
So the forward-time, centered-space (FTCS) PDE becomes
(u_j^{n+1} − u_j^n)/Δt + c (u_{j+1}^n − u_{j−1}^n)/(2Δx) = 0
This is more accurate than the Euler method in space. We can write the accuracy as O(Δt, Δx²), or "it is first-order accurate in time and second-order accurate in space."
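A quick numerical check of these accuracy claims (the test function sin(x), the point x0 = 1, and the step sizes are illustrative): the one-sided difference error shrinks like Δx, while the centered difference error shrinks like Δx²:

```python
import numpy as np

# Compare one-sided and centered difference errors for d/dx sin(x) at
# x0 = 1, where the exact derivative is cos(1). Values are illustrative.
x0, exact = 1.0, np.cos(1.0)
for dx in (0.1, 0.01, 0.001):
    fwd = (np.sin(x0 + dx) - np.sin(x0)) / dx              # O(dx)
    ctr = (np.sin(x0 + dx) - np.sin(x0 - dx)) / (2 * dx)   # O(dx**2)
    print(dx, abs(fwd - exact), abs(ctr - exact))
# Each 10x reduction in dx cuts the forward error ~10x and the
# centered error ~100x.
```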
How do we get accuracy estimates? The accuracy is defined by using a Taylor series expansion for the finite-difference operators:
u_j^{n+1} = u_j^n + (∂u/∂t)Δt + (1/2)(∂²u/∂t²)Δt² + (1/6)(∂³u/∂t³)Δt³ + …
u_{j+1}^n = u_j^n + (∂u/∂x)Δx + (1/2)(∂²u/∂x²)Δx² + (1/6)(∂³u/∂x³)Δx³ + …
u_{j−1}^n = u_j^n − (∂u/∂x)Δx + (1/2)(∂²u/∂x²)Δx² − (1/6)(∂³u/∂x³)Δx³ + …
Substituting into the Forward Euler algorithm:
u_j^n + (∂u/∂t)Δt + (1/2)(∂²u/∂t²)Δt² + …
= u_j^n − (cΔt/Δx)(u_j^n − [u_j^n − (∂u/∂x)Δx + (1/2)(∂²u/∂x²)Δx² − …])
After simplifying and re-arranging/canceling like terms, then dividing through by Δt, we get
∂u/∂t + c ∂u/∂x = −(Δt/2)(∂²u/∂t²) + (cΔx/2)(∂²u/∂x²) + …
The left-hand side is the PDE we're trying to solve, so everything on the right-hand side is the error term introduced by our approximation. The solution we're going to get is actually the solution to the modified PDE with all of the error terms. Reading off the lowest-order terms of Δx and Δt we see that the algorithm is first-order accurate in space and time.
Algorithm Requirements
For an algorithm to work, it requires the following properties:
Consistency: Whatever finite-difference operator δ_x u we use, applied to our solution, has to approximate the derivative as the spacing goes to zero:
δ_x u → ∂u/∂x as Δx → 0
Stability: The solution must remain bounded; for a given initial condition, errors in the numerical solution must not grow without bound as the number of steps grows:
|u_j − u| remains bounded as n → ∞
Accuracy: The accuracy is set by the order of the finite-difference operator:
∂u/∂x − δ_x u = O(Δx^m)
Convergence: The numerical solution must approach the exact solution as the grid spacing goes to zero
u_j → u as Δx → 0
A useful theorem is Lax's theorem: an algorithm that is consistent and stable will converge. This means that we only have to prove consistency and stability to guarantee convergence.
Consistency
Let's take our Forward Euler algorithm as an example: From analyzing the accuracy, we derived the modified PDE and noted the leading-order error terms. Forward Euler gave O(Δx,Δt). As Δt,Δx→0, we recover the original PDE. Therefore, FE is consistent.
Likewise, FTCS is also consistent: its accuracy is O(Δx², Δt), and taking Δt, Δx → 0 again recovers the original PDE.
Stability
The way to perform the stability analysis is to use Fourier transforms to convert from a discrete spatial domain to a continuous frequency domain. Broadly speaking, the mathematical tools we have at our disposal to analyze stability only apply to continuous functions - you can't take a derivative of a set of discrete points. By transforming to a continuous domain, we can use properties of our basis functions to examine the growth of errors over time. This process is called Von Neumann Stability Analysis.
Example: Stability analysis of FTCS
u_j^{n+1} = u_j^n − (cΔt/2Δx)(u_{j+1}^n − u_{j−1}^n)
We define an error norm by subtracting the exact solution
ε_j^n = u_j^n − u(t^n, x_j)
Since the exact solution satisfies the PDE, the errors obey the same update:
ε_j^{n+1} = ε_j^n − (cΔt/2Δx)(ε_{j+1}^n − ε_{j−1}^n)
To study the evolution of the errors, expand them in Fourier modes, ε_j^n = Σ_k ε̂^n(k) e^{i k x_j}. Each mode then evolves independently with an amplification factor
G = ε̂^{n+1}/ε̂^n = 1 − i (cΔt/Δx) sin(kΔx)
|G|² = 1 + (cΔt/Δx)² sin²(kΔx)
But that's always greater than 1 (except where sin(kΔx) = 0)! So FTCS is unconditionally unstable.
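This conclusion is easy to verify numerically. The sketch below evaluates the FTCS amplification factor G(k) = 1 − i C sin(kΔx) over all resolvable wavenumbers (the Courant number C = 0.5 is an arbitrary illustrative value):

```python
import numpy as np

# Evaluate the FTCS amplification factor G(k) = 1 - i*C*sin(k dx)
# for k*dx in [0, pi]; C = 0.5 is illustrative.
C = 0.5
k_dx = np.linspace(0.0, np.pi, 201)   # k*dx from 0 up to the Nyquist limit pi
G = 1.0 - 1j * C * np.sin(k_dx)
# |G| >= 1 everywhere, with equality only where sin(k dx) = 0:
print(np.abs(G).min(), np.abs(G).max())
```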
Performing the same analysis for the Forward Euler (upwind) algorithm, requiring |G|² ≤ 1 gives a stability condition:
0 ≤ 1 + 2(cΔt/Δx)(cΔt/Δx − 1)(1 − cos(k_xΔx)) ≤ 1
We can see that the CFL number must be
0 ≤ cΔt/Δx ≤ 1
Possible values of k_xΔx = 2πΔx/λ: the minimum of k_xΔx occurs at λ = ∞, a uniform error throughout the domain. This only occurs if the error is everywhere zero, since the boundary conditions are presumed accurate. In practice, the maximum wavelength equals twice the length of the domain:
λ_max = 2L
(k_xΔx)_min = 2πΔx/(2L) = π/J
The maximum of k_xΔx would occur at λ = 0, but this is not possible on a grid with a finite number of points. The Nyquist limit tells us that λ_min = 2Δx, which means
(k_xΔx)_max = 2πΔx/(2Δx) = π
When a stability condition is violated, what do we expect to see? The error growth is largest for the largest value of k_xΔx, which corresponds to a wavelength of twice the grid spacing. In practice, that looks like errors that blow up from point to point.
Convergence
We want to show that the numerical solution approaches the exact solution to the original PDE.
Lax's equivalence theorem states:
If an algorithm is consistent and stability requirements are satisfied, the numerical solution will converge to the solution of the original PDE.
This is why consistency and stability is so important; if you have both, then you have convergence, which tells you that the problem you're solving is the problem you are actually trying to solve.
Accuracy
The solution exists at discrete locations for multiple variables, e.g. ρ, p, u, v, …, so we want a convenient measure of solution accuracy.
The general p-norm is defined as
p-norm = L_p = [Σ_{j=1}^J |z_j|^p]^{1/p}
In practice, the most common norms that we use are the 1, 2, or ∞-norms.
L_1 = Σ|z_j| (average)
L_2 = [Σ|z_j|²]^{1/2} (variance)
L_∞ = max|z_j| (max-norm)
In practice, the 2-norm is the most rigorous definition of the error. Specifically, we define the error norm in the form
Error norm = [Σ_{j=1}^J Δx ε_j²]^{1/2} = √Δx ||ε||₂
where ε_j = u_j − u is the difference from the exact solution.
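These norms are straightforward to compute. Here is a sketch using a synthetic "numerical" solution; the exact solution, noise level, and variable names are all illustrative:

```python
import numpy as np

# Compute the discrete 1-, 2-, and max-norms of the error, plus the
# sqrt(dx)-weighted error norm defined above. The exact solution and
# the added noise are illustrative stand-ins for a real computation.
dx = 0.01
x = np.arange(0.0, 1.0, dx)
u_exact = np.sin(2.0 * np.pi * x)
u_num = u_exact + 1e-3 * np.random.default_rng(0).standard_normal(x.size)

eps = u_num - u_exact                        # eps_j = u_j - u
L1 = np.sum(np.abs(eps))                     # 1-norm
L2 = np.sqrt(np.sum(eps ** 2))               # 2-norm
Linf = np.abs(eps).max()                     # max-norm
error_norm = np.sqrt(dx * np.sum(eps ** 2))  # sqrt(dx) * ||eps||_2
print(L1, L2, Linf, error_norm)
```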
Sources of Errors?
In order to reduce our model's error, we must first understand where the errors come from.
Truncation errors - Result from the terms in the Taylor series difference approximation that are neglected. They can be reduced by using higher accuracy difference operators.
Round-off errors - Result from limited machine accuracy (the finite precision of floating-point representation). They can be reduced by using higher precision to store values, or by performing fewer calculations.
Bugs - Don't have bugs. ha. ha. ha. But really, we can minimize programming errors through practices like good planning, good comments, etc.
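The round-off point above can be demonstrated directly (the count and the value 0.1 are arbitrary illustrative choices): naively accumulating in single precision drifts much further from the exact answer than double precision does:

```python
import numpy as np

# Naively accumulate 0.1 a hundred thousand times. The exact answer is
# 10000; each float32 add rounds to ~7 decimal digits, each float64 add
# to ~16, so single precision drifts visibly further.
n = 100_000
s32 = np.float32(0.0)
s64 = 0.0
for _ in range(n):
    s32 += np.float32(0.1)
    s64 += 0.1
print(s32, s64)
```

Reordering the arithmetic (e.g. pairwise or compensated summation) is one way to "perform fewer calculations" in the sense that matters: fewer rounding errors accumulate.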