A very wide range of physical processes lead to wave motion, where signals are propagated through a medium in space and time, normally with little or no permanent movement of the medium itself. The shape of the signals may undergo changes as they travel through matter, but usually not so much that the signals cannot be recognized at some later point in space and time. Many types of wave motion can be described by the equation \(u_{tt}=\nabla\cdot (c^2\nabla u) + f\), which we will solve in the forthcoming text by finite difference methods.

# Simulation of waves on a string¶

We begin our study of wave equations by simulating one-dimensional waves on a string, say on a guitar or violin. Let the string in the undeformed state coincide with the interval \([0,L]\) on the \(x\) axis, and let \(u(x,t)\) be the displacement at time \(t\) in the \(y\) direction of a point initially at \(x\). The displacement function \(u\) is governed by the mathematical model
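$$\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}, \quad x\in (0,L),\ t\in (0,T] \tag{1}$$

$$u(x,0) = I(x), \quad x\in [0,L] \tag{2}$$

$$\frac{\partial}{\partial t}u(x,0) = 0, \quad x\in [0,L] \tag{3}$$

$$u(0,t) = 0, \quad t\in (0,T] \tag{4}$$

$$u(L,t) = 0, \quad t\in (0,T] \tag{5}$$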

The constant \(c\) and the function \(I(x)\) must be prescribed.

Equation (1) is known as the one-dimensional
*wave equation*. Since this PDE contains a second-order derivative
in time, we need *two initial conditions*. The condition
(2) specifies
the initial shape of the string, \(I(x)\), and
(3) expresses that the initial velocity of the
string is zero. In addition, PDEs need *boundary conditions*, given here as
(4) and (5). These two
conditions specify that
the string is fixed at the ends, i.e., that the displacement \(u\) is zero.

The solution \(u(x,t)\) varies in space and time and describes waves that move with velocity \(c\) to the left and right.

Sometimes we will use a more compact notation for the partial derivatives to save space:
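$$u_t = \frac{\partial u}{\partial t}, \qquad u_{tt} = \frac{\partial^2 u}{\partial t^2},$$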

and similar expressions for derivatives with respect to other variables. Then the wave equation can be written compactly as \(u_{tt} = c^2u_{xx}\).

The PDE problem (1)-(5) will now be discretized in space and time by a finite difference method.

## Discretizing the domain¶

The temporal domain \([0,T]\) is represented by a finite number of mesh points
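$$0 = t_0 < t_1 < \cdots < t_{N_t} = T$$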

Similarly, the spatial domain \([0,L]\) is replaced by a set of mesh points
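$$0 = x_0 < x_1 < \cdots < x_{N_x} = L$$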

One may view the mesh as two-dimensional in the \(x,t\) plane, consisting of points \((x_i, t_n)\), with \(i=0,\ldots,N_x\) and \(n=0,\ldots,N_t\).

### Uniform meshes¶

For uniformly distributed mesh points we can introduce the constant mesh spacings \(\Delta t\) and \(\Delta x\). We have that
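$$x_i = i\Delta x,\quad i=0,\ldots,N_x, \qquad t_n = n\Delta t,\quad n=0,\ldots,N_t$$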

We also have that \(\Delta x = x_i-x_{i-1}\), \(i=1,\ldots,N_x\), and \(\Delta t = t_n - t_{n-1}\), \(n=1,\ldots,N_t\). Figure displays a mesh in the \(x,t\) plane with \(N_t=5\), \(N_x=5\), and constant mesh spacings.

## The discrete solution¶

The solution \(u(x,t)\) is sought at the mesh points. We introduce the mesh function \(u_i^n\), which approximates the exact solution at the mesh point \((x_i,t_n)\) for \(i=0,\ldots,N_x\) and \(n=0,\ldots,N_t\). Using the finite difference method, we shall develop algebraic equations for computing the mesh function.

## Fulfilling the equation at the mesh points¶

In the finite difference method, we relax
the condition that (1) holds at all points in
the space-time domain \((0,L)\times (0,T]\) to the requirement that the PDE is
fulfilled at the *interior* mesh points only:
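$$\frac{\partial^2}{\partial t^2} u(x_i, t_n) = c^2\frac{\partial^2}{\partial x^2} u(x_i, t_n) \tag{10}$$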

for \(i=1,\ldots,N_x-1\) and \(n=1,\ldots,N_t-1\). For \(n=0\) we have the initial conditions \(u=I(x)\) and \(u_t=0\), and at the boundaries \(i=0,N_x\) we have the boundary condition \(u=0\).

## Replacing derivatives by finite differences¶

The second-order derivatives can be replaced by central differences. The most widely used difference approximation of the second-order derivative is
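$$\frac{\partial^2 u}{\partial t^2}(x_i,t_n) \approx \frac{u_i^{n+1} - 2u_i^n + u_i^{n-1}}{\Delta t^2}$$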

It is convenient to introduce the finite difference operator notation
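$$[D_tD_t u]^n_i = \frac{u_i^{n+1} - 2u_i^n + u_i^{n-1}}{\Delta t^2}$$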

A similar approximation of the second-order derivative in the \(x\) direction reads
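$$\frac{\partial^2 u}{\partial x^2}(x_i,t_n) \approx \frac{u_{i+1}^{n} - 2u_i^{n} + u_{i-1}^{n}}{\Delta x^2} = [D_xD_x u]^n_i$$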

### Algebraic version of the PDE¶

We can now replace the derivatives in (10) and get
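$$\frac{u_i^{n+1} - 2u_i^n + u_i^{n-1}}{\Delta t^2} = c^2\,\frac{u_{i+1}^{n} - 2u_i^{n} + u_{i-1}^{n}}{\Delta x^2} \tag{11}$$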

or written more compactly using the operator notation:
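$$[D_tD_t u = c^2 D_xD_x u]^n_i$$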

### Interpretation of the equation as a stencil¶

A characteristic feature of (11) is that it
involves \(u\) values from neighboring points only: \(u_i^{n+1}\),
\(u^n_{i\pm 1}\), \(u^n_i\), and \(u^{n-1}_i\). The circles in Figure illustrate such neighboring mesh points that
contribute to an algebraic equation. In this particular case, we have
sampled the PDE at the point \((2,2)\) and constructed
(11), which then involves a coupling of \(u_1^2\),
\(u_2^3\), \(u_2^2\), \(u_2^1\), and \(u_3^2\). The term *stencil* is often
used about the algebraic equation at a mesh point, and the geometry of
a typical stencil is illustrated in Figure. One also often refers to the algebraic
equations as *discrete equations*, *(finite) difference equations* or
a *finite difference scheme*.

Mesh in space and time. The circles show points connected in a finite difference equation.

### Algebraic version of the initial conditions¶

We also need to replace the derivative in the initial condition (3) by a finite difference approximation. A centered difference of the type
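$$\frac{u_i^{n+1} - u_i^{n-1}}{2\Delta t} = 0, \quad n=0$$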

seems appropriate. Writing out this equation and ordering the terms gives
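$$u_i^{-1} = u_i^{1}, \quad i=0,\ldots,N_x \tag{13}$$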

The other initial condition can be computed by
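$$u_i^0 = I(x_i), \quad i=0,\ldots,N_x$$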

## Formulating a recursive algorithm¶

We assume that \(u^n_i\) and \(u^{n-1}_i\) are available for \(i=0,\ldots,N_x\). The only unknown quantity in (11) is therefore \(u^{n+1}_i\), which we now can solve for:
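$$u_i^{n+1} = -u_i^{n-1} + 2u_i^n + C^2\left(u_{i+1}^n - 2u_i^n + u_{i-1}^n\right) \tag{14}$$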

We have here introduced the parameter
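$$C = c\,\frac{\Delta t}{\Delta x},$$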

known as the *Courant number*.

**\(C\) is the key parameter in the discrete wave equation.**

We see that the discrete version of the PDE features only one parameter, \(C\), which is therefore the key parameter, together with \(N_x\), that governs the quality of the numerical solution (see the section Analysis of the difference equations for details). Both the primary physical parameter \(c\) and the numerical parameters \(\Delta x\) and \(\Delta t\) are lumped together in \(C\). Note that \(C\) is a dimensionless parameter.

Given that \(u^{n-1}_i\) and \(u^n_i\) are known for \(i=0,\ldots,N_x\), we find new values at the next time level by applying the formula (14) for \(i=1,\ldots,N_x-1\). Figure illustrates the points that are used to compute \(u^3_2\). For the boundary points, \(i=0\) and \(i=N_x\), we apply the boundary conditions \(u_i^{n+1}=0\).

Even though sound reasoning leads up to (14), there is still a minor challenge with it that needs to be resolved. Think of the very first computational step to be made. The scheme (14) is supposed to start at \(n=1\), which means that we compute \(u^2\) from \(u^1\) and \(u^0\). Unfortunately, we do not know the value of \(u^1\), so how to proceed? A standard procedure in such cases is to apply (14) also for \(n=0\). This immediately seems strange, since it involves \(u^{-1}_i\), which is an undefined quantity outside the time mesh (and the time domain). However, we can use the initial condition (13) in combination with (14) when \(n=0\) to eliminate \(u^{-1}_i\) and arrive at a special formula for \(u_i^1\):
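$$u_i^1 = u_i^0 + \frac{1}{2}C^2\left(u_{i+1}^0 - 2u_i^0 + u_{i-1}^0\right) \tag{16}$$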

Figure illustrates how (16) connects four instead of five points: \(u^1_2\), \(u_1^0\), \(u_2^0\), and \(u_3^0\).

Modified stencil for the first time step.

We can now summarize the computational algorithm:

Compute \(u^0_i=I(x_i)\) for \(i=0,\ldots,N_x\)

Compute \(u^1_i\) by (16) for \(i=1,2,\ldots,N_x-1\) and set \(u_i^1=0\) for the boundary points given by \(i=0\) and \(i=N_x\).

For each time level \(n=1,2,\ldots,N_t-1\)

a. apply (14) to find \(u^{n+1}_i\) for \(i=1,\ldots,N_x-1\)

b. set \(u^{n+1}_i=0\) for the boundary points having \(i=0\), \(i=N_x\).

The algorithm essentially consists of moving a finite difference stencil through all the mesh points, which can be seen as an animation in a web page or a movie file.
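The three steps above can be sketched directly in plain NumPy before we turn to Devito below. This is only an illustrative sketch (the function name `solver` and its interface are our own choices), restricted to the basic problem with \(V=0\) and \(f=0\):

```python
import numpy as np

def solver(I, c, L, dt, C, T):
    """Solve u_tt = c^2*u_xx on (0,L) with u=0 at x=0,L and u_t(x,0)=0."""
    Nt = int(round(T/dt))
    t = np.linspace(0, Nt*dt, Nt+1)    # mesh points in time
    dx = dt*c/C                        # dx from the Courant number C
    Nx = int(round(L/dx))
    x = np.linspace(0, L, Nx+1)        # mesh points in space
    C2 = C**2

    u = np.zeros(Nx+1)      # solution at the new time level n+1
    u_n = np.zeros(Nx+1)    # solution at time level n
    u_nm1 = np.zeros(Nx+1)  # solution at time level n-1

    u_n[:] = I(x)           # step 1: u^0_i = I(x_i)

    # Step 2: special formula (16) for the first time step
    u[1:-1] = u_n[1:-1] + 0.5*C2*(u_n[2:] - 2*u_n[1:-1] + u_n[:-2])
    u[0] = u[Nx] = 0
    u_nm1, u_n, u = u_n, u, u_nm1   # rotate the three time levels

    # Step 3: the general stencil (14) for n = 1,...,Nt-1
    for n in range(1, Nt):
        u[1:-1] = (-u_nm1[1:-1] + 2*u_n[1:-1]
                   + C2*(u_n[2:] - 2*u_n[1:-1] + u_n[:-2]))
        u[0] = u[Nx] = 0
        u_nm1, u_n, u = u_n, u, u_nm1
    return u_n, x, t
```

With \(C=1\) this sketch reproduces, e.g., the standing wave \(\sin(\pi x/L)\cos(\pi ct/L)\) at the mesh points to machine precision.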

## Sketch of an implementation¶

We start by defining some constants that will be used throughout our Devito code.

```
import numpy as np
# Given mesh points as arrays x and t (x[i], t[n]),
# constant c and function I for initial condition
x = np.linspace(0, 2, 101)
t = np.linspace(0, 2, 101)
c = 1
I = lambda x: np.sin(x)
dx = x[1] - x[0]
dt = t[1] - t[0]
C = c*dt/dx # Courant number
Nx = len(x)-1
Nt = len(t)-1
C2 = C**2 # Help variable in the scheme
L = 2.
```

Next, we define our 1D computational grid and create a function `u` as a symbolic `devito.TimeFunction`. We need to specify the `space_order` as 2, since our wave equation involves second-order derivatives with respect to \(x\). Similarly, we specify the `time_order` as 2, as our equation involves second-order derivatives with respect to \(t\). Setting these parameters allows us to use `u.dx2` and `u.dt2`.

```
from devito import Grid, TimeFunction
# Initialise `u` for space and time order 2, using initialisation function I
grid = Grid(shape=(Nx+1,), extent=(L,))
u = TimeFunction(name='u', grid=grid, time_order=2, space_order=2)
u.data[:,:] = I(x[:])
```

Now that we have initialised `u`, we can solve our wave equation for the unknown quantity \(u^{n+1}_i\), using second-order central differences in time and space and solving for the forward stencil point.

```
from devito import Constant, Eq, solve
# Set up wave equation and solve for forward stencil point in time
pde = (1/c**2)*u.dt2 - u.dx2
stencil = Eq(u.forward, solve(pde, u.forward))
print("LHS: %s" % stencil.lhs)
print("RHS: %s" % stencil.rhs)
```

```
LHS: u(t + dt, x)
RHS: 1.0*dt**2*(-2.0*u(t, x)/h_x**2 + u(t, x - h_x)/h_x**2 + u(t, x + h_x)/h_x**2 + 2.0*u(t, x)/dt**2 - 1.0*u(t - dt, x)/dt**2)
```

Great! From these print statements, we can see that Devito has taken the wave equation in (1) and solved it for \(u^{n+1}_i\), giving us equation (14). Note that `dx` is denoted as `h_x`, while `u(t, x)`, `u(t, x - h_x)` and `u(t, x + h_x)` denote the equivalent of \(u^{n}_{i}\), \(u^{n}_{i-1}\) and \(u^{n}_{i+1}\) respectively.

We also need to create a separate stencil for the first timestep, where we substitute \(u^{1}_i\) for \(u^{-1}_i\), as given in (13).

```
stencil_init = stencil.subs(u.backward, u.forward)
```

Now we can create expressions for our boundary conditions and build the operator. The results are plotted below.

```
#NBVAL_IGNORE_OUTPUT
from devito import Operator
t_s = grid.stepping_dim
# Boundary conditions
bc = [Eq(u[t_s+1, 0], 0)]
bc += [Eq(u[t_s+1, Nx], 0)]
# Defining one Operator for initial timestep and one for the rest
op_init = Operator([stencil_init]+bc)
op = Operator([stencil]+bc)
op_init.apply(time_M=1, dt=dt)
op.apply(time_m=1,time_M=Nt, dt=dt)
```

```
Data type float64 of runtime value `dt` does not match the Constant data type <class 'numpy.float32'>
```

```
Operator `Kernel` run in 0.01 s
```

```
Data type float64 of runtime value `dt` does not match the Constant data type <class 'numpy.float32'>
```

```
Operator `Kernel` run in 0.01 s
```

```
PerformanceSummary([(PerfKey(name='section0', rank=None),
PerfEntry(time=2.8000000000000003e-05, gflopss=0.0, gpointss=0.0, oi=0.0, ops=0, itershapes=[]))])
```

We can plot our results using `matplotlib`:

```
import matplotlib.pyplot as plt
plt.plot(x, u.data[-1])
plt.xlabel('x')
plt.ylabel('u')
plt.show()
```

# Verification¶

Before implementing the algorithm, it is convenient to add a source term to the PDE (1), since that gives us more freedom in finding test problems for verification. Physically, a source term acts as a generator for waves in the interior of the domain.

## A slightly generalized model problem¶

We now address the following extended initial-boundary value problem for one-dimensional wave phenomena:

Sampling the PDE at \((x_i,t_n)\) and using the same finite difference approximations as above yields
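$$[D_tD_t u = c^2 D_xD_x u + f]^n_i$$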

Writing this out and solving for the unknown \(u^{n+1}_i\) results in
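$$u_i^{n+1} = -u_i^{n-1} + 2u_i^n + C^2\left(u_{i+1}^n - 2u_i^n + u_{i-1}^n\right) + \Delta t^2 f_i^n \tag{23}$$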

The equation for the first time step must be rederived. The discretization of the initial condition \(u_t = V(x)\) at \(t=0\) becomes
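$$\frac{u_i^1 - u_i^{-1}}{2\Delta t} = V_i \quad\Rightarrow\quad u_i^{-1} = u_i^1 - 2\Delta t\, V_i,$$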

which, when inserted in (23) for \(n=0\), gives the special formula
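$$u_i^1 = u_i^0 + \Delta t\, V_i + \frac{1}{2}C^2\left(u_{i+1}^0 - 2u_i^0 + u_{i-1}^0\right) + \frac{1}{2}\Delta t^2 f_i^0$$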

## Using an analytical solution of physical significance¶

Many wave problems feature sinusoidal oscillations in time and space. For example, the original PDE problem (1)-(5) allows an exact solution
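$$u_e(x,t) = A\sin\left(\frac{\pi}{L}x\right)\cos\left(\frac{\pi}{L}ct\right) \tag{25}$$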

This \(u_e\) fulfills the PDE with \(f=0\), boundary conditions \(u_e(0,t)=u_e(L,t)=0\), as well as initial conditions \(I(x)=A\sin\left(\frac{\pi}{L}x\right)\) and \(V=0\).

**How to use exact solutions for verification.**

It is common to use such exact solutions of physical interest to verify implementations. However, the numerical solution \(u^n_i\) will only be an approximation to \(u_e(x_i,t_n)\). We have no knowledge of the precise size of the error in this approximation, and therefore we can never know if discrepancies between \(u^n_i\) and \(u_e(x_i,t_n)\) are caused by mathematical approximations or programming errors. In particular, if plots of the computed solution \(u^n_i\) and the exact one (25) look similar, many are tempted to claim that the implementation works. However, even if color plots look nice and the accuracy is “deemed good”, there can still be serious programming errors present!

The only way to use exact physical solutions like (25) for serious and thorough verification is to run a series of simulations on finer and finer meshes, measure the integrated error in each mesh, and from this information estimate the empirical convergence rate of the method.

An introduction to the computing of convergence rates is given in Section 3.1.6 in [Langtangen_decay]. There is also a detailed example on computing convergence rates in the verification section of the Vibration ODEs chapter.

In the present problem, one expects the method to have a convergence rate of 2 (see the section Analysis of the difference equations), so if the computed rates are close to 2 on a sufficiently fine mesh, we have good evidence that the implementation is free of programming mistakes.

## Manufactured solution and estimation of convergence rates¶

### Specifying the solution and computing corresponding data¶

One problem with the exact solution (25) is
that it requires a simplification (\(V=0\), \(f=0\)) of the implemented problem
(17)-(21). An advantage of using
a *manufactured solution* is that we can test all terms in the
PDE problem. The idea of this approach is to set up some chosen
solution and fit the source term, boundary conditions, and initial
conditions to be compatible with the chosen solution.
Given that our boundary conditions in the implementation are
\(u(0,t)=u(L,t)=0\), we must choose a solution that fulfills these
conditions. One example is

Inserted in the PDE \(u_{tt}=c^2u_{xx}+f\) we get

The initial conditions become

### Defining a single discretization parameter¶

To verify the code, we compute the convergence rates in a series of simulations, letting each simulation use a finer mesh than the previous one. Such empirical estimation of convergence rates relies on an assumption that some measure \(E\) of the numerical error is related to the discretization parameters through
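$$E = C_t\Delta t^r + C_x\Delta x^p,$$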

where \(C_t\), \(C_x\), \(r\), and \(p\) are constants. The constants \(r\) and
\(p\) are known as the *convergence rates* in time and space,
respectively. From the accuracy in the finite difference
approximations, we expect \(r=p=2\), since the error terms are of order
\(\Delta t^2\) and \(\Delta x^2\). This is confirmed by truncation error
analysis and other types of analysis.

By using an exact solution of the PDE problem, we will next compute the error measure \(E\) on a sequence of refined meshes and see if the rates \(r=p=2\) are obtained. We will not be concerned with estimating the constants \(C_t\) and \(C_x\), simply because we are not interested in their values.

It is advantageous to introduce a single discretization parameter \(h=\Delta t=\hat c \Delta x\) for some constant \(\hat c\). Since \(\Delta t\) and \(\Delta x\) are related through the Courant number, \(\Delta t = C\Delta x/c\), we set \(h=\Delta t\), and then \(\Delta x = hc/C\). Now the expression for the error measure is greatly simplified:

### Computing errors¶

We choose an initial discretization parameter \(h_0\) and run experiments with decreasing \(h\): \(h_i=2^{-i}h_0\), \(i=1,2,\ldots,m\). Halving \(h\) in each experiment is not necessary, but it is a common choice. For each experiment we must record \(E\) and \(h\). Standard choices of error measure are the \(\ell^2\) and \(\ell^\infty\) norms of the error mesh function \(e^n_i\):
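$$E = \|e\|_{\ell^2} = \left(\Delta t\,\Delta x \sum_{n=0}^{N_t}\sum_{i=0}^{N_x}\left(e^n_i\right)^2\right)^{\frac{1}{2}}, \qquad E = \|e\|_{\ell^\infty} = \max_{i,n} |e^n_i|$$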

In Python, one can compute \(\sum_{i}(e^{n}_i)^2\) at each time step and accumulate the value in some sum variable, say `e2_sum`. At the final time step one can do `sqrt(dt*dx*e2_sum)`. For the \(\ell^\infty\) norm one must compare the maximum error at a time level (`e.max()`) with the global maximum over the time domain: `e_max = max(e_max, e.max())`.

An alternative error measure is to use a spatial norm at one time step only, e.g., the end time \(T\) (\(n=N_t\)):
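$$E = \left(\Delta x\sum_{i=0}^{N_x}\left(e^{N_t}_i\right)^2\right)^{\frac{1}{2}}$$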

The important point is that the error measure (\(E\)) for the simulation is represented by a single number.
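In code, the accumulation described above can be sketched as follows (the helper function is illustrative; the name `e2_sum` follows the text):

```python
import numpy as np

def accumulate_errors(u, u_e, e2_sum, e_max):
    """Update error measures with the contribution from one time level.

    u, u_e: numerical and exact solution arrays at the current time level.
    """
    e = u_e - u
    e2_sum += np.sum(e**2)               # accumulates for the l2 norm
    e_max = max(e_max, np.abs(e).max())  # running max for the l-infinity norm
    return e2_sum, e_max

# After the time loop, with mesh spacings dt and dx:
#   E_l2 = np.sqrt(dt*dx*e2_sum)
#   E_linf = e_max
```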

### Computing rates¶

Let \(E_i\) be the error measure in experiment (mesh) number \(i\) (not to be confused with the spatial index \(i\)) and let \(h_i\) be the corresponding discretization parameter (\(h\)). With the error model \(E_i = Dh_i^r\), we can estimate \(r\) by comparing two consecutive experiments:
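$$E_i = Dh_i^{\,r}, \qquad E_{i+1} = Dh_{i+1}^{\,r}$$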

Dividing the two equations eliminates the (uninteresting) constant \(D\). Thereafter, solving for \(r\) yields
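$$r = \frac{\ln (E_{i+1}/E_i)}{\ln (h_{i+1}/h_i)}$$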

Since \(r\) depends on \(i\), i.e., which simulations we compare, we add an index to \(r\): \(r_i\), where \(i=0,\ldots,m-2\), if we have \(m\) experiments: \((h_0,E_0),\ldots,(h_{m-1}, E_{m-1})\).

In our present discretization of the wave equation we expect \(r=2\), and hence the \(r_i\) values should converge to 2 as \(i\) increases.
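The \(r_i\) values can be computed with a few lines of NumPy (a sketch; the function name is our own):

```python
import numpy as np

def convergence_rates(h, E):
    """Estimate r_i = ln(E_{i+1}/E_i)/ln(h_{i+1}/h_i) for consecutive experiments."""
    h, E = np.asarray(h), np.asarray(E)
    return np.log(E[1:]/E[:-1])/np.log(h[1:]/h[:-1])
```

For an error model \(E=Dh^2\), the returned rates are all (close to) 2.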

## Constructing an exact solution of the discrete equations¶

With a manufactured or known analytical solution, as outlined above, we can estimate convergence rates and see if they have the correct asymptotic behavior. Experience shows that this is a quite good verification technique in that many common bugs will destroy the convergence rates. A significantly better test though, would be to check that the numerical solution is exactly what it should be. This will in general require exact knowledge of the numerical error, which we do not normally have (although we in the section Analysis of the difference equations establish such knowledge in simple cases). However, it is possible to look for solutions where we can show that the numerical error vanishes, i.e., the solution of the original continuous PDE problem is also a solution of the discrete equations. This property often arises if the exact solution of the PDE is a lower-order polynomial. (Truncation error analysis leads to error measures that involve derivatives of the exact solution. In the present problem, the truncation error involves 4th-order derivatives of \(u\) in space and time. Choosing \(u\) as a polynomial of degree three or less will therefore lead to vanishing error.)

We shall now illustrate the construction of an exact solution to both the PDE itself and the discrete equations. Our chosen manufactured solution is quadratic in space and linear in time. More specifically, we set
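$$u_e(x,t) = x(L-x)\left(1 + \frac{1}{2}t\right),$$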

which by insertion in the PDE leads to \(f(x,t)=2(1+{\frac{1}{2}}t)c^2\). This \(u_e\) fulfills the boundary conditions \(u=0\) and demands \(I(x)=x(L-x)\) and \(V(x)={\frac{1}{2}}x(L-x)\).
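The fitted source term and initial data can be double-checked symbolically, e.g. with `sympy` (a quick sanity check, not part of the solver):

```python
import sympy as sp

x, t, c, L = sp.symbols('x t c L')
u_e = x*(L - x)*(1 + sp.Rational(1, 2)*t)   # chosen manufactured solution

# Fit the source term from the PDE u_tt = c^2*u_xx + f
f = sp.simplify(sp.diff(u_e, t, 2) - c**2*sp.diff(u_e, x, 2))

I = u_e.subs(t, 0)               # initial shape I(x)
V = sp.diff(u_e, t).subs(t, 0)   # initial velocity V(x)

print(f)   # the fitted source term, 2*(1 + t/2)*c**2 in some equivalent form
print(I)   # x*(L - x)
print(V)   # x*(L - x)/2
```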

To realize that the chosen \(u_e\) is also an exact solution of the discrete equations, we first remind ourselves that \(t_n=n\Delta t\) so that

Hence,

Similarly, we get that

Now, \(f^n_i = 2(1+{\frac{1}{2}}t_n)c^2\), which results in

Moreover, \(u_e(x_i,0)=I(x_i)\), \(\partial u_e/\partial t = V(x_i)\) at \(t=0\), and \(u_e(x_0,t)=u_e(x_{N_x},t)=0\). Also the modified scheme for the first time step is fulfilled by \(u_e(x_i,t_n)\).

Therefore, the exact solution \(u_e(x,t)=x(L-x)(1+t/2)\) of the PDE
problem is also an exact solution of the discrete problem. This means
that we know beforehand what numbers the numerical algorithm should
produce. We can use this fact to check that the computed \(u^n_i\)
values from an implementation equals \(u_e(x_i,t_n)\), within machine
precision. This result is valid *regardless of the mesh spacings*
\(\Delta x\) and \(\Delta t\)! Nevertheless, there might be stability
restrictions on \(\Delta x\) and \(\Delta t\), so the test can only be run
for a mesh that is compatible with the stability criterion (which in
the present case is \(C\leq 1\), to be derived later).
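Such a machine-precision test can be sketched in plain NumPy directly from the scheme with source term and the modified first step (names of our own choosing; any mesh with \(C\leq 1\) should work):

```python
import numpy as np

def test_quadratic(Nx=7, C=0.8, T=1.3, c=1.5, L=2.5):
    """Return max deviation of the computed solution from u_e = x(L-x)(1 + t/2)."""
    dx = L/Nx
    dt = C*dx/c
    Nt = int(round(T/dt))
    x = np.linspace(0, L, Nx+1)
    C2 = C**2

    u_exact = lambda x, t: x*(L - x)*(1 + 0.5*t)
    f = lambda x, t: 2*c**2*(1 + 0.5*t)   # fitted source term
    V = lambda x: 0.5*x*(L - x)           # initial velocity

    u_nm1 = u_exact(x, 0)                 # u^0_i = I(x_i)
    # Modified first step, including V and f
    u_n = np.zeros(Nx+1)
    u_n[1:-1] = (u_nm1[1:-1] + dt*V(x[1:-1])
                 + 0.5*C2*(u_nm1[2:] - 2*u_nm1[1:-1] + u_nm1[:-2])
                 + 0.5*dt**2*f(x[1:-1], 0))
    u_n[0] = u_n[-1] = 0
    # General stencil with source term
    for n in range(1, Nt):
        u = np.zeros(Nx+1)
        u[1:-1] = (-u_nm1[1:-1] + 2*u_n[1:-1]
                   + C2*(u_n[2:] - 2*u_n[1:-1] + u_n[:-2])
                   + dt**2*f(x[1:-1], n*dt))
        u[0] = u[-1] = 0
        u_nm1, u_n = u_n, u
    return np.abs(u_n - u_exact(x, Nt*dt)).max()
```

The returned deviation should be at the level of machine precision, regardless of the (stable) mesh used.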

**Notice.**

A product of quadratic or linear expressions in the various independent variables, as shown above, will often fulfill both the PDE problem and the discrete equations, and can therefore be very useful solutions for verifying implementations.

However, for 1D wave equations of the type \(u_{tt}=c^2u_{xx}\), we shall see that there is always another, much more powerful way of generating exact solutions: simply setting \(C=1\) (as shown in the section Analysis of the difference equations).