
Numerical PDE II: Pseudo-Spectral Methods

Burgers’ Equation

In the previous lecture, we analyzed finite difference methods for the linear advection equation,

\begin{align} \frac{\partial u}{\partial t} + c \frac{\partial u}{\partial x} = 0, \end{align}

and studied their stability and accuracy using von Neumann analysis and modified equations.

Now, we turn to a nonlinear extension: the Burgers’ equation. By replacing the constant wave speed $c$ with the solution variable itself, the equation becomes

\begin{align} \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = 0. \end{align}

This is the inviscid Burgers’ equation, a canonical model for nonlinear advection.

# Parameters

l  = 1.0   # domain size
dt = 0.01  # time step

nx = 100   # number of spatial points
nt = 20    # number of time steps
import numpy as np

X, dx = np.linspace(0, l, nx, endpoint=False, retstep=True)  # spatial grid
U0    = np.sin(2*np.pi * X)  # initial condition
# Forward Time Centered Space (FTCS) scheme

def FTCS(U0, dx, dt, n):
    U = [U0]
    for _ in range(n):
        U0    = U[-1]
        sigma = U0 * dt / dx
        U1    = U0 - sigma * (np.roll(U0,-1) - np.roll(U0,1)) / 2
        U.append(U1)
    return np.array(U)
UFTCS = FTCS(U0, dx, dt, nt) # numerical solution
from matplotlib import pyplot as plt

plt.plot(X, U0,              label="Initial Condition")
plt.plot(X, UFTCS[10], '.-', label=f'$t$={10*dt}')
plt.plot(X, UFTCS[14], '.-', label=f'$t$={14*dt}')
plt.plot(X, UFTCS[18], '.-', label=f'$t$={18*dt}')
plt.xlabel('x')
plt.ylabel('u')
plt.legend()
from matplotlib.animation import ArtistAnimation
from IPython.display import HTML
from tqdm import tqdm

def animate(X, U):
    fig, ax = plt.subplots(1,1)
    ax.set_xlabel('x')
    ax.set_ylabel('u')

    frames = []
    for n in tqdm(range(len(U))):
        f = ax.plot(X, U[n], 'C0.-', animated=True)
        frames.append(f)
    plt.close(fig)  # close so the static figure is not displayed

    return ArtistAnimation(fig, frames, interval=40)
anim = animate(X, UFTCS)

HTML(anim.to_html5_video())  # display animation
# anim.save('FTCS.mp4')        # save animation
# HANDSON: Plot other snapshots and study the oscillations.
# HANDSON: Change the parameters and initial conditions.

The Burgers’ equation is important because:

  • It demonstrates wave steepening: smooth initial profiles evolve into discontinuous shocks.

  • It highlights the limitations of finite difference schemes: dispersion and oscillations arise near shocks.

As with the linear advection equation, the FTCS scheme is unconditionally unstable for Burgers’ equation. Even with very small time steps, oscillations grow and the solution eventually blows up.

The numerical solution initially appears to evolve correctly. However, once the shock starts to form, the numerical solution quickly develops high-frequency oscillations near the shock. These oscillations grow exponentially in time, destroying the solution.

There is no convergence---the method is unstable.

# HANDSON: modify the upwind scheme from lecture 9 to solve the
#          Burgers' equation

def upwind(U0, dx, dt, n):
    U = [U0]
    for _ in range(n):
        U0    = U[-1]
        sigma = U0 * dt / dx
        U1    = U0 - sigma * np.where(
            sigma > 0,
            U0 - np.roll(U0,1),
            np.roll(U0,-1) - U0,
        )
        U.append(U1)
    return np.array(U)
dt = 0.01  # time step
nt = 100   # number of time steps

Uupwind = upwind(U0, dx, dt, nt) # numerical solution
plt.plot(X, U0,                label="Initial Condition")
plt.plot(X, UFTCS[18],   '.-', label="FTCS")
plt.plot(X, Uupwind[18], '.-', label="Upwind")
plt.xlabel('x')
plt.ylabel('u')
plt.legend()
anim = animate(X, Uupwind)

HTML(anim.to_html5_video())  # display animation
# anim.save('upwind.mp4')      # save animation
# HANDSON: Plot other snapshots and study the oscillations.
# HANDSON: Change the parameters and initial conditions.
#          Specifically, try out dt > 0.01.

For the upwind method, the maximum $\Delta t$ that supports a stable solution is limited by the Courant-Friedrichs-Lewy (CFL) condition, i.e., at each grid point, we require the Courant number $\sigma = u\,\Delta t/\Delta x \le 1$.
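As a concrete check, we can evaluate this bound directly for the grid and initial condition used above (a minimal sketch; the variable names only mirror those in the code cells):

```python
import numpy as np

# CFL bound for the upwind scheme: dt <= dx / max|u|.
# Grid and initial condition as above: nx = 100 points on a unit
# domain with u0 = sin(2*pi*x), so max|u| = 1 on this grid.
l, nx = 1.0, 100
X, dx = np.linspace(0, l, nx, endpoint=False, retstep=True)
umax  = np.max(np.abs(np.sin(2*np.pi * X)))
dtmax = dx / umax
print(dtmax)   # 0.01, consistent with instability for dt > 0.01
```

This explains the hands-on suggestion above: with these parameters, any `dt` larger than 0.01 violates the CFL condition somewhere on the grid.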

The scheme captures shocks robustly and monotonically. The numerical viscosity (implicit in the numerical method) spreads shocks over several grid cells. However, smooth regions damp out over time due to numerical diffusion.

The convergence rate is first-order everywhere.

Next, let’s explore the Lax-Wendroff Scheme.

# Lax-Wendroff scheme

def LW(U0, dx, dt, n):
    U = [U0]
    for _ in range(n):
        U0    = U[-1]
        sigma = U0 * dt / dx
        U1    = U0 - sigma       * (np.roll(U0,-1)        - np.roll(U0,1)) / 2 \
                   + sigma*sigma * (np.roll(U0,-1) - 2*U0 + np.roll(U0,1)) / 2
        U.append(U1)
    return np.array(U)
dt = 0.01  # time step
nt = 100   # number of time steps

ULW = LW(U0, dx, dt, nt) # numerical solution
plt.plot(X, U0,                label="Initial Condition")
plt.plot(X, UFTCS[18],   '.-', label="FTCS")
plt.plot(X, Uupwind[18], '.-', label="Upwind")
plt.plot(X, ULW[18],     '.-', label="Lax-Wendroff")
plt.xlabel('x')
plt.ylabel('u')
plt.legend()
anim = animate(X, ULW)

HTML(anim.to_html5_video())  # display animation
# anim.save('LW.mp4')      # save animation

The Lax-Wendroff scheme, like the upwind scheme, is stable under the CFL condition.

The scheme has excellent accuracy for smooth solutions. However, Gibbs-like oscillations appear at shocks. It is non-monotone but conservative.

The convergence rate is second-order for smooth flows, first-order for discontinuous ones.

Weak Solutions

The inviscid Burgers’ equation develops discontinuities (shocks) in finite time, even from smooth initial conditions. For example, the initial sine wave we saw earlier steepens until its slope blows up.

This leads to a breakdown of classical solutions. Namely, the PDE cannot be satisfied pointwise after a discontinuity forms. To make sense of solutions beyond shock formation, we introduce the concept of a weak solution.

Multiply the PDE by a smooth test function $\phi(x,t)$, integrate over space and time, and integrate by parts:

\begin{align} \int_0^\infty \int_{0}^\ell \left(u\frac{\partial\phi}{\partial t} + \frac{1}{2}u^2\frac{\partial\phi}{\partial x}\right) dx\, dt = 0. \end{align}

A solution satisfying the above equation for all smooth test functions $\phi$ is called a weak solution. Note that this definition still makes sense if $u$ is discontinuous; thus, weak solutions allow shocks.
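Applying this weak formulation across an isolated discontinuity yields the Rankine-Hugoniot jump condition for the shock speed $s$; for Burgers’ flux $f(u) = u^2/2$ a standard computation gives

```latex
\begin{align}
  s = \frac{f(u_L) - f(u_R)}{u_L - u_R}
    = \frac{\tfrac{1}{2}\left(u_L^2 - u_R^2\right)}{u_L - u_R}
    = \frac{u_L + u_R}{2},
\end{align}
```

so a shock connecting a left state $u_L$ to a right state $u_R$ propagates at the average of the two states.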

Not all weak solutions are physically relevant. The entropy condition rules out unphysical “expansion shocks”. For Burgers, this requires that characteristics enter the shock (no information leaves it).

Physical solutions must be dissipative: they do not create new information. Mathematically, this is enforced by requiring that solutions satisfy an entropy inequality in addition to the PDE.

For Burgers’ equation:

  • Define an entropy function $\eta(u) = u^2/2$.

  • Associated entropy flux: $q(u) = u^3/3$.

The entropy condition requires

\begin{align} \eta(u)_t + q(u)_x \le 0, \end{align}

in the weak sense.

This inequality expresses that the convex entropy function $\eta$ can only be dissipated, never created (mirroring the thermodynamic statement that physical entropy cannot decrease, so no spurious “order” appears). For the Burgers’ equation (and many fluid dynamic equations), it rules out expansion shocks, where characteristics diverge from the shock. Instead, it selects compressive shocks, where characteristics converge into the shock.
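For Burgers’ equation, this selection principle can be stated as the Lax entropy condition: characteristics on both sides must run into the shock, where $s$ is the shock speed (for Burgers, $s = (u_L+u_R)/2$ by the Rankine-Hugoniot condition):

```latex
\begin{align}
  f'(u_L) = u_L \;>\; s = \frac{u_L + u_R}{2} \;>\; u_R = f'(u_R),
\end{align}
```

which holds precisely when $u_L > u_R$ (a compressive jump); an expansion jump with $u_L < u_R$ violates it.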

Spectral Methods

In the previous section, we introduced the concept of weak solutions to handle shocks in Burgers’ equation. The weak formulation required us to test the PDE against smooth functions and integrate by parts, which naturally shifted the focus from pointwise derivatives to integral relations over function spaces.

This perspective leads directly to spectral methods.

In spectral methods, both the solution $u$ and the test functions $v$ are represented as expansions in orthogonal basis functions (such as Fourier modes or orthogonal polynomials). By projecting the PDE onto these basis functions, we obtain a system of algebraic equations for the expansion coefficients.

Spectral methods are a powerful class of numerical techniques for solving PDEs by representing the solution as a global expansion in orthogonal basis functions. Unlike finite difference or finite volume methods (next lecture), which approximate derivatives locally on stencils, spectral methods approximate the entire function globally.

For smooth solutions, this approach achieves spectral accuracy. The error decreases faster than any power of the grid spacing, often exponentially, as the number of modes increases.
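This can be seen in a few lines (the smooth periodic test function $e^{\sin x}$ and the resolutions are my choices for illustration): the maximum error of the FFT derivative drops to machine precision at modest resolution:

```python
import numpy as np
from numpy.fft import fft, ifft, fftfreq

def spectral_error(n):
    """Max error of the FFT derivative of a smooth periodic function."""
    x  = np.linspace(0, 2*np.pi, n, endpoint=False)
    u  = np.exp(np.sin(x))                   # smooth, 2*pi-periodic
    k  = fftfreq(n, d=1/n)                   # integer wavenumbers
    du = ifft(1j * k * fft(u)).real          # spectral derivative
    return np.max(np.abs(du - np.cos(x)*u))  # exact: u' = cos(x) e^{sin x}

for n in (8, 16, 32):
    print(n, spectral_error(n))
```

Doubling the resolution roughly squares the accuracy, unlike the fixed algebraic rates of finite difference stencils.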

The advantages of using spectral methods include:

  1. High Accuracy (Spectral Convergence):

    • For smooth problems, spectral methods can achieve exponential convergence.

    • A small number of modes can resolve very fine details compared to high-resolution finite difference methods.

  2. Efficient Computation:

    • Differentiation and integration become algebraic operations in spectral space.

    • With Fast Fourier Transforms (FFT), these operations cost $\mathcal{O}(N \log N)$, making them practical for large problems.

  3. Conservation Properties:

    • With Galerkin truncation, spectral methods naturally conserve invariants such as energy and enstrophy.

    • This is especially important in fluid dynamics and turbulence modeling.

  4. Applications in Science and Engineering:

    • Astrophysical fluid dynamics.

    • Weather and ocean modeling.

    • Turbulence simulations.

    • Quantum mechanics and wave propagation.

Core Ideas

We approximate a function $u(x)$ by a truncated series of orthogonal basis functions. For example, in the Fourier spectral method on a periodic domain:

\begin{align} u(x) \approx \sum_{k=-N/2}^{N/2} \hat{u}_k e^{ikx}, \end{align}

where $\hat{u}_k$ are Fourier coefficients.

Differentiation is exact in spectral space:

\begin{align} \frac{\partial u}{\partial x} = \sum_{k=-N/2}^{N/2} ik\,\hat{u}_k e^{ikx}. \end{align}

Thus, spectral methods can be seen as a natural extension of high-order finite differences: instead of improving accuracy with wider local stencils, we represent the entire solution in a global basis where differentiation is exact.

Periodic vs. Non-Periodic Domains

  • Periodic problems: Fourier modes $e^{ikx}$ form the natural orthogonal basis.

  • Non-periodic problems: Chebyshev or Legendre polynomials are often used, maintaining spectral accuracy while accommodating boundary conditions.

Strengths and Limitations

  • Strengths:

    • Exponential convergence for smooth problems.

    • Captures global solution features with fewer degrees of freedom.

    • Efficient with FFT-based implementations.

  • Limitations:

    • Require smoothness: discontinuities produce oscillations (Gibbs phenomenon).

    • More complex for non-periodic geometries.

    • Global coupling: each mode influences the entire domain, which can make very large systems computationally heavy.

Spectral Galerkin vs. Pseudo-Spectral Methods

Spectral methods come in two closely related formulations. Both begin by expanding the solution in a truncated set of orthogonal basis functions.

The Spectral Galerkin Method starts with the weak form of the PDE. The governing equations are multiplied by a test function (from the same basis set) and integrated over the domain. The residual is then required to vanish in this weak sense:

\begin{align} \int \frac{\partial u}{\partial t} v\, dx + \int \frac{\partial f(u)}{\partial x} v\, dx = 0. \end{align}

With the Fourier basis on a periodic domain, this reduces to an exact algebraic system for the modal coefficients $\hat{u}_k$. This approach is mathematically elegant and highlights the connection to variational formulations.

The Pseudo-Spectral Method, on the other hand, computes all derivatives in Fourier space but evaluates nonlinear terms directly in physical space.

The algorithm is very straightforward:

  1. Transform function to Fourier space using FFT.

  2. Compute derivatives in Fourier space.

  3. Inverse transform the derivatives back to physical space.

  4. Compute all nonlinear terms, e.g., $u\,\partial u/\partial x$, in physical space.

  5. Advance grid points in time.

This avoids expensive convolutions in Fourier space, since nonlinear products are simple in real space. The computational complexity is $\mathcal{O}(N \log N)$ with the FFT.

In this lecture, we will adopt the pseudo-spectral method to solve the Burgers’ equation.

Pseudo-Spectral Method for Burgers’ Equation (Method of Lines)

We now apply the pseudo-spectral method to the inviscid Burgers’ equation:

\begin{align} \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = 0, \qquad x \in [0, 2\pi], \quad u(x,0) = u_0(x). \end{align}

Represent the solution as a truncated Fourier series:

\begin{align} u(x,t) \approx \sum_{k=-N/2}^{N/2-1} \hat{u}_k(t) e^{ikx}, \end{align}

where $k$ are integer wavenumbers and $\hat{u}_k(t)$ are the time-dependent Fourier coefficients.

Differentiation is exact in Fourier space:

\begin{align} \frac{\partial u}{\partial x} \longleftrightarrow ik\,\hat{u}_k. \end{align}

Once the derivatives are obtained, each grid point follows a system of ODEs. We integrate this ODE system with a high-order time-stepping scheme such as fourth-order Runge-Kutta (RK4).

# Parameters

l  = 1.0   # domain size
dt = 0.01  # time step

nx = 100   # number of spatial points
nt = 31    # number of time steps
# Setup the grid in both spatial and spectral domains

from numpy.fft import fftfreq

X, dx = np.linspace(0, l, nx, endpoint=False, retstep=True)  # spatial grid
K     = 2*np.pi * fftfreq(nx, d=l/nx)  # spectral "wavenumber" grid 

U0    = np.sin(2*np.pi * X)  # initial condition
# Implement spectral derivative

from numpy.fft import fft, ifft

def ddx(u):
    U    = fft(u)
    ikU  = 1j * K * U
    dudx = ifft(ikU)
    return dudx.real
# Test the spectral derivative

dU0dxref = 2*np.pi * np.cos(2*np.pi*X)
dU0dx    = ddx(U0)

plt.plot(X, dU0dxref)
plt.plot(X, dU0dx, '--')

print(np.max(abs(dU0dx-dU0dxref)))
# Copy our previous RK4 implementation

def RK4(f, u0, t0, dt, n):
    U = [np.array(u0)]
    T = [np.array(t0)]
    for _ in range(n):
        k1 = f(U[-1]                )
        k2 = f(U[-1] + 0.5 * dt * k1)
        k3 = f(U[-1] + 0.5 * dt * k2)
        k4 = f(U[-1] +       dt * k3)
        U.append(U[-1] + dt * (k1/6 + k2/3 + k3/3 + k4/6))
        T.append(T[-1] + dt)
    return np.array(U), np.array(T)
# The right hand side of the Burgers' equation

def rhs(u):
    return - u * ddx(u)
# That's it!
# Here is a pseudo-spectral implementation of the Burgers' equation

Uspec, Tspec = RK4(rhs, U0, 0, dt, nt)
# Plot the result

plt.plot(X, Uspec[::4].T)
plt.xlabel(r"$x$")
plt.ylabel(r"$u$")
# Animate the result

anim = animate(X, Uspec)

HTML(anim.to_html5_video())  # display animation
# anim.save('spec.mp4')        # save animation
# HANDSON: Plot other snapshots and study the oscillations.
# HANDSON: Change the parameters and initial conditions.
#          Specifically, try out dt > 0.01.
# HANDSON: Let's also inspect the energy as function of time.

We have previously learned about the CFL condition: when $u\,\Delta t/\Delta x > 1$, a finite difference scheme becomes unstable. The same holds for spectral methods. One speculation is that once the Gibbs phenomenon starts to kick in, some $|u_i^n|$ becomes larger than 1, and $\Delta t = 0.01$ is no longer small enough to keep the pseudo-spectral method stable.

Let’s update our RK4() method to include adaptive time stepping.

# HANDSON: Adaptive RK4 implementation

def RK4adap(f, u0, t0, CFL, t1):
    U = [np.array(u0)]
    T = [np.array(t0)]

    while T[-1] < t1:
        dt = CFL * dx / (np.max(abs(U[-1])) + 1e-6)
        k1 = f(U[-1]                )
        k2 = f(U[-1] + 0.5 * dt * k1)
        k3 = f(U[-1] + 0.5 * dt * k2)
        k4 = f(U[-1] +       dt * k3)
        U.append(U[-1] + dt * (k1/6 + k2/3 + k3/3 + k4/6))
        T.append(T[-1] + dt)
    return np.array(U), np.array(T)
Uadap, Tadap = RK4adap(rhs, U0, 0, 0.9, 0.3)
plt.plot(X, Uadap[::8].T)
plt.xlabel(r"$x$")
plt.ylabel(r"$u$")
anim = animate(X, Uadap)

HTML(anim.to_html5_video())  # display animation
# anim.save('adap.mp4')        # save animation
# HANDSON: Let's inspect the energy as function of time.

Adaptive time stepping that enforces a CFL bound is not enough to prevent blow-ups in pseudo-spectral schemes for nonlinear PDEs. We are encountering the same numerical problem as the pioneers of numerical fluid dynamics and weather forecasting.

In the early spectral turbulence simulations, it was observed that nonlinear terms produced catastrophic pile-ups at high wavenumbers.

The reason is aliasing: spurious transfer of energy between Fourier modes, caused by evaluating nonlinear products (like $u^2$) on a grid with finite spectral bandwidth.

Aliasing is a spatial error; making $\Delta t$ small cannot remove it. If left unchecked, aliasing can inject energy into the highest resolved modes, triggering nonphysical oscillations and eventual instability.
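A minimal demonstration of aliasing (grid size and mode numbers chosen for illustration): squaring a resolved mode produces a wavenumber beyond the grid’s bandwidth, which folds back onto a low mode.

```python
import numpy as np
from numpy.fft import fft, fftfreq

n = 8
x = 2*np.pi * np.arange(n) / n
u = np.cos(3*x)                  # mode k = 3, well resolved on 8 points

# Exactly, u^2 = 1/2 + cos(6x)/2.  The k = 6 mode is beyond this grid's
# bandwidth (k_max = 4), so sampling folds it onto k = 6 - 8 = -2:
# on the grid, cos(6*x_j) equals cos(2*x_j) point by point.
U2 = fft(u*u) / n                # spectrum of the gridded product
k  = fftfreq(n, d=1/n).astype(int)
for ki, ci in zip(k, U2.real):
    if abs(ci) > 1e-12:
        print(ki, ci)            # energy appears at k = 0 and k = +-2
```

The spurious $k=\pm 2$ content is indistinguishable from genuine low-mode energy, which is exactly how nonlinear terms corrupt the resolved band.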

There are two standard dealiasing strategies:

  1. 2/3-Rule Truncation (Orszag, early 1970s)

    • Keep only $|k| \le \frac{2}{3}k_{\max}$ whenever you form nonlinear terms.

    • Zero (truncate) the highest 1/3 modes before/after nonlinear products so that triad interactions among retained modes cannot alias back into the band.

    • Pros: simple, fast.

    • Cons: removes some bandwidth (effective resolution reduced).

  2. 3/2-Rule Zero-Padding (Patterson-Orszag)

    • Temporarily zero-pad the Fourier coefficients to a grid of size $N_\text{pad} = \frac{3}{2}N$.

    • Transform to real space on the padded grid, compute products (e.g., $u^2$), transform back, and truncate to the original $N$ modes.

    • Pros: retains more effective bandwidth; cleaner than hard truncation.

    • Cons: still $\mathcal{O}(N\log N)$, but with a larger constant due to padding.

These two strategies are mathematically equivalent for quadratic nonlinearities; they differ only in software implementation.
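Since the lecture adopts the 2/3-Rule below, here is a hedged sketch of the 3/2-rule (Patterson-Orszag) product for comparison; the helper name is my own, and it assumes an even number of real-valued grid points:

```python
import numpy as np
from numpy.fft import fft, ifft

def dealiased_product(u, v):
    """u*v with aliasing removed by 3/2-rule zero padding (sketch)."""
    n = len(u)                          # assume n is even
    m = 3 * n // 2                      # padded grid size
    U, V = fft(u), fft(v)
    Up = np.zeros(m, complex)
    Vp = np.zeros(m, complex)
    Up[:n//2], Up[-n//2:] = U[:n//2], U[-n//2:]   # copy low-|k| modes
    Vp[:n//2], Vp[-n//2:] = V[:n//2], V[-n//2:]
    # Multiply on the fine grid; the (m/n) factors compensate numpy's
    # 1/N normalization of ifft on grids of different sizes.
    wp = ifft(Up) * ifft(Vp) * (m/n)**2
    Wp = fft(wp)
    W  = np.zeros(n, complex)
    W[:n//2], W[-n//2:] = Wp[:n//2], Wp[-n//2:]   # truncate back to n modes
    return ifft(W).real * (n/m)
```

On an 8-point grid, `dealiased_product` applied to `cos(3x)` with itself returns the constant 1/2: the unresolvable $k=6$ mode of $\cos^2(3x)$ is removed rather than aliased onto $k=\pm 2$.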

We will use the 2/3-Rule because it fits more naturally into our implementation of pseudo-spectral methods.

def twothird(u):
    m = len(u) // 3     # renamed from `l` to avoid shadowing the domain size
    U = fft(u)
    U[m:-m+1] = 0       # zero modes with |k| >= N/3 (the top third)
    return ifft(U).real
def rhsdeal(u):
    u = twothird(u)
    return - u * ddx(u)
Udeal, Tdeal = RK4adap(rhsdeal, twothird(U0), 0, 0.9, 0.3)
plt.plot(X, Udeal[::8].T)
plt.xlabel(r"$x$")
plt.ylabel(r"$u$")
anim = animate(X, Udeal)

HTML(anim.to_html5_video())  # display animation
# anim.save('adap.mp4')        # save animation
# HANDSON: Let's inspect the energy as function of time.

Applying dealiasing successfully removes the catastrophic aliasing instability. However, high-frequency oscillations (Gibbs phenomenon) still appear once the solution develops sharp gradients or discontinuities. This reflects a fundamental limitation: Fourier spectral methods assume smoothness. When the solution is not smooth (as in inviscid Burgers with shocks), oscillations persist regardless of dealiasing.

Viscosity

A natural resolution is to add a (physical) viscosity term to Burgers’ equation:

\begin{align} \frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2}, \qquad \nu > 0. \end{align}

In the viscous Burgers’ equation, the diffusion term $\nu\,\partial^2 u/\partial x^2$ damps high-wavenumber modes ($-\nu k^2 \hat u_k$ in Fourier space); i.e., energy is dissipated naturally at small scales. This selectively smooths out oscillations while retaining the nonlinear dynamics. As $\nu \to 0^+$, the viscous solution converges to the entropy-satisfying weak solution of the inviscid Burgers equation.
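A quick sanity check of this modal damping (the parameters are illustrative): for pure diffusion, each Fourier mode obeys $d\hat u_k/dt = -\nu k^2 \hat u_k$, so its amplitude should decay like $e^{-\nu k^2 t}$.

```python
import numpy as np

# Pure diffusion in Fourier space: d(u_k)/dt = -nu k^2 u_k.
# Integrate one mode with forward Euler and compare to the exact decay.
nu, k, dt, nsteps = 2e-2, 4, 1e-4, 1000
a = 1.0                         # modal amplitude \hat{u}_k(0)
for _ in range(nsteps):
    a += dt * (-nu * k**2 * a)  # explicit Euler step
exact = np.exp(-nu * k**2 * dt * nsteps)
print(a, exact)                 # Euler closely tracks exp(-nu k^2 t)
```

Higher $k$ decays faster (as $k^2$), which is why viscosity preferentially removes the Gibbs oscillations at the grid scale.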

def ddx_d2dx2(u):
    U      = fft(u)
    ikU    = 1j * K * U
    mkkU   = -K * K * U
    dudx   = ifft(ikU)
    d2udx2 = ifft(mkkU)
    return dudx.real, d2udx2.real
def rhsvisc(u, nu=2e-2):
    u = twothird(u)
    dudx, d2udx2 = ddx_d2dx2(u)
    return - u * dudx + nu * d2udx2
Uvisc, Tvisc = RK4adap(rhsvisc, twothird(U0), 0, 0.25, 0.3)
plt.plot(X, Uvisc[::8].T)
plt.xlabel(r"$x$")
plt.ylabel(r"$u$")
anim = animate(X, Uvisc)

HTML(anim.to_html5_video())  # display animation
# anim.save('visc.mp4')        # save animation
# HANDSON: Change the parameters and initial conditions.
#          Specifically, change the CFL condition and time step.
# HANDSON: Let's inspect the energy as function of time.
# HANDSON: What if we do not enable the 2/3-Rule?

When solving viscous Burgers’ equation numerically, the time step must respect both:

  • the advection constraint (from the nonlinear term $u\,\partial u/\partial x$), and

  • the diffusion constraint (from the viscous term $\nu\,\partial^2 u/\partial x^2$).

From the hyperbolic part $\partial u/\partial t + u\,\partial u/\partial x = 0$, information propagates along characteristics with speed $|u|$. To ensure stability, the CFL condition requires

\begin{align} \Delta t \le C_{\text{adv}} \frac{\Delta x}{\max|u|}, \end{align}

where $C_{\text{adv}} \le 1$ is a safety constant (typically 0.5 for RK4).

From the parabolic part $\nu\,\partial^2 u/\partial x^2$, the fastest time scale is set by the highest resolved mode. For an explicit scheme, stability requires

\begin{align} \Delta t \le C_{\text{diff}} \frac{\Delta x^2}{\nu}, \end{align}

with $C_{\text{diff}}$ depending on the time integrator (typically 0.5 for RK4).

In practice, we take the smaller of the two limits. If we choose only one constant, we can use the expression

\begin{align} \Delta t \le C \frac{\Delta x}{\max|u| + \nu/\Delta x}. \end{align}

Thus:

  • For small viscosity ($\nu \to 0$), the advection constraint dominates.

  • For large viscosity, the diffusion constraint dominates.

# HANDSON: Adaptive RK4 taking into account the viscous time scale

def RK4visc(f, u0, t0, CFL, t1, nu=2e-2):
    U = [np.array(u0)]
    T = [np.array(t0)]

    while T[-1] < t1:
        dt = CFL * dx / (np.max(abs(U[-1])) + nu/dx + 1e-6)
        k1 = f(U[-1]                )
        k2 = f(U[-1] + 0.5 * dt * k1)
        k3 = f(U[-1] + 0.5 * dt * k2)
        k4 = f(U[-1] +       dt * k3)
        U.append(U[-1] + dt * (k1/6 + k2/3 + k3/3 + k4/6))
        T.append(T[-1] + dt)
    return np.array(U), np.array(T)
nu = 2e-2
Uvisc2, Tvisc2 = RK4visc(lambda u: rhsvisc(u, nu), twothird(U0), 0, 0.5, 1, nu)
plt.plot(X, Uvisc2[::8].T)
plt.xlabel(r"$x$")
plt.ylabel(r"$u$")
anim = animate(X, Uvisc2)

HTML(anim.to_html5_video())  # display animation
# anim.save('visc2.mp4')       # save animation
# HANDSON: Change the parameters and initial conditions.
#          Specifically, change the CFL condition and time step.
# HANDSON: Let's inspect the energy as function of time.
# HANDSON: What if we do not enable the 2/3-Rule?

In our pseudo-spectral solver, we currently integrate both the nonlinear advection term $u\,\partial u/\partial x$ and the viscous diffusion term $\nu\,\partial^2 u/\partial x^2$ explicitly in time. This requires the time step to satisfy both the advection and diffusion CFL conditions, as discussed above. When the viscosity is large or the grid spacing is small, the diffusion constraint can be very restrictive, forcing $\Delta t$ to be unnecessarily small.

One idea to overcome this limitation is to use semi-implicit methods. The viscous term is linear and diagonal in Fourier space:

\begin{align} \nu\frac{\partial^2 u}{\partial x^2} \longleftrightarrow -\nu k^2 \hat{u}_k. \end{align}

This makes it natural to handle diffusion implicitly, especially in spectral Galerkin methods, while still treating the nonlinear advection term explicitly.

This approach is called a semi-implicit scheme or IMEX scheme (Implicit–Explicit).
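A minimal sketch of one such IMEX step for the viscous Burgers’ equation (first-order Euler splitting; the grid size, $\nu$, and $\Delta t$ are illustrative, not from the lecture): advection is advanced explicitly in real space, then diffusion is applied with backward Euler, which in Fourier space is just a division by $1 + \nu k^2 \Delta t$.

```python
import numpy as np
from numpy.fft import fft, ifft, fftfreq

# Semi-implicit (IMEX Euler) step for viscous Burgers on [0, 1):
# explicit advection, implicit diffusion.
nx, l, nu, dt = 128, 1.0, 2e-2, 1e-3
X  = np.linspace(0, l, nx, endpoint=False)
K  = 2*np.pi * fftfreq(nx, d=l/nx)
u  = np.sin(2*np.pi * X)

def imex_step(u, dt):
    dudx = ifft(1j * K * fft(u)).real       # spectral derivative
    Urhs = fft(u - dt * u * dudx)           # explicit advection update
    # Backward Euler on diffusion: u_k^{n+1} = u_k* / (1 + nu k^2 dt)
    return ifft(Urhs / (1 + nu * dt * K**2)).real

u1 = imex_step(u, dt)
```

Because the implicit update divides each mode by $1 + \nu k^2 \Delta t$, the diffusive stability limit $\Delta t \lesssim \Delta x^2/\nu$ disappears; only the advective CFL constraint remains.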