Posted Saturday October 28 2023.

Suppose you want to solve a **linear partial differential equation** of order *m* in *n* variables: for example, when *n* = 3 and *m* = 2 you might be considering the wave equation in two spatial dimensions:

$$
\frac{\partial^2 u}{\partial t^2} - \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = 0
$$

This equation can be written as *L*[*u*] = 0, where *L* is the linear differential operator *L*[*u*] = ∂_{t}^{2}[*u*] − ∂_{x}^{2}[*u*] − ∂_{y}^{2}[*u*]. (Yes, *L* is a polynomial in ∂_{t}, ∂_{x}, ∂_{y}, and yes, it has order 2 because this polynomial has degree 2.) In general, given some open set *U* ⊆ ℝ^{n} and a linear differential operator *L* : *C*^{m}(*U*, ℝ) → *C*^{0}(*U*, ℝ), you want to find solutions *u* ∈ *C*^{m}(*U*, ℝ) such that *L*[*u*] = 0. Because *L* is linear, the set of all such solutions is itself a vector space, called the kernel (or null space) of *L*. In notation, if *S* is the set of solutions to the equation *L*[*u*] = 0, then *S* = kernel(*L*) = {*u* ∈ *C*^{m}(*U*, ℝ) ∣ *L*[*u*] = 0}. It’s not hard to check these solutions form a vector space: try it! (Solution: because *L* is linear, for any *u*, *u*′ ∈ *S* we have *L*[*u* + *u*′] = *L*[*u*] + *L*[*u*′] = 0 + 0 = 0, so *u* + *u*′ ∈ *S*; and for any *u* ∈ *S* and *c* ∈ ℝ, *L*[*c* ⋅ *u*] = *c* ⋅ *L*[*u*] = *c* ⋅ 0 = 0, so *c* ⋅ *u* ∈ *S*. And don’t forget: 0 ∈ *S*, because *L*[0] = 0.)
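The linearity of *L* survives discretization, which makes it easy to sanity-check numerically. Here is a minimal sketch (not from the post): the wave operator above, discretized with second-order central differences on a unit-spacing grid, applied to random arrays.

```python
import numpy as np

# A sketch: discretize L[u] = u_tt - u_xx - u_yy with second-order central
# differences on a (t, x, y) grid with unit spacing. The stencil is a linear
# function of u, so the discrete operator inherits the linearity of L.

def L(u):
    """Apply the discrete wave operator on the interior points of u[t, x, y]."""
    d_tt = u[2:, 1:-1, 1:-1] - 2 * u[1:-1, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1]
    d_xx = u[1:-1, 2:, 1:-1] - 2 * u[1:-1, 1:-1, 1:-1] + u[1:-1, :-2, 1:-1]
    d_yy = u[1:-1, 1:-1, 2:] - 2 * u[1:-1, 1:-1, 1:-1] + u[1:-1, 1:-1, :-2]
    return d_tt - d_xx - d_yy

rng = np.random.default_rng(0)
u, v = rng.normal(size=(8, 8, 8)), rng.normal(size=(8, 8, 8))
c = 3.7

assert np.allclose(L(u + v), L(u) + L(v))      # additivity
assert np.allclose(L(c * u), c * L(u))         # homogeneity
assert np.allclose(L(np.zeros((8, 8, 8))), 0)  # L[0] = 0, so 0 is a solution
```

The two assertions are exactly the two checks in the vector-space proof above, and the last one is the "don't forget" case.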

To pose an initial value problem (IVP) you need one of the dimensions to be time, say the first one. Now let *D* = {(0, *x*) ∣ *x* ∈ ℝ^{n − 1}} be the subset of the domain where *t* = 0. Given some initial data *f* : *D* → ℝ, the initial condition becomes *u*|_{D} = *f*, so the IVP is *L*[*u*] = 0, *u*|_{D} = *f*.

We can write the solution space of this problem as *S* = kernel(*L*) ∩ *R*(*D*, *f*), where *R*(*D*, *f*) = {*u* ∣ *u*|_{D} = *f*} (*R* is for restriction). *S* is not a vector space, because if *u*, *u*′ ∈ *S* are solutions, then (*u* + *u*′)|_{D} = *u*|_{D} + *u*′|_{D} = *f* + *f* = 2*f*, which is not equal to *f* unless *f* = 0. What we have instead is that *S* is an affine space over kernel(*L*) ∩ *R*(*D*, 0). This follows from the more general fact that *R*(*D*, *f*) is an affine space over *R*(*D*, 0): try proving it! (Solution: if *u* ∈ *R*(*D*, *f*) and *v* ∈ *R*(*D*, 0), then (*u* + *v*)|_{D} = *u*|_{D} + *v*|_{D} = *f* + 0 = *f*.)
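Both calculations in this paragraph can be seen concretely on a grid. A small sketch (an assumption of this post's setup, not from it): represent a function as an array `u[t, x]`, so that restriction to *D* is the slice `u[0, :]`.

```python
import numpy as np

# Represent functions on the domain as arrays u[t, x]; restriction to
# D = {t = 0} is the slice u[0, :].
rng = np.random.default_rng(1)
f = rng.normal(size=16)                       # arbitrary initial data on D

u = rng.normal(size=(10, 16)); u[0, :] = f    # u  ∈ R(D, f)
up = rng.normal(size=(10, 16)); up[0, :] = f  # u' ∈ R(D, f)
v = rng.normal(size=(10, 16)); v[0, :] = 0    # v  ∈ R(D, 0)

# Adding two elements of R(D, f) overshoots the data: (u + u')|_D = 2f ...
assert np.allclose((u + up)[0, :], 2 * f)
# ... but adding an element of R(D, 0) preserves it: (u + v)|_D = f.
assert np.allclose((u + v)[0, :], f)
```

Note this says nothing about the PDE: it is purely a statement about restrictions, which is why the affine-space fact holds for *R*(*D*, *f*) on its own.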

An initial boundary value problem (IBVP) in *n* variables might take the form *L*[*u*] = 0, *u*|_{t = 0} = *f*, *u*|_{x = 0} = *g*

where (*x*, *t*) ∈ ℝ^{n − 1} × (0, ∞), *f* : ℝ^{n − 1} → ℝ, and *g* : (0, ∞) → ℝ. (For an example take Shearer & Levy 4.2.2.) So we have an initial condition *f* along with boundary data *g* imposed at *x* = 0 for all time *t*. We want to find a solution *u* ∈ kernel(*L*) ∩ *R*(*D*_{1}, *f*) ∩ *R*(*D*_{2}, *g*), where *D*_{1} = ℝ^{n − 1} is identified with the slice *t* = 0 and *D*_{2} = (0, ∞) with the slice *x* = 0.

Sometimes, as in the case of the wave equation, splitting the problem into two other IBVPs leads to a solution. We consider the IBVPs *L*[*u*] = 0, *u*|_{t = 0} = 0, *u*|_{x = 0} = *g*

and *L*[*u*] = 0, *u*|_{t = 0} = *f*, *u*|_{x = 0} = 0

In the first, the initial condition is replaced with zero, and in the second the boundary condition is replaced with zero.

We can show that if *u*_{2} is a solution to the first and *u*_{1} is a solution to the second, then *u*_{1} + *u*_{2} is a solution to the original IBVP, i.e. it belongs to the solution space *S* = kernel(*L*) ∩ *R*(*D*_{1}, *f*) ∩ *R*(*D*_{2}, *g*) where *D*_{1} = ℝ^{n − 1} and *D*_{2} = (0, ∞). If *u*_{1} ∈ kernel(*L*) ∩ *R*(*D*_{1}, *f*) ∩ *R*(*D*_{2}, 0) and *u*_{2} ∈ kernel(*L*) ∩ *R*(*D*_{1}, 0) ∩ *R*(*D*_{2}, *g*), then

- *u*_{1} + *u*_{2} ∈ kernel(*L*), since *u*_{1}, *u*_{2} ∈ kernel(*L*) and it is a vector space;
- *u*_{1} + *u*_{2} ∈ *R*(*D*_{1}, *f*), since *u*_{1} ∈ *R*(*D*_{1}, *f*), *u*_{2} ∈ *R*(*D*_{1}, 0), and *R*(*D*_{1}, *f*) is an affine space over *R*(*D*_{1}, 0);
- *u*_{1} + *u*_{2} ∈ *R*(*D*_{2}, *g*), since *u*_{1} ∈ *R*(*D*_{2}, 0), *u*_{2} ∈ *R*(*D*_{2}, *g*), and *R*(*D*_{2}, *g*) is an affine space over *R*(*D*_{2}, 0).

Thus *u*_{1} + *u*_{2} ∈ *S*, so it solves the IBVP.
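The restriction part of this argument can be sketched numerically on a grid (an illustration under assumptions: functions are arrays `u[t, x]`, *D*₁ is the *t* = 0 slice, *D*₂ is the *x* = 0 slice, and the data are compatible at the corner, *f*(0) = *g*(0) = 0):

```python
import numpy as np

# Grid sketch of the splitting: u[t, x], with D1 = {t = 0} and D2 = {x = 0}.
# Compatibility at the corner requires f(0) = g(0) = 0.
rng = np.random.default_rng(2)
nt, nx = 12, 16
f = rng.normal(size=nx); f[0] = 0.0   # initial data on D1
g = rng.normal(size=nt); g[0] = 0.0   # boundary data on D2

u1 = rng.normal(size=(nt, nx))        # stand-in for u1 ∈ R(D1, f) ∩ R(D2, 0)
u1[0, :] = f
u1[:, 0] = 0.0
u2 = rng.normal(size=(nt, nx))        # stand-in for u2 ∈ R(D1, 0) ∩ R(D2, g)
u2[0, :] = 0.0
u2[:, 0] = g

# The sum matches both data sets at once: (u1 + u2)|_D1 = f, (u1 + u2)|_D2 = g.
assert np.allclose((u1 + u2)[0, :], f)
assert np.allclose((u1 + u2)[:, 0], g)
```

Of course the PDE part of the argument (that *u*₁ + *u*₂ stays in kernel(*L*)) is not captured by these arrays; it is exactly the vector-space property of the kernel from earlier.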

Appendix: it seems we are using the following theorem: if {(*D*_{i}, *f*_{i})}, *i* ∈ *I*, is a finite set of pairs where each *D*_{i} ⊆ *U* and *f*_{i} : *D*_{i} → ℝ, and {*u*_{i}}_{i ∈ I} ⊆ *C*(*U*, ℝ) satisfies *u*_{i} ∈ *R*(*D*_{i}, *f*_{i}) ∩ ⋂_{j ≠ i}*R*(*D*_{j}, 0) for all *i*, then ∑_{i}*u*_{i} ∈ ⋂_{i}*R*(*D*_{i}, *f*_{i}).

But it’s just a special case of the more general fact about affine spaces: if {*X*_{i}}, *i* ∈ *I* is a (presumably finite) collection of subspaces of some vector space *X*, and {*A*_{i}} is a collection of affine spaces, each *A*_{i} affine over *X*_{i}, and if {*a*_{i}} is a collection such that *a*_{i} ∈ *A*_{i} and *a*_{i} ∈ *X*_{j} for all *j* ≠ *i*, and *a* = ∑_{i}*a*_{i}, then *a* ∈ *A*_{i} for all *i*.
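The general fact has a one-paragraph proof; here is a sketch, assuming the standard definition that *A*_{i} being affine over *X*_{i} means *A*_{i} = *a* + *X*_{i} for any *a* ∈ *A*_{i}:

```latex
\begin{proof}
Fix $i \in I$ and write
$$ a \;=\; a_i + \sum_{j \neq i} a_j. $$
For every $j \neq i$ we have $a_j \in X_i$ by hypothesis, and $X_i$ is a
subspace, so the finite sum $\sum_{j \neq i} a_j$ lies in $X_i$. Since
$A_i$ is affine over $X_i$, adding an element of $X_i$ to $a_i \in A_i$
stays in $A_i$; hence $a \in A_i$. As $i$ was arbitrary, $a \in A_i$ for
all $i \in I$.
\end{proof}
```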