4.3. Orthogonal systems

$\renewcommand{\Re}{\operatorname{Re}}$ $\renewcommand{\Im}{\operatorname{Im}}$ $\newcommand{\erf}{\operatorname{erf}}$ $\newcommand{\dag}{\dagger}$ $\newcommand{\const}{\mathrm{const}}$ $\newcommand{\arcsinh}{\operatorname{arcsinh}}$



  1. Examples
  2. Abstract orthogonal systems: definition
  3. Orthogonal systems: approximation
  4. Orthogonal systems: approximation. II
  5. Orthogonal systems: completeness

Examples

All the systems we considered in the previous section were orthogonal, i.e. \begin{equation} (X_n, X_m)=0\qquad \forall m\ne n \label{eq-4.3.1} \end{equation} with \begin{equation} (X,Y):=\int_0^l X(x)\bar{Y} (x)\,dx,\qquad \|X\|^2:=(X,X), \label{eq-4.3.2} \end{equation} where $\bar{Y}$ denotes the complex conjugate of $Y$.

Exercise 1. Prove it by direct calculation.
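As a quick numerical sanity check (not a substitute for the proof asked for in Exercise 1), here is a minimal Python sketch. It takes, as an example, the Dirichlet system $X_n(x)=\sin(n\pi x/l)$ from the previous section and approximates the inner products (\ref{eq-4.3.2}) on a grid; the off-diagonal entries of the Gram matrix come out as zero up to discretization error.

```python
import numpy as np

l = 2.0                                   # interval length (arbitrary choice)
N = 20000
x = np.linspace(0.0, l, N + 1)
dx = l / N

def inner(f, g):
    """Trapezoidal approximation of (f,g) = int_0^l f(x) conj(g(x)) dx, cf. (4.3.2)."""
    y = f * np.conj(g)
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# Dirichlet eigenfunctions X_n = sin(n*pi*x/l), n = 1..5 (example system)
X = [np.sin(n * np.pi * x / l) for n in range(1, 6)]

gram = np.array([[inner(Xm, Xn) for Xn in X] for Xm in X])
print(np.round(gram.real, 6))   # diagonal entries ~ l/2 = ||X_n||^2, off-diagonal ~ 0
```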

However, instead of doing so, we show that this nice property (and the fact that the eigenvalues are real) is due to the fact that those problems are self-adjoint.

Consider functions $X$ and $Y$, each satisfying the Robin boundary conditions \begin{align} &X'(0)-\alpha X(0)=0,\label{eq-4.3.3}\\ &X'(l)+\beta X(l)=0\label{eq-4.3.4} \end{align} with $\alpha,\beta\in \mathbb{R}$. Note that \begin{align} (X'',Y)=&\int X''(x)\bar{Y}(x)\,dx \notag\\ =&-\int X'(x)\bar{Y}'(x)\,dx + X' (l)\bar{Y}(l)- X' (0)\bar{Y}(0)\notag \\ =&-(X',Y') -\beta X (l)\bar{Y}(l)-\alpha X(0)\bar{Y}(0).\qquad \label{eq-4.3.5} \end{align} Therefore, if we plug in $Y=X\ne 0$, an eigenfunction satisfying $X''+\lambda X=0$ and conditions (\ref{eq-4.3.3})-(\ref{eq-4.3.4}), the left-hand expression becomes $-\lambda \|X\|^2$ (with $\|X\|^2$ obviously real and nonzero), while the right-hand expression is real (since $\alpha,\beta\in \mathbb{R}$); so $\lambda$ must be real: all eigenvalues are real.

Further, for $(X,Y'')$ we obtain the same equality, albeit with $\alpha,\beta$ replaced by their complex conjugates $\bar{\alpha},\bar{\beta}$: \begin{gather*} (X,Y'')=\int X(x)\bar{Y}''(x)\,dx = -(X',Y') -\bar{\beta} X (l)\bar{Y}(l)-\bar{\alpha} X(0)\bar{Y}(0) \end{gather*} and therefore, since $\alpha,\beta\in \mathbb{R}$, \begin{equation} (X'',Y)= (X,Y''). \label{eq-4.3.6} \end{equation} But then, if $X,Y$ are eigenfunctions corresponding to different eigenvalues $\lambda$ and $\mu$ (both real, as shown above), we get from (\ref{eq-4.3.6}) that $-\lambda(X,Y)=-\mu (X,Y)$, and hence $(X,Y)=0$ since $\lambda\ne \mu$.
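The following small Python sketch illustrates (\ref{eq-4.3.5}) and (\ref{eq-4.3.6}) numerically; the cubic polynomials and the values of $l,\alpha,\beta$ are arbitrary choices made only so that both functions satisfy the Robin conditions (\ref{eq-4.3.3})-(\ref{eq-4.3.4}).

```python
import numpy as np

l, alpha, beta = 1.0, 0.7, 1.3           # arbitrary real Robin parameters
N = 20000
x = np.linspace(0.0, l, N + 1)
dx = l / N

def integral(y):
    """Trapezoidal approximation of int_0^l y(x) dx."""
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def robin_cubic(a0, a2):
    """Cubic a0 + a1*x + a2*x^2 + a3*x^3 obeying X'(0)=alpha*X(0), X'(l)+beta*X(l)=0."""
    a1 = alpha * a0
    a3 = -(a1 + 2*a2*l + beta*(a0 + a1*l + a2*l**2)) / (3*l**2 + beta*l**3)
    f  = a0 + a1*x + a2*x**2 + a3*x**3
    f1 = a1 + 2*a2*x + 3*a3*x**2
    f2 = 2*a2 + 6*a3*x
    return f, f1, f2

X, X1, X2 = robin_cubic(1.0,  2.0)       # two different functions satisfying the BCs
Y, Y1, Y2 = robin_cubic(0.5, -1.0)

lhs  = integral(X2 * Y)                                           # (X'', Y)
rhs  = -integral(X1 * Y1) - beta*X[-1]*Y[-1] - alpha*X[0]*Y[0]    # right side of (4.3.5)
lhs2 = integral(X * Y2)                                           # (X, Y'')
print(lhs, rhs, lhs2)                    # all three agree up to discretization error
```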

Remark 1. For periodic boundary conditions we cannot apply these arguments to prove that $\cos(2\pi nx/l)$ and $\sin(2\pi nx/l)$ are orthogonal since they correspond to the same eigenvalue; we need to prove it directly.

Remark 2.

  1. So, we observed that the operator $A\colon X\mapsto -X''$ with the domain $\mathfrak{D}(A)=\{X\in L^2([0,l]):\, AX\in L^2([0,l]),\, X(0)=X(l)=0\}$, where $L^p(J)$ denotes the space of functions with finite norm $\|X\|_{L^p(J)}=(\int_J |X|^p\,dx)^{1/p}$, has the property $(AX,Y)=(X,AY)$. But this is not self-adjointness, only symmetry.
  2. Self-adjointness means that $A^*=A$; in particular the operators $A^*$ and $A$ have the same domain, so the domain of $A^*$ is not wider than the domain of $A$. And one can easily see that, in order for $(AX,Y)=(X,A^*Y)$ to be valid for all $X$ with $X(0)=X(l)=0$, exactly the same conditions must be imposed on $Y$: $Y(0)=Y(l)=0$.
  3. On the other hand, the same operator $A\colon X\mapsto -X''$ with the domain $\mathfrak{D}(A)=\{X\in L^2([0,l]):\, AX\in L^2([0,l]),\, X(0)=X'(0)=X(l)=0\}$ is only symmetric but not self-adjoint, because the equality $(AX,Y)=(X,A^*Y)$ holds for all such $X$ and for all $Y$ such that $Y(l)=0$ (see the numerical check after this remark), which means that \begin{gather*} \mathfrak{D}(A^*)=\{Y\in L^2([0,l]):\, AY\in L^2([0,l]),\, Y(l)=0\}, \end{gather*} i.e. the domain of $A^*$ is larger. And one can check that for such an operator $A$ there are no eigenvalues and eigenfunctions at all!
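Here is a small numerical illustration of the last point (a sketch only; the particular functions are convenient choices): with $X(0)=X'(0)=X(l)=0$, all boundary terms in the integration by parts vanish as soon as $Y(l)=0$, so $(AX,Y)=(X,AY)$ even though $Y$ satisfies fewer conditions than $X$.

```python
import numpy as np

l = 1.0
N = 20000
x = np.linspace(0.0, l, N + 1)
dx = l / N

def integral(y):
    """Trapezoidal approximation of int_0^l y(x) dx."""
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# X = x^2 (l - x) satisfies X(0) = X'(0) = X(l) = 0;
# Y = (l - x) e^x satisfies only Y(l) = 0 (note Y(0) = l != 0)
X, X2 = l * x**2 - x**3,       2 * l - 6 * x              # X and X''
Y, Y2 = (l - x) * np.exp(x),   (l - x - 2) * np.exp(x)    # Y and Y''

print(integral(-X2 * Y), integral(-X * Y2))   # (AX, Y) vs (X, AY): equal
```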

So,

  1. Operator $A$ is called symmetric if $A^*\supset A$, i.e. $A^*$ has the same or a larger domain, and on the domain of $A$ it coincides with $A$.
  2. Operator $A$ is called self-adjoint if $A^*= A$, i.e. if $A$ and $A^*$ have the same domain (and coincide on it).

We need to discuss domains and the difference between symmetric and self-adjoint operators because the operator $X\mapsto -X''$ is unbounded.

Abstract orthogonal systems: definition

Consider a linear space $\mathsf{H}$, real or complex. Recall the standard definition from the linear algebra course:

  1. $u+v=v+u\qquad \forall u,v\in \mathsf{H}$;
  2. $(u+v)+w=u+(v+w)\qquad \forall u,v,w\in \mathsf{H}$;
  3. $\exists 0\in \mathsf{H}: \ 0+u=u\qquad \forall u\in \mathsf{H}$;
  4. $\forall u\in \mathsf{H}\ \exists (-u): u+(-u)=0 $;
  5. $\alpha(u+v)=\alpha u+ \alpha v \qquad \forall u,v\in \mathsf{H}\quad \forall\alpha \in \mathbb{R}$;
  6. $(\alpha+\beta)u=\alpha u+ \beta u \qquad \forall u\in \mathsf{H}\quad\forall\alpha,\beta \in \mathbb{R}$;
  7. $\alpha(\beta u)=(\alpha \beta)u \qquad \forall u\in \mathsf{H}\quad \forall\alpha,\beta \in \mathbb{R}$;
  8. $1u=u\qquad \forall u \in \mathsf{H}$.

For a complex linear space, replace $\mathbb{R}$ by $\mathbb{C}$.

Assume that an inner product is defined on $\mathsf{H}$:

  1. $(u+v,w)=(u,w)+(v,w)\qquad \forall u,v,w\in \mathsf{H}$;
  2. $(\alpha u,v)=\alpha (u,v) \qquad\forall u,v\in \mathsf{H} \quad \forall \alpha\in \mathbb{R}$;
  3. $(u,v)=\overline{(v,u)} \qquad \forall u,v\in \mathsf{H}$;
  4. $\|u\|^2:=(u,u)\ge 0 \qquad \forall u\in \mathsf{H}$ (this implies that $(u,u)$ is real if we consider complex spaces), and $\|u\|=0 \iff u=0$.

Definition 1.

  1. A finite-dimensional real linear space with an inner product is called a Euclidean space.
  2. A finite-dimensional complex linear space with an inner product is called a Hermitian space.
  3. An infinite-dimensional linear space (real or complex) with an inner product is called a pre-Hilbert space.

For a Hilbert space we will need one more property (completeness), which we add later.

Definition 2.

  1. A system $\{u_n\}$, $0\ne u_n\in \mathsf{H}$ (finite or infinite), is orthogonal if $(u_m,u_n)=0$ $\forall m\ne n$;
  2. An orthogonal system is orthonormal if $\|u_n\|=1$ $\forall n$, i.e. $(u_m,u_n)=\delta_{mn}$, where $\delta_{mn}$ is the Kronecker symbol.

Orthogonal systems: approximation

Consider a finite orthogonal system $\{u_n\}$. Let $\mathsf{K}$ be its linear hull: the set of all linear combinations $\sum_n \alpha_nu_n$. Obviously $\mathsf{K}$ is a linear subspace of $\mathsf{H}$. Let $v\in \mathsf{H}$; we try to find the best approximation of $v$ by elements of $\mathsf{K}$, i.e. we are looking for $w\in \mathsf{K}$ such that $\|v-w\|$ is minimal.

Theorem 1.

  1. There exists an unique minimizer;
  2. This minimizer is the orthogonal projection of $v$ onto $\mathsf{K}$, i.e. $w\in \mathsf{K}$ such that $(v-w)$ is orthogonal to all elements of $\mathsf{K}$;
  3. Such an orthogonal projection is unique and $w=\sum_n \alpha_n u_n$ with \begin{equation} \alpha_n= \frac{(v,u_n)}{\|u_n\|^2}. \label{eq-4.3.7} \end{equation}
  4. $\|v\|^2=\|w\|^2+\|v-w\|^2$.
  5. $v=w \iff \|v\|^2=\|w\|^2$.

Proof. [3]: Obviously $(v-w)$ is orthogonal to $u_n$ iff (\ref{eq-4.3.7}) holds for this $n$. If (\ref{eq-4.3.7}) holds for all $n$, then $(v-w)$ is orthogonal to all $u_n$ and therefore to all their linear combinations.

[4]-[5]: In particular $(v-w)$ is orthogonal to $w$ and then \begin{equation*} \|v\|^2= \|(v-w)+w\|^2=\|v-w\|^2+ 2\Re \underbracket{(v-w,w)}_{=0}+\|w\|^2. \end{equation*}

[1]-[2]: Consider $w'\in \mathsf{K}$. Then $\|v-w'\|^2=\|v-w\|^2+\|w-w'\|^2$, because $(w-w')\in \mathsf{K}$ and therefore it is orthogonal to $(v-w)$. Hence $\|v-w'\|>\|v-w\|$ unless $w'=w$, so $w$ is indeed the unique minimizer.
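A short numerical illustration of Theorem 1 (a sketch; the sine system as $\{u_n\}$ and the particular $v$ are assumed example choices): the coefficients are computed by (\ref{eq-4.3.7}), and statements [2] and [4] are checked on a grid.

```python
import numpy as np

l = 1.0
N = 20000
x = np.linspace(0.0, l, N + 1)
dx = l / N

def inner(f, g):
    """Trapezoidal approximation of (f,g) on [0,l], cf. (4.3.2)."""
    y = f * np.conj(g)
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

u = [np.sin(n * np.pi * x / l) for n in range(1, 6)]   # finite orthogonal system
v = x * (l - x) * np.exp(x)                            # element to be approximated

# Orthogonal projection w of v onto K = span{u_n}; coefficients from (4.3.7)
alpha = [inner(v, u_n) / inner(u_n, u_n) for u_n in u]
w = sum(a * u_n for a, u_n in zip(alpha, u))

print(max(abs(inner(v - w, u_n)) for u_n in u))        # ~0: (v-w) is orthogonal to K
print(inner(v, v), inner(w, w) + inner(v - w, v - w))  # Pythagorean identity [4]
```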

Orthogonal systems: approximation. II

Now let $\{u_n\}_{n=1,2,\ldots}$ be an infinite orthogonal system. Consider its finite subsystem with $n=1,2,\ldots, N$, introduce $\mathsf{K}_N$ for it, and consider the orthogonal projection $w_N$ of $v$ onto $\mathsf{K}_N$. Then \begin{equation*} w_N= \sum_{n=1}^N \alpha_n u_n \end{equation*} where $\alpha_n$ are defined by (\ref{eq-4.3.7}). Then, according to Statement [4] of Theorem 1, \begin{equation*} \|v\|^2 =\|v-w_N\|^2+\|w_N\|^2\ge \|w_N\|^2=\sum_{n=1}^N |\alpha_n |^2\|u_n\|^2. \end{equation*} Therefore the series on the right-hand side below converges and \begin{equation} \|v\|^2 \ge \sum_{n=1}^\infty |\alpha_n |^2\|u_n\|^2 \label{eq-4.3.8} \end{equation} (Bessel's inequality). Indeed, recall that a series with non-negative terms either converges or diverges to $\infty$.
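A numerical illustration of (\ref{eq-4.3.8}) (a sketch, again with the sine system as an assumed example): the partial sums $\sum_{n\le N}|\alpha_n|^2\|u_n\|^2=\|w_N\|^2$ increase with $N$ but stay below $\|v\|^2$.

```python
import numpy as np

l = 1.0
N = 20000
x = np.linspace(0.0, l, N + 1)
dx = l / N

def inner(f, g):
    """Trapezoidal approximation of (f,g) on [0,l], cf. (4.3.2)."""
    y = f * np.conj(g)
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

v = x * (l - x) * np.exp(x)          # arbitrary test element of L^2([0,l])

partial = 0.0
for n in range(1, 41):
    u_n = np.sin(n * np.pi * x / l)
    alpha_n = inner(v, u_n) / inner(u_n, u_n)        # eq. (4.3.7)
    partial += abs(alpha_n) ** 2 * inner(u_n, u_n)   # ||w_N||^2, cf. (4.3.8)
    if n % 10 == 0:
        print(n, partial)

print("||v||^2 =", inner(v, v))      # the partial sums stay below this value
```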

Then $\{w_N\}$ is a Cauchy sequence. Indeed, for $M>N$ \begin{equation*} \|w_N-w_M\|^2= \sum_{n=N+1}^M |\alpha_n |^2\|u_n\|^2\le \varepsilon_N \end{equation*} with $\varepsilon_N\to 0$ as $N\to \infty$, because the series in (\ref{eq-4.3.8}) converges.

Now we want to conclude that $w_N$ converges, and to do so we must assume that every Cauchy sequence in $\mathsf{H}$ converges.

Definition 3.

  1. $\mathsf{H}$ is complete if every Cauchy sequence converges in $\mathsf{H}$.
  2. A complete pre-Hilbert space is called a Hilbert space.

Remark 3. Every pre-Hilbert space can be completed, i.e. extended to a complete space. From now on, $\mathsf{H}$ is a Hilbert space.

Then we can introduce $\mathsf{K}$, the closed linear hull of $\{u_n\}_{n=1,2,\ldots}$, i.e. the space of all sums \begin{equation} \sum_{n=1}^\infty \alpha_n u_n \label{eq-4.3.9} \end{equation} with $\alpha_n$ satisfying \begin{equation} \sum_{n=1}^\infty |\alpha_n |^2\|u_n\|^2<\infty. \label{eq-4.3.10} \end{equation} (The linear hull would be the space of finite linear combinations.)

Let $v\in \mathsf{H}$. We want to find the best approximation of $v$ by elements of $\mathsf{K}$. Then we immediately get

Theorem 2. If $\mathsf{H}$ is a Hilbert space then Theorem 1 holds for infinite systems as well.

Orthogonal systems: completeness

Definition 4. An orthogonal system is complete if the equivalent conditions below are satisfied:

  1. Its closed linear hull coincides with $\mathsf{H}$.
  2. If $v\in \mathsf{H}$ is orthogonal to all $u_n$ then $v=0$.
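To make Definition 4 concrete, here is a numerical sketch. It assumes (as will be justified later) that the full sine system is complete in $L^2([0,l])$: for it the sums from (\ref{eq-4.3.8}) approach $\|v\|^2$, while for the (incomplete) subsystem of even-numbered sines they stay strictly below.

```python
import numpy as np

l = 1.0
N = 20000
x = np.linspace(0.0, l, N + 1)
dx = l / N

def inner(f, g):
    """Trapezoidal approximation of (f,g) on [0,l], cf. (4.3.2)."""
    y = f * np.conj(g)
    return dx * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def bessel_sum(v, ns):
    """Sum over n in ns of |alpha_n|^2 ||u_n||^2 for u_n = sin(n*pi*x/l)."""
    total = 0.0
    for n in ns:
        u_n = np.sin(n * np.pi * x / l)
        total += abs(inner(v, u_n)) ** 2 / inner(u_n, u_n)
    return total

v = x                                      # test element; ||v||^2 = l^3/3
print(inner(v, v))
print(bessel_sum(v, range(1, 201)))        # full sine system: approaches ||v||^2
print(bessel_sum(v, range(2, 201, 2)))     # even-n subsystem only: stays well below
```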

Remark 4.

  1. Don't confuse completeness of spaces and completeness of orthogonal systems.
  2. Complete orthogonal system in $\mathsf{H}$ $=$ orthogonal basis in $\mathsf{H}$.
  3. In a finite-dimensional space an orthogonal system is complete iff the number of vectors equals the dimension. This is not so in an infinite-dimensional space.

Our next goal is to establish the completeness of some orthogonal systems and thereby give a positive answer (in the corresponding frameworks) to the question at the end of Section 4.1: can we decompose any function into eigenfunctions? Alternatively: is the general solution a combination of simple solutions?

