
$\renewcommand{\Re}{\operatorname{Re}}$ $\renewcommand{\Im}{\operatorname{Im}}$ $\newcommand{\erf}{\operatorname{erf}}$ $\newcommand{\supp}{\operatorname{supp}}$ $\newcommand{\dag}{\dagger}$ $\newcommand{\const}{\mathrm{const}}$ $\newcommand{\arcsinh}{\operatorname{arcsinh}}$

Chapter 13. Eigenvalues and eigenfunctions

13.1. Variational theory


  1. Introduction
  2. Main variational principles
  3. Dependence on parameters
  4. Dependence on domain

Introduction

Consider the quadratic forms \begin{gather} Q_0(u)=\|u\|^2=\iiint_\Omega |u|^2\,dx; \label{eq-13.1.1}\\ Q(u)= \iiint_\Omega |\nabla u|^2\,dx + \iint_\Sigma \alpha|u|^2\,d\sigma \label{eq-13.1.2} \end{gather} where $\Omega $ is a bounded domain with a smooth boundary $\Sigma=\partial \Omega$.

Let us consider a variational problem for $Q(u)$ under the restrictions $\|u\|=1$ and \begin{equation} u\bigr|_{\Sigma^-}=0 \label{eq-13.1.3} \end{equation} where $\Sigma^-\subset \Sigma$.

Constructing the functional $Q_\lambda (u)= Q(u)-\lambda Q_0(u)$ and taking its variation, we arrive at the Euler-Lagrange equation \begin{equation} -\Delta u=\lambda u \label{eq-13.1.4} \end{equation} with the boundary conditions \begin{equation} u\bigr|_{\Sigma^-}=0,\qquad (\partial_{\boldsymbol{\nu}}u-\alpha u)\bigr|_{\Sigma^+}=0,\qquad \Sigma^+=\Sigma\setminus \Sigma^-, \label{eq-13.1.5} \end{equation} where $\boldsymbol{\nu}$ is the unit inner normal.
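For completeness, here is the computation behind (\ref{eq-13.1.4}) and (\ref{eq-13.1.5}), written for real-valued $u$ (in the complex case one takes real parts): for a variation $\delta u$ vanishing on $\Sigma^-$, integration by parts with the inner normal $\boldsymbol{\nu}$ gives \begin{multline*} \tfrac{1}{2}\,\delta Q_\lambda (u)= \iiint_\Omega \nabla u\cdot \nabla \delta u\,dx + \iint_{\Sigma^+} \alpha\, u\, \delta u\,d\sigma -\lambda \iiint_\Omega u\,\delta u\,dx \\ = \iiint_\Omega (-\Delta u -\lambda u)\,\delta u\,dx - \iint_{\Sigma^+} \bigl(\partial_{\boldsymbol{\nu}} u -\alpha u\bigr)\,\delta u\,d\sigma, \end{multline*} and since $\delta u$ is arbitrary inside $\Omega$ and on $\Sigma^+$, both (\ref{eq-13.1.4}) and the second condition in (\ref{eq-13.1.5}) follow.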

So we have arrived at an eigenvalue problem. We need the following theorems from real analysis:

Theorem 1. Let $\mathsf{H}$ be a Hilbert space and let $v_n$ be a sequence with $\|v_n\|\le M$. Then there exists a subsequence $v_{n_k}$ which converges weakly in $\mathsf{H}$ to some $v\in\mathsf{H}$: $(v_{n_k}-v, \phi)\to 0$ for all $\phi\in \mathsf{H}$.

Theorem 2. Let $\Omega$ be a domain of finite volume with a boundary $\Sigma$. Then any sequence $v_n$ such that $\|\nabla v_n\|\le M$ and $v_n|_\Sigma=0$ contains a subsequence $v_{n_k}$ which converges in $L^2(\Omega)$: $\|v_{n_k}-v\|\to 0$ for some $v\in L^2(\Omega)$ with $\nabla v\in L^2(\Omega)$.

Theorem 3. Let $\Omega$ be a bounded domain with a smooth boundary. Then

  1. $v\in L^2(\Omega)$, $\nabla v\in L^2(\Omega)$ imply that $v\in L^2(\Sigma)$;
  2. any sequence $v_n$ such that $\|v_n\|+\|\nabla v_n\|\le M$ contains a subsequence $v_{n_k}$ which converges in $L^2(\Omega)$ and in $L^2(\Sigma)$ ($\|v_{n_k}-v\|_{L^2(\Sigma)}\to 0$) to some $v\in L^2(\Omega)\cap L^2(\Sigma)$ with $\nabla v\in L^2(\Omega)$;
  3. For any $\epsilon>0$ there exists $C_\epsilon$ such that \begin{equation} \|v\|_{L^2(\Sigma)}\le \epsilon \|\nabla v\|+ C_\epsilon \|v\|. \label{eq-13.1.6} \end{equation}
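The estimate (\ref{eq-13.1.6}) can be seen already in a one-dimensional model (this computation is not in the text; in the general case one localizes near $\Sigma$ and argues along the normal direction). For $v\in C^1([0,1])$ and the "boundary point" $x=0$, \begin{equation*} |v(0)|^2 = -\int_0^1 \frac{d}{dx}\Bigl((1-x)|v(x)|^2\Bigr)\,dx \le \int_0^1 \bigl(|v|^2+2|v|\,|v'|\bigr)\,dx \le \epsilon^2\|v'\|^2 + (1+\epsilon^{-2})\|v\|^2, \end{equation*} so $|v(0)|\le \epsilon \|v'\| + C_\epsilon\|v\|$ with $C_\epsilon=(1+\epsilon^{-2})^{1/2}$.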

Remark 1. Actually one only needs to assume that $\Sigma^+$ is bounded and smooth.

Main variational principles

Theorem 4. Let $\Omega$ be a bounded domain with a smooth boundary. Then

  1. there exists a sequence of eigenvalues $\lambda_1\le \lambda_2\le \lambda_3\le \ldots $, $\lambda_k\to \infty$, and a sequence of corresponding eigenfunctions $u_1,u_2,\ldots $ which form an orthonormal basis in $L^2(\Omega)$;
  2. the variational problem of minimizing $Q(u)$ under the constraints $\|u\|=1$ and (\ref{eq-13.1.3}) has the solution $u_1$, and this minimum is $\lambda_1$;
  3. the variational problem of minimizing $Q(u)$ under the constraints $\|u\|=1$, (\ref{eq-13.1.3}) and $(u,u_1)=(u,u_2)=\ldots=(u,u_{n-1})=0$ has the solution $u_n$, and this minimum is $\lambda_n$.

Proof. 1. Let us prove first that $u_1$ exists. Consider a minimizing sequence $v_m$ for $Q(u)$ under the constraint $\|v_m\|=1$. Observe that due to Theorem 3(3) \begin{equation} \|\nabla v\|^2 \le (1-\epsilon )^{-1}Q(v)+ C_\epsilon \|v\|^2. \label{eq-13.1.7} \end{equation} Then we have $\| v_m\|+\|\nabla v_m\|\le M$, and by virtue of Theorem 1 and either Theorem 2 or Theorem 3 there is a subsequence $v_{m_k}$ converging to some $v$ both in $L^2(\Omega)$ and weakly in $\mathsf{K}=\{v: \|v\|_{\mathsf{K}}<\infty,\, v|_{\Sigma^-}=0\}$ with $\|v\|_{\mathsf{K}}=\bigl(C_0\|v\|^2 +Q(v)\bigr)^{\frac{1}{2}}$. Then $\|v\|=1$ and $Q(v)$ is minimal; we skip the proof of this.

  2. Similarly we prove the existence of $\lambda_n$, $u_n$ by induction. Now we claim that $\lambda_n\to \infty$. Indeed, if it were not the case, then $\|u_n\|=1$ and $\|\nabla u_n\|\le M$ for some $M$; then by virtue of either Theorem 2 or Theorem 3 there would exist a subsequence $u_{n_k}$ converging in $L^2(\Omega)$, which is impossible since the $u_{n_k}$ are mutually orthogonal and $\| u_{n_k}-u_{n_l}\|^2=2$ for $k\ne l$.

  3. Further, observe that the $u_n$ are orthogonal in $\mathsf{K}$. Let us prove that the system $\{u_n\}$ is complete in $\mathsf{K}$. If it were not the case, there would exist $u\in \mathsf{K}$, $u\ne 0$, orthogonal to all $u_n$ in $\mathsf{K}$. But then $(u_n, u)_{\mathsf{K}}= (-\Delta u_n, u) + C_0 (u_n,u)= (\lambda_n+C_0) (u_n,u)$, and therefore $u$ is orthogonal to all $u_n$ in $L^2(\Omega)$ as well. But since $\lambda_k\to \infty$ and $Q(u)< \infty$, we can pick $n$ with $\lambda_n>Q(u)/\|u\|^2$; on the other hand, $u$ is admissible in the $n$-th variational problem, so $\lambda_n\le Q(u)/\|u\|^2$, which is a contradiction.

  4. Finally, $\{u_n\}$ is complete in $L^2(\Omega)$ as well. This follows from completeness in $\mathsf{K}$ and the fact that $\mathsf{K}$ is dense in $L^2(\Omega)$. The latter proof is elementary, but we skip it as well.

Remark 2. If $\Sigma=\Sigma^-$ we do not need to assume that $\Sigma$ is smooth or even that $\Omega$ is bounded; it is sufficient to assume that $\Omega$ has a finite volume (but even this is not necessary!).
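For instance, in the simplest model case $\Omega=(0,a)$ with $\Sigma^-=\Sigma$ (here $\Sigma$ consists of the two endpoints), problem (\ref{eq-13.1.4}), (\ref{eq-13.1.5}) becomes $-u''=\lambda u$ with $u(0)=u(a)=0$, and Theorem 4 reproduces the classical Fourier sine basis \begin{equation*} \lambda_n=\frac{\pi^2 n^2}{a^2},\qquad u_n(x)=\sin\Bigl(\frac{\pi n x}{a}\Bigr),\qquad n=1,2,\ldots; \end{equation*} completeness of $\{u_n\}$ in $L^2(0,a)$ is the classical completeness of the sine system.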

Corollary 1. Consider an $n$-dimensional subspace $L\subset \mathsf{H}=\{u\in C^1(\bar{\Omega}):\ u|_{\Sigma^-}=0\}$. Let $\mu_n (L)= \max_{u\in L:\, \|u\|=1} Q(u)$. Then \begin{equation} \lambda_n =\min_{L\subset \mathsf{H}: \, \dim(L)=n} \mu_n (L)= \min_{L\subset \mathsf{H}: \, \dim(L)=n} \ \max_{u\in L:\, \|u\|=1} Q(u). \label{eq-13.1.8} \end{equation}

Proof. If $L$ is the span of $u_1,\ldots,u_n$ then $\mu_n(L)=\lambda_n$, so the right-hand expression is not greater than the left-hand one. On the other hand, for any $n$-dimensional $L$ there exists $u\in L$, $u\ne 0$, which is orthogonal to $u_1,\ldots,u_{n-1}$ (these are only $n-1$ linear conditions on $L$), and therefore $Q(u)\ge \lambda_{n}\|u\|^2$; hence $\mu_n(L)\ge \lambda_n$.
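The minimax formula (\ref{eq-13.1.8}) is easy to probe numerically. The sketch below is not part of the text: it uses a crude finite-difference discretization of the Dirichlet Laplacian on $(0,1)$, and the grid size, the index $n=5$ and the number of random trials are arbitrary choices made purely for illustration. It checks that the span of the first $n$ discrete eigenvectors attains $\lambda_n$, while random $n$-dimensional subspaces $L$ give $\mu_n(L)\ge\lambda_n$.

```python
# Numerical illustration of the minimax formula (13.1.8) for the 1-D Dirichlet
# Laplacian on (0, 1), discretized by standard second-order finite differences.
# All sizes below are arbitrary illustration choices.
import numpy as np

N = 400                                  # number of interior grid points
h = 1.0 / (N + 1)
A = (np.diag(2.0 * np.ones(N)) +
     np.diag(-1.0 * np.ones(N - 1), 1) +
     np.diag(-1.0 * np.ones(N - 1), -1)) / h**2     # discrete -d^2/dx^2

lam, U = np.linalg.eigh(A)               # lam[k] ~ ((k+1)*pi)^2, columns of U ~ eigenfunctions

def mu(L):
    """Maximum of the Rayleigh quotient Q(u)/||u||^2 over the span of the columns of L."""
    V, _ = np.linalg.qr(L)               # orthonormal basis of the subspace
    return np.linalg.eigvalsh(V.T @ A @ V)[-1]

n = 5
print("discrete lambda_n    :", lam[n - 1])
print("exact (n*pi)^2       :", (n * np.pi) ** 2)
print("mu(span of u_1..u_n) :", mu(U[:, :n]))        # attains lambda_n
rng = np.random.default_rng(0)
print("mu(random L) >= lambda_n in 5 trials:",
      all(mu(rng.standard_normal((N, n))) >= lam[n - 1] - 1e-9 for _ in range(5)))
```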

Dependence on parameters

Theorem 5.

  1. Eigenvalues $\lambda_n=\lambda_n(\alpha)$ do not decrease as $\alpha$ increases;
  2. Eigenvalues $\lambda_n=\lambda_n(\Sigma^-)$ do not decrease as $\Sigma^-$ increases;
  3. As $\alpha\to +\infty$ the eigenvalues $\lambda_n=\lambda_n(\alpha)$ tend to $\lambda_{D,n}$, which are the eigenvalues of the Dirichlet problem (i.e. the case $\Sigma^-=\Sigma$).

Proof.

  1. As $\alpha$ increases, $Q(u)$ does not decrease for any fixed $u$, and therefore the right-hand expression in (\ref{eq-13.1.8}) cannot decrease.

  2. As we increase $\Sigma^-$ we make the space $\mathsf{H}$ only smaller, and thus we restrict the choice of $n$-dimensional subspaces $L$, so the right-hand expression in (\ref{eq-13.1.8}) cannot decrease.

  3. Therefore $\lambda_n(\alpha) \le \lambda_{D,n}$. Further, as $\alpha\to +\infty$ the expressions $\alpha \iint_{\Sigma}|u_n(\alpha)|^2\,d\sigma$ remain bounded by $\lambda_{D,n}$, and therefore $u_n(\alpha)|_\Sigma\to 0$ in $L^2(\Sigma)$. Since $\lambda_n(\alpha)$ is monotone and bounded, it tends to some $\mu_n\le \lambda_{D,n}$. Then by means of real analysis one can prove that there is a sequence $\alpha_k\to +\infty$ such that $u_n(\alpha_k)\to v_n$ with $\|\nabla v_n\|^2 \le \mu_n$, $v_n|_\Sigma=0$ and $\|v_n\|=1$. Applying (\ref{eq-13.1.8}) for the Dirichlet problem to the subspace spanned by $v_1,\ldots,v_n$, we conclude that $\mu_n \ge \lambda_{D,n}$. Therefore $\mu_n=\lambda_{D,n}$ and $\lambda_n(\alpha)\to \lambda_{D,n}$.
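Here is a numerical sketch of statements (1) and (3) of Theorem 5 in the one-dimensional model case $\Omega=(0,1)$, $\Sigma^-=\{0\}$, $\Sigma^+=\{1\}$. The discretization below (piecewise linear elements for $Q$ and a lumped mass matrix for $Q_0$, with an arbitrary mesh size and an arbitrary list of values of $\alpha$) is only an illustration, not part of the text; it shows $\lambda_1(\alpha)$ increasing towards the Dirichlet value $\pi^2$.

```python
# Sketch: dependence of the first eigenvalue on the parameter alpha for
# -u'' = lambda*u on (0,1) with u(0)=0 and the condition from (13.1.5) at x=1,
# i.e. the form Q(u) = int |u'|^2 dx + alpha*|u(1)|^2.  Mesh size and the list
# of alphas are arbitrary illustration choices.
import numpy as np

M = 400                                   # number of elements on (0, 1)
h = 1.0 / M

def lambda1(alpha):
    # stiffness matrix of int |u'|^2 dx with u(0)=0; unknowns at nodes x_1..x_M
    K = (np.diag(2.0 * np.ones(M)) +
         np.diag(-1.0 * np.ones(M - 1), 1) +
         np.diag(-1.0 * np.ones(M - 1), -1)) / h
    K[-1, -1] = 1.0 / h + alpha           # boundary node carries the term alpha*|u(1)|^2
    d = h * np.ones(M)
    d[-1] = h / 2.0                       # lumped (diagonal) mass matrix
    S = K / np.sqrt(np.outer(d, d))       # reduce K u = lambda D u to a symmetric problem
    return np.linalg.eigvalsh(S)[0]

for alpha in [0.0, 1.0, 10.0, 100.0, 1000.0]:
    print(f"alpha = {alpha:7.1f}   lambda_1 = {lambda1(alpha):.5f}")
print("Dirichlet limit pi^2 =", np.pi ** 2)
```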

Dependence on domain

Consider now only the Dirichlet boundary problem.

Theorem 6. Eigenvalues $\lambda_n=\lambda_{D,n}(\Omega)$ do not increase as $\Omega$ increases.

Proof. If $\Sigma^-=\Sigma$ we can identify $\mathsf{H}=\mathsf{H}(\Omega)$ with $\{u:\, \nabla u\in L^2(\mathbb{R}^d), \ u=0 \ \text{on } \ \mathbb{R}^d\setminus\bar{\Omega}\}$ where $\bar{\Omega}$ is the closure of $\Omega$. Then \begin{equation*} Q(u)=\int |\nabla u|^2\, dx,\qquad Q_0(u)=\int | u|^2\, dx \end{equation*} with the integrals taken over $\mathbb{R}^d$. Therefore, as $\Omega$ increases, $\mathsf{H}(\Omega)$ increases and the choice of $L$ increases, so the right-hand expression in (\ref{eq-13.1.8}) cannot increase.

Remark 3. This statement holds only for Dirichlet eigenvalues.
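In the model case of a rectangle this monotonicity can be seen explicitly: for $\Omega=(0,a)\times(0,b)$ the Dirichlet eigenvalues and eigenfunctions are \begin{equation*} \lambda_{D,mn}=\pi^2\Bigl(\frac{m^2}{a^2}+\frac{n^2}{b^2}\Bigr),\qquad u_{mn}(x,y)=\sin\frac{\pi m x}{a}\,\sin\frac{\pi n y}{b},\qquad m,n=1,2,\ldots, \end{equation*} and each of them decreases as $a$ or $b$ increases.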

Theorem 7. Let us vary $\Omega$ by moving its boundary by $h$ in the direction of $\boldsymbol{\nu}$ (where $h$ is a small function on $\Sigma$). Then for a simple eigenvalue $\lambda_n$ \begin{equation} \delta \lambda_{D,n} = \frac{1}{\|u_n \|^2}\int _\Sigma h|\partial_{\boldsymbol{\nu}} u_n|^2 \,d\sigma . \label{eq-13.1.9} \end{equation}

Proof. Consider $u_n+\delta u_n$ corresponding to the new domain. Since it must vanish on the new boundary, we conclude, modulo smaller terms, that $u_n+\delta u_n =-h \partial_{\boldsymbol{\nu}} u_n $ on the old boundary $\Sigma$; since $u_n=0$ there, we conclude that \begin{equation} \delta u_n = -h \partial_{\boldsymbol{\nu}}u_n \qquad \text{on }\ \Sigma. \label{eq-13.1.10} \end{equation} On the other hand, $(\Delta +\lambda_n +\delta \lambda_n)(u_n+\delta u_n)=0$, and then modulo smaller terms we conclude that \begin{equation} (\Delta +\lambda_n) \delta u_n = -\delta \lambda_n u_n. \label{eq-13.1.11} \end{equation} Let us multiply by $u_n$ and integrate over $\Omega$; the left-hand expression becomes \begin{multline} \int _\Omega (\Delta +\lambda_n) \delta u_n \cdot u_n \,dx =\\ \int _\Omega \delta u_n \cdot (\Delta +\lambda_n) u_n \,dx -\int _\Sigma \partial_{\boldsymbol{\nu}} \delta u_n \cdot u_n \,d\sigma +\int _\Sigma \delta u_n \cdot \partial_{\boldsymbol{\nu}} u_n \,d\sigma= \\ -\int _\Sigma h|\partial_{\boldsymbol{\nu}} u_n|^2 \,d\sigma \label{eq-13.1.12} \end{multline} where we used that $ (\Delta +\lambda_n) u_n=0$, $u_n|_\Sigma=0$ and (\ref{eq-13.1.10}).

Meanwhile the right-hand expression becomes $-\delta \lambda_n \|u_n\|^2$. Since it must be equal to the right-hand expression of (\ref{eq-13.1.12}), we arrive at (\ref{eq-13.1.9}).
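As a sanity check of (\ref{eq-13.1.9}), consider formally the one-dimensional case $\Omega=(0,a)$, where the integral over $\Sigma$ becomes a sum over the two endpoints: here $u_n=\sin(\pi n x/a)$, $\lambda_{D,n}=\pi^2n^2/a^2$, $\|u_n\|^2=a/2$ and $|\partial_{\boldsymbol{\nu}}u_n|^2=\pi^2n^2/a^2$ at both endpoints. Moving only the endpoint $x=a$ inward by $h$ (that is, by $h$ in the direction of $\boldsymbol{\nu}$), formula (\ref{eq-13.1.9}) predicts \begin{equation*} \delta\lambda_{D,n}=\frac{2}{a}\,h\,\frac{\pi^2n^2}{a^2}=\frac{2\pi^2n^2}{a^3}\,h, \end{equation*} which agrees with the direct expansion $\lambda_{D,n}(a-h)-\lambda_{D,n}(a)=\pi^2n^2\bigl((a-h)^{-2}-a^{-2}\bigr)=2\pi^2n^2 a^{-3}h+O(h^2)$.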

