
$\renewcommand{\Re}{\operatorname{Re}}$ $\renewcommand{\Im}{\operatorname{Im}}$ $\newcommand{\erf}{\operatorname{erf}}$ $\newcommand{\dag}{\dagger}$ $\newcommand{\const}{\mathrm{const}}$ $\newcommand{\arcsinh}{\operatorname{arcsinh}}$ $\newcommand{\bfnu}{\boldsymbol{\nu}}$

10.2. Functionals, extremums and variations (continued)

  1. Boundary conditions
  2. Vector and complex-valued functions
  3. Extremals under constraints. I
  4. Extremals under constraints. II
  5. Higher order functionals

Boundary conditions

Let us consider the same functional (10.1.4), but instead of the constraint (10.1.7) we impose only the less restrictive \begin{equation} \delta q (t_0)=0. \label{eq-10.2.1} \end{equation} Then the Euler-Lagrange equation (10.1.9) must still be fulfilled, but it alone does not guarantee that $\delta S=0$ for all admissible variations, since according to (10.1.6) \begin{equation} \delta S= \frac{\partial L}{\partial \dot{q}}\delta q\Bigr|_{t=t_1}, \label{eq-10.2.2} \end{equation} where we used the Euler-Lagrange equation to eliminate the integral term and (\ref{eq-10.2.1}) to eliminate the "left-end" term.

We need this to vanish as well, and since $\delta q(t_1)$ is arbitrary we need \begin{equation} \frac{\partial L}{\partial \dot{q}}\Bigr|_{t=t_1}=0. \label{eq-10.2.3} \end{equation} However, under assumption (\ref{eq-10.2.1}) it makes sense to consider a more general functional than (10.1.4): \begin{equation} S[q]= \int_I L(q,\dot{q},t)\,dt + M_1(q(t_1)), \label{eq-10.2.4} \end{equation} which includes a boundary term. One sees easily that the variation of the boundary term is $\frac{\partial M_1}{\partial q}\delta q(t_1)$, which should be added to the right-hand side of (\ref{eq-10.2.2}); the latter becomes \begin{equation} \delta S= \Bigl(\frac{\partial L}{\partial \dot{q}} + \frac{\partial M_1}{\partial q}\Bigr)\delta q(t_1); \label{eq-10.2.5} \end{equation} then (\ref{eq-10.2.3}) becomes \begin{equation} \Bigl( \frac{\partial L}{\partial \dot{q}}+ \frac{\partial M_1}{\partial q} \Bigr)\Bigr|_{t=t_1}=0. \label{eq-10.2.6} \end{equation} Thus we arrive at the following generalization of Theorem 10.1.1:

Theorem 1. Consider the functional (\ref{eq-10.2.4}) and consider as admissible all $\delta q$ satisfying (\ref{eq-10.2.1}). Then $q$ is a stationary point of $S$ if and only if it satisfies the Euler-Lagrange equation (10.1.9) and the boundary condition (\ref{eq-10.2.6}).
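
For instance, take $S[q]=\int_I \frac{1}{2}\dot{q}^2\,dt+\frac{\alpha}{2}q(t_1)^2$ with $q(t_0)=q^0$ prescribed. The Euler-Lagrange equation is $\ddot{q}=0$, and (\ref{eq-10.2.6}) becomes the Robin-type condition \begin{equation*} \bigl(\dot{q}+\alpha q\bigr)\bigr|_{t=t_1}=0, \end{equation*} which for $\alpha=0$ (no boundary term) reduces to (\ref{eq-10.2.3}).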

Example 1 (Newton's minimal resistance problem). Find the solid of revolution of given radius $H$ and length $a$ with minimal resistance. Up to a numerical coefficient, \begin{gather*} S[y]:= \int_0^H \frac{y\,dy} {1+(\frac{dx}{dy})^2}=-\int_0^{a} \frac{yy'^3\,dx }{1+y'^2} + \frac{1}{2}y(a)^2, \end{gather*} where the first term is the air resistance of the "side surface" obtained by revolving the curve $\{y=y(x), \ 0< x< a\}$ about the $x$-axis, and the second term is the resistance of the front disk of radius $y(a)$.

Figure 10.2.1

Solution. $L(y,y'):= \dfrac{yy'^3}{1+y'^2}$ does not depend on $x$ and using (10.1.12) \begin{multline*} \frac{\partial L}{\partial y'} y' - L =\frac{2yy'^3}{(1+y'^2)^2}=-2C \implies y= C\frac{(1+p^2)^2}{p^3},\qquad p:= -y',\\ \implies dy = C\Bigl(-\frac{3}{p^4}-\frac{2}{p^2}+ 1\Bigr)\,dp\implies dx =-\frac{dy}{p}= C\Bigl(\frac{3}{p^5}+\frac{2}{p^3}- \frac{1}{p}\Bigr)\,dp\\ \implies x = -C\Bigl(\frac{3}{4p^4} +\frac{1}{p^2}+ \ln p\Bigr)+C_2, \end{multline*} which defines the extremal curve parametrically, with parameter $p$.

Meanwhile $y(0)=H$, and the boundary condition (\ref{eq-10.2.6}) is equivalent to $y'(a)=-1$ (so $p$ ranges from some $p_0\in (0,1)$ to $p_1=1$).
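
Indeed, here the integrand enters with a factor $-1$ and $M_1(y)=\frac{1}{2}y^2$, so (\ref{eq-10.2.6}) reads \begin{equation*} \Bigl(-\frac{\partial L}{\partial y'}+y\Bigr)\Bigr|_{x=a}= y\Bigl(1-\frac{y'^2(3+y'^2)}{(1+y'^2)^2}\Bigr)\Bigr|_{x=a}= y\,\frac{1-y'^2}{(1+y'^2)^2}\Bigr|_{x=a}=0, \end{equation*} and since $y(a)>0$ this forces $y'^2(a)=1$, hence $y'(a)=-1$ (recall $y'<0$).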

Remark 1.

  1. If instead of (\ref{eq-10.2.1}) we impose the restriction $\delta q(t_1)=0$ (and include a boundary term $M_0(q(t_0))$ instead of $M_1(q(t_1))$), then (\ref{eq-10.2.6}) is replaced by \begin{equation} \Bigl( \frac{\partial L}{\partial \dot{q}}- \frac{\partial M_0}{\partial q} \Bigr)\Bigr|_{t=t_0}=0. \label{eq-10.2.7} \end{equation}
  2. If we have no restrictions to $\delta q(t_0)$, $\delta q(t_1)$, then we have both (\ref{eq-10.2.6}) and (\ref{eq-10.2.7}).
  3. Thus, in every case we have exactly one condition at each endpoint: either $q(t_0)=q^0$ or (\ref{eq-10.2.7}) at $t=t_0$, and either $q(t_1)=q^1$ or (\ref{eq-10.2.6}) at $t=t_1$, as summarized below.
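
In other words, for the general functional \begin{equation*} S[q]=\int_I L(q,\dot{q},t)\,dt + M_0(q(t_0)) + M_1(q(t_1)) \end{equation*} a stationary point is characterized by the Euler-Lagrange equation (10.1.9) together with one boundary condition at each endpoint: either the value of $q$ is prescribed there, or the corresponding natural condition (\ref{eq-10.2.7}) at $t=t_0$, respectively (\ref{eq-10.2.6}) at $t=t_1$, holds.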

Vector and complex-valued functions

If our function is vector-valued, $\mathbf{q}=(q_1,\ldots,q_m)$, then we can consider variations with respect to the different components and derive the corresponding Euler-Lagrange equations \begin{equation} \frac{\partial L}{\partial q_k} - \frac{d}{dt} \frac{\partial L}{\partial \dot{q}_k} =0, \qquad k=1,\ldots,m. \label{eq-10.2.8} \end{equation} We also get $m$ boundary conditions at each endpoint of the interval.
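
For instance, for the mechanical Lagrangian $L(\mathbf{q},\dot{\mathbf{q}})=\frac{1}{2}|\dot{\mathbf{q}}|^2-V(\mathbf{q})$ (unit mass), equations (\ref{eq-10.2.8}) are Newton's equations \begin{equation*} \ddot{q}_k=-\frac{\partial V}{\partial q_k},\qquad k=1,\ldots,m. \end{equation*}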

Remark 2. (Compare with Remark 10.1.4)

  1. If $L_{q_k}=0$ then the corresponding equation integrates to \begin{equation} \frac{\partial L}{\partial \dot{q}_k}=C_k. \label{eq-10.2.9} \end{equation}
  2. The following equality holds (Beltrami identity): \begin{equation} \frac{d}{dt} \left(\sum_{1\le k\le m} \frac{\partial L}{\partial \dot{q}_k}\dot{q}_k-L\right)=-\frac{\partial L}{\partial t}. \label{eq-10.2.10} \end{equation} Indeed, the left-hand expression equals \begin{equation*} \sum_{1\le k\le m} \Bigl[ \Bigl(\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_k}\Bigr)\dot{q}_k + \frac{\partial L}{\partial \dot{q}_k}\ddot{q}_k \Bigr]- \sum_{1\le k\le m}\Bigl[\frac{\partial L}{\partial q_k} \dot{q}_k + \frac{\partial L}{\partial \dot{q}_k} \ddot{q}_k\Bigr]- \frac{\partial L}{\partial t}, \end{equation*} where we used the chain rule to differentiate $L$. Some terms cancel immediately, others cancel due to the Euler-Lagrange equations, and we are left with $-\frac{\partial L}{\partial t}$.
  3. In particular, if $\frac{\partial L}{\partial t}=0$ ($L$ does not depend explicitly on $t$), then \begin{equation} \frac{d}{dt} \left(\sum_{1\le k\le m}\frac{\partial L}{\partial \dot{q}_k}\dot{q}_k-L\right)=0\implies \sum_{1\le k\le m}\frac{\partial L}{\partial \dot{q}_k}\dot{q}_k-L=C; \label{eq-10.2.11} \end{equation} a standard instance is given below.
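
For the mechanical Lagrangian $L=\frac{1}{2}|\dot{\mathbf{q}}|^2-V(\mathbf{q})$ considered above, (\ref{eq-10.2.11}) expresses conservation of energy: \begin{equation*} \sum_{1\le k\le m}\frac{\partial L}{\partial \dot{q}_k}\dot{q}_k-L= \frac{1}{2}|\dot{\mathbf{q}}|^2+V(\mathbf{q})=\const. \end{equation*}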

Remark 3. Complex-valued functions $q$ can be considered as vector-valued functions $\mathbf{q}=\left(\begin{smallmatrix}\Re q\\ \Im q\end{smallmatrix}\right)$. Similarly we can treat vector-valued functions with complex components: we just double $m$.
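
For example, for a complex-valued $q$ and $L=|\dot{q}|^2-\omega^2|q|^2$, writing $q=q_1+iq_2$ gives $L=\dot{q}_1^2+\dot{q}_2^2-\omega^2(q_1^2+q_2^2)$, and the two Euler-Lagrange equations (\ref{eq-10.2.8}) combine back into the single complex equation $\ddot{q}+\omega^2 q=0$.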

Extremals under constraints. I

We can consider extremals of functionals under the constraints \begin{equation} \Psi_1[q]=\Psi_2[q]=\ldots=\Psi_s[q]=0, \label{eq-10.2.12} \end{equation} where $\Psi_j$ are other functionals. This is done in the same way as for extrema of functions of several variables: instead of $S[q]$ we consider the Lagrange functional \begin{equation} S^*[q]:= S[q]-\lambda_1\Psi_1[q]-\lambda_2\Psi_2[q]-\ldots-\lambda_s \Psi_s[q] \label{eq-10.2.13} \end{equation} and look for its extremals without constraints; the factors $\lambda_1,\ldots,\lambda_s$ are Lagrange multipliers.

Remark 4. This works provided $\delta \Psi_1,\ldots,\delta\Psi_s$ are linearly independent, which means that if $\alpha_1\delta\Psi_1[q]+\alpha_2\delta\Psi_2[q]+\ldots +\alpha_s \Psi_s[q]\cdot 0=\alpha_1\delta\Psi_1[q]+\alpha_2\delta\Psi_2[q]+\ldots +\alpha_s \delta\Psi_s[q]=0$ for all admissible $\delta q$, then $\alpha_1=\ldots=\alpha_s=0$.

Example 2 (Catenary). Find the catenary: the curve that an idealized chain (infinitely flexible but unstretchable) takes when hanging between two anchors.

Solution. We need to minimize the potential energy (written up to the constant linear density) \begin{align*} E[y]&=g \int_0^a y\sqrt{1+y'^2}\,dx \end{align*} under the constraint \begin{align*} \ell [y]&= \int_0^a \sqrt{1+y'^2}\,dx =l. \end{align*}

Figure 10.2.2
Then $L^*(y,y'):= (y-\lambda) \sqrt{1+y'^2}$ does not depend on $x$ and using (10.1.12)

\begin{multline*} \frac{\partial L^*}{\partial y'} y' - L^* =-\frac {y-\lambda}{\sqrt{1+y'^2}}=-C \implies Cy'= \sqrt{(y-\lambda)^2 -C^2 }\\ \implies dx= \frac{C\,dy}{\sqrt{(y-\lambda)^2 -C^2 }} \implies y-\lambda = C \cosh ((x-c)/C) \end{multline*} with constants $C$, $c$, $\lambda$ determined by the two anchor points and the length $l$.
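
One can check this against the first integral directly: for $y-\lambda=C\cosh((x-c)/C)$ we have $y'=\sinh((x-c)/C)$ and $\sqrt{1+y'^2}=\cosh((x-c)/C)$, so $\dfrac{y-\lambda}{\sqrt{1+y'^2}}=C$ indeed.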

Extremals under constraints. II

However, constraints can differ from those described in the previous subsection: they may be given not by functionals but by functions. In this case we have a continuum of conditions, and the Lagrange multipliers become functions as well. Let us illustrate this with an example.

Example 3. Consider our standard functional under the constraints \begin{equation} F_j(\mathbf{q}(t),t)=0,\qquad j=1,\ldots, s. \label{eq-10.2.14} \end{equation} Then we consider the functional \begin{equation*} S^* [\mathbf{q}]=\int_I \bigl( L(\mathbf{q},\dot{\mathbf{q}},t)- \sum_{1\le j\le s} \lambda_j (t)F_j(\mathbf{q}(t),t) \bigr) \,dt. \end{equation*} Then the Euler-Lagrange equations are \begin{equation} \frac{\partial L}{\partial q_k}- \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_k} - \sum_{1\le j \le s} \lambda_j(t) \frac{\partial F_j}{\partial q_k}=0 \label{eq-10.2.15} \end{equation} with unknown functions $\lambda_j(t)$.
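
For instance, for a free particle confined to the unit sphere we take $L=\frac{1}{2}|\dot{\mathbf{q}}|^2$ with the single constraint $F(\mathbf{q})=|\mathbf{q}|^2-1=0$; then (\ref{eq-10.2.15}) gives \begin{equation*} -\ddot{q}_k-2\lambda(t)\,q_k=0, \qquad\text{i.e.}\qquad \ddot{\mathbf{q}}=-2\lambda(t)\,\mathbf{q}, \end{equation*} so the multiplier $\lambda(t)$ represents the constraint (centripetal) force; it is determined by requiring that $|\mathbf{q}(t)|^2=1$ for all $t$.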

Remark 5. Obviously we can combine different types of constraints and different types of Lagrange multipliers.

Higher order functionals

We can also include higher-order derivatives in the functional. For simplicity we consider scalar functions and only second-order derivatives: \begin{equation} S[q]= \int_I L(q,\dot{q}, \ddot{q},t)\,dt. \label{eq-10.2.16} \end{equation} Then the same arguments as before (but the term $\frac{\partial L}{\partial\ddot{q}}\delta \ddot{q}$ should be integrated by parts twice) lead us to the Euler-Lagrange equation \begin{equation} \frac{\delta S}{\delta q}:= \frac{\partial L}{\partial q} - \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) + \frac{d^2}{dt^2} \left(\frac{\partial L}{\partial \ddot{q}}\right)=0, \label{eq-10.2.17} \end{equation} which is a fourth-order ODE.
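
For example, for $L=\frac{1}{2}\ddot{q}^2-f(t)\,q$ (the bending energy of a beam of unit stiffness minus the work of a load $f$, with $t$ playing the role of the coordinate along the beam), (\ref{eq-10.2.17}) gives \begin{equation*} -f+\frac{d^2}{dt^2}\ddot{q}=0, \qquad\text{i.e.}\qquad q^{(4)}=f. \end{equation*}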

But what about boundary conditions? Now we need two of them at each endpoint. Consider, for example, $t=t_1$. If the functional includes a term $M_1 (q(t_1), \dot{q}(t_1))$, then $\delta S$ contains the boundary terms \begin{equation} \Bigl[ \Bigl(-\frac{d}{dt}\frac{\partial L}{\partial \ddot{q}}+ \frac{\partial L}{\partial \dot{q}} + \frac{\partial M_1}{\partial q}\Bigr) \delta q+ \Bigl(\frac{\partial L}{\partial \ddot{q}} + \frac{\partial M_1}{\partial \dot{q}}\Bigr) \delta \dot{q} \Bigr] \Bigr|_{t=t_1}. \label{eq-10.2.18} \end{equation}

Obviously we have both conditions if we look for a solution satisfying \begin{equation} q(t_1)=a, \qquad \dot{q}(t_1)=b; \label{eq-10.2.19} \end{equation} then $\delta q(t_1)=\delta\dot{q}(t_1)=0$ and the terms (\ref{eq-10.2.18}) vanish automatically. Otherwise we need to examine $\delta S$. If we impose only the first restriction, then $\delta q(t_1)=0$ but $\delta \dot{q}(t_1)$ is arbitrary, and we get the second boundary condition \begin{equation} \Bigl(\frac{\partial L}{\partial \ddot{q}} + \frac{\partial M_1}{\partial \dot{q}}\Bigr)\Bigr|_{t=t_1}=0. \label{eq-10.2.20} \end{equation} If there are no restrictions, $\delta q(t_1)$ is also arbitrary and in addition to (\ref{eq-10.2.20}) we get \begin{equation} \Bigl(-\frac{d}{dt}\frac{\partial L}{\partial \ddot{q}}+ \frac{\partial L}{\partial \dot{q}} + \frac{\partial M_1}{\partial q}\Bigr) \Bigr|_{t=t_1}=0. \label{eq-10.2.21} \end{equation} We can also consider the more general case of a single restriction leading to $\bigl(\alpha \delta q+\beta\delta \dot{q}\bigr)\bigr|_{t=t_1}=0$ with $(\alpha,\beta)\ne (0,0)$, and recover a second condition.
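
Continuing the beam example: with $M_1=0$ and no restrictions at $t=t_1$, conditions (\ref{eq-10.2.20}) and (\ref{eq-10.2.21}) become $\ddot{q}(t_1)=0$ and $\dddot{q}(t_1)=0$ (vanishing bending moment and shear force at a free end), while prescribing $q(t_1)$ and $\dot{q}(t_1)$ as in (\ref{eq-10.2.19}) corresponds to a clamped end.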

