\newcommand{\curl}{\operatorname{curl}} \newcommand{\div}{\operatorname{div}} \newcommand{\grad}{\operatorname{grad}} \newcommand{\R}{\mathbb R } \newcommand{\N}{\mathbb N } \newcommand{\Z}{\mathbb Z } \newcommand{\bfa}{\mathbf a} \newcommand{\bfb}{\mathbf b} \newcommand{\bfc}{\mathbf c} \newcommand{\bfe}{\mathbf e} \newcommand{\bft}{\mathbf t} \newcommand{\bfn}{\mathbf n} \newcommand{\bff}{\mathbf f} \newcommand{\bfF}{\mathbf F} \newcommand{\bfk}{\mathbf k} \newcommand{\bfg}{\mathbf g} \newcommand{\bfG}{\mathbf G} \newcommand{\bfh}{\mathbf h} \newcommand{\bfu}{\mathbf u} \newcommand{\bfv}{\mathbf v} \newcommand{\bfx}{\mathbf x} \newcommand{\bfp}{\mathbf p} \newcommand{\bfq}{\mathbf q} \newcommand{\bfy}{\mathbf y} \newcommand{\ep}{\varepsilon}
Given a function f or a vector field \bfF, we can easily compute
\grad f or \curl \bfF.
Here we ask a sort of inverse
question.
Given a vector field \bfG, can we determine whether there exists a function f such that \bfG = \grad f, or a vector field \bfF such that \bfG = \curl \bfF, and if so, how can we find them?
We will discuss gradients first.
Theorem 1. Let U be an open subset of \R^n for n\ge 2, and let \bfG:U\to \R^n be a continuous vector field. Then the following are equivalent:
(i) There exists a function f:U\to \R of class C^1 such that \bfG = \nabla f.
(ii) \int_C \bfG\cdot d\bfx = 0 for any closed piecewise smooth oriented curve C in U.
(iii) \int_{C_1} \bfG\cdot d\bfx = \int_{C_2} \bfG\cdot d\bfx for any two piecewise smooth oriented curves C_1, C_2 in U that both start at the same point \bfp\in U and end at the same point \bfq\in U.
A continuous vector field \bfG:U\to \R^n is said to be conservative if any one of these conditions is satisfied (and hence all three of them are satisfied).
The starting point of the proof is the fact that if \bfG = \nabla f for some f of class C^1, then for any oriented curve C that starts at a point \bfp and ends at a point \bfq, \begin{equation}\label{FFF} \int_C \bfG\cdot d\bfx = \int_C \nabla f\cdot d\bfx = f(\bfq) - f(\bfp). \end{equation} (We saw this in Section 5.1, where we called it the Fundamental Theorem of Line Integrals.)
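To recall why \eqref{FFF} holds: if C is parametrized by a C^1 map \bfg:[a,b]\to U with \bfg(a)=\bfp and \bfg(b)=\bfq, then the chain rule and the Fundamental Theorem of Calculus give \begin{aligned} \int_C \nabla f\cdot d\bfx = \int_a^b \nabla f(\bfg(t))\cdot \bfg'(t)\, dt = \int_a^b \frac{d}{dt} f(\bfg(t))\, dt = f(\bfg(b)) - f(\bfg(a)) = f(\bfq) - f(\bfp), \end{aligned} and the piecewise smooth case follows by adding up the contributions of the smooth pieces.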
(i) \Rightarrow (ii). Assume that \bfG = \nabla f for some C^1 function f. Then since a closed curve is just a curve that starts and ends at the same point \bfp, it follows from \eqref{FFF} that the integral of \bfG = \nabla f over a closed curve equals f(\bfp)-f(\bfp) = 0.
(ii) \Rightarrow (iii). (sketch) Let C_1, C_2 be two piecewise smooth curves in U that both start at the same point \bfp\in U and end at the same point \bfq\in U.
Given these, we may define a piecewise smooth closed curve C to be the curve formed by first traversing C_1 from \bfp to \bfq, and then traversing C_2 with its orientation reversed, from \bfq back to \bfp, thereby starting and ending at \bfp. Then \int_C \bfG\cdot d\bfx = \int_{C_1} \bfG\cdot d\bfx - \int_{C_2} \bfG\cdot d\bfx, and it follows from this that if (ii) holds, then \int_{C_1} \bfG\cdot d\bfx - \int_{C_2} \bfG\cdot d\bfx =0 for all C_1, C_2 as above, i.e. (iii) holds. Filling in some of the details we have glossed over is an exercise.
(iii) \Rightarrow (i). This is the hardest and most interesting part of the theorem. Suppose that \bfG is a vector field that satisfies condition (iii). We would like to find a function f such that \bfG = \nabla f; if such a function exists, it must satisfy \eqref{FFF}. Thus we can try to use formula \eqref{FFF} to reconstruct f (if it exists) from \bfG. That is, we can fix some \bfp\in U, and we can look for f such that \bfG=\nabla f and f(\bfp)=0. According to \eqref{FFF}, such a function f must satisfy \begin{equation} f(\bfq) = \int_{C_{\bfp,\bfq}} \bfG\cdot d\bfx\qquad\mbox{ for any curve }C_{\bfp,\bfq}\mbox{ starting at }\bfp\mbox{ and ending at }\bfq. \label{FFFbis}\end{equation} Thus, given \bfG satisfying condition (iii), we use equation \eqref{FFFbis} to define f:U\to \R, and then verify that \nabla f = \bfG. Condition (iii) implies that the definition of f makes sense (that is, is independent of the choice of path C_{\bfp, \bfq} connecting \bfp to \bfq). Thus the proof of the theorem is completed by the following argument.
Let f be the function defined in \eqref{FFFbis}. We first claim that for any \bfq \in U, any vector \bfv, and any h\in \R such that the line segment from \bfq to \bfq+h\bfv is contained in U, \begin{equation}\label{ffbb} f(\bfq + h\bfv) = f(\bfq) + \int_0^h \bfG(\bfq + t\bfv)\cdot \bfv \, dt. \end{equation} To see that this is true, fix some curve C_{\bfp, \bfq} that starts at \bfp and ends at \bfq. Let \ell_{\bfq, \bfq+h\bfv} be the line segment that starts at \bfq and ends at \bfq+h\bfv, and let C_{\bfp, \bfq+h\bfv} be the curve obtained by first traversing C_{\bfp, \bfq} and then the segment \ell_{\bfq, \bfq+h\bfv}.
Thus C_{\bfp, \bfq+h\bfv} is a piecewise smooth curve that starts at \bfp and ends at \bfq+h\bfv. It follows from \eqref{FFFbis} that \begin{aligned} f(\bfq+h\bfv) = \int_{C_{\bfp, \bfq+h\bfv}} \bfG\cdot d\bfx &= \int_{C_{\bfp,\bfq}} \bfG\cdot d\bfx + \int_{\ell_{\bfq, \bfq+h\bfv}} \bfG\cdot d\bfx\\ &= f(\bfq) + \int_{\ell_{\bfq, \bfq+h\bfv}} \bfG\cdot d\bfx\\ \end{aligned} If we express the line integral over \ell_{\bfq, \bfq+h\bfv} in terms of the parametrization \bfg(t) = \bfq+t\bfv, 0\le t \le h, this reduces to \eqref{ffbb}.
Next, for any j\in \{1,\ldots, n\}, if (as usual) {\bf e}_j denotes the unit vector in the jth coordinate direction, then it follows from \eqref{ffbb} that \begin{aligned} \frac{\partial f}{\partial x_j}(\bfq) = \lim_{h\to 0}\frac{f(\bfq+h {\bf e}_j) - f(\bfq)}h &= \lim_{h\to 0} \frac 1 h \int_0^h \bfG(\bfq+t {\bf e}_j)\cdot {\bf e}_j\, dt \\ &= \lim_{h\to 0} \frac 1 h \int_0^h G_j(\bfq+t {\bf e}_j)\, dt . \end{aligned} It is then an exercise to prove (using our assumption that \bfG is continuous) that the limit exists and equals G_j(\bfq). Since j is arbitrary, this proves that \nabla f(\bfq) = \bfG(\bfq). \quad\Box
One drawback of Theorem 1 is that, given a vector field \bfG, it might be hard to check whether it satisfies condition (ii) or (iii). In order to do this, one would need to evaluate line integrals of \bfG over every possible closed curve (for (ii)) or pair of curves that start and end at the same point (for (iii)). However, in 3 dimensions, and if \bfG is C^1, there is sometimes a much easier way to check whether it is conservative:
Theorem 2. If \bfG is a conservative vector field of class C^1
on an open set U\subset \R^3, then \curl \bfG = \bf0.
If U is convex, then the converse is true: if \bfG:U\to \R^3 is a C^1 vector field and \curl \bfG = \bf0, then \bfG is conservative.
However, on some non-convex sets, there exist
non-conservative vector fields \bfG that
satisfy \curl \bfG = \bf 0.
(This is a special case of a much more general theorem that we will neither state nor discuss.)
We already know that if \bfG = \grad f, then \curl \bfG = \curl \grad f = \bf 0.
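Indeed, writing out the components and using the equality of mixed partial derivatives (which applies here, since \grad f = \bfG is C^1 and hence f is C^2), \curl \grad f = (\partial_2\partial_3 f - \partial_3\partial_2 f, \ \partial_3\partial_1 f - \partial_1\partial_3 f, \ \partial_1\partial_2 f - \partial_2\partial_1 f) = {\bf 0}.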
For the converse, assume that \bfG is a C^1 vector field on a convex set U\subset \R^3 such that \curl \bfG = {\bf 0}.
Fix \bfp\in U, and for \bfq\in U, let \ell_{\bfp, \bfq} denote the line segment starting at \bfp and ending at \bfq. Since U is convex, this line segment is entirely contained in U.
We now define f(\bfq) := \int_{\ell_{\bfp, \bfq}} \bfG\cdot d\bfx. This is clearly well-defined, since we have specified exactly which path we follow to get from \bfp to \bfq, for every \bfq.
In order to mimic the proof of Theorem 1, all we need is to be able to check that \begin{equation}\label{ffbbb} f(\bfq + h\bfv) = f(\bfq) + \int_0^h \bfG(\bfq + t\bfv)\cdot \bfv \, dt \end{equation} holds. If we know this, then every other argument in the earlier proof can be repeated with no change.
To prove it, let's assume that \bfp, \bfq and \bfq+h\bfv are not collinear (in the collinear case one can give a different, easier argument). Then they form the vertices of a triangle. Let's call the triangle S. For any choice of orientation (that is, of the direction \bfn of the unit normal), \iint_{S }(\curl \bfG)\cdot \bfn \, dA = 0 since \curl \bfG = \bf 0 by assumption. Thus Stokes' Theorem implies that \int_{\partial S} \bfG\cdot d\bfx = 0 for any choice of the orientation of \partial S. Note however that \partial S consists of the three line segments connecting \bfp to \bfq to \bfq+h\bfv, and back to \bfp. If it is oriented in that order (\bfp to \bfq to \bfq+h\bfv to \bfp), then \begin{aligned} 0 = \int_{\partial S} \bfG\cdot d\bfx \ &= \ \int_{\ell_{\bfp,\bfq}}\bfG\cdot d\bfx + \int_{\ell_{\bfq,\bfq+h\bfv}}\bfG\cdot d\bfx + \int_{\ell_{\bfq+h\bfv,\bfp}}\bfG\cdot d\bfx \\ &= f(\bfq) + \int_0^h \bfG(\bfq + t\bfv)\cdot \bfv \, dt -f(\bfq+h\bfv). \end{aligned} We obtain \eqref{ffbbb} by rearranging this. \quad \Box
An example of a non-conservative vector field \bfG on a non-convex set that satisfies \curl \bfG = \bf 0 can be found in the exercises.
Now suppose \bfG is a continuous vector field on an open set U\subset \R^n, and that we somehow know that \bfG is conservative. How can we find f such that \nabla f = \bfG?
One method is simply to carry out a concrete version of the abstract proof, sketched above, that uses formula \eqref{FFFbis} to demonstrate the existence of f. This works particularly well if U is a rectangle. Then given \bfp = (a,b,c) and \bfq = (x,y,z), we can always join \bfp to \bfq as follows: first move parallel to the x-axis from (a,b,c) to (x,b,c), then parallel to the y-axis from (x,b,c) to (x,y,c), and finally parallel to the z-axis from (x,y,c) to (x,y,z).
Let C_{\bfp, \bfq} be the piecewise linear curve obtained in this way. Then \int_{C_{\bfp, \bfq} }\bfG\cdot d\bfx = \int_a^x G_1(t,b,c)\, dt + \int_b^y G_2(x,t,c)\, dt + \int_c^z G_3(x,y,t)\, dt. So one way to implement formula \eqref{FFFbis} is: fix (a,b,c), and define \begin{equation}\label{inp} \boxed{ f(x,y,z) := \int_a^x G_1(t,b,c)\, dt + \int_b^y G_2(x,t,c)\, dt + \int_c^z G_3(x,y,t)\, dt.} \end{equation} (Note that we could also traverse the coordinate directions in a different order, if we prefer.)
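For instance, traversing the coordinate directions in the order z, then y, then x corresponds to the path (a,b,c)\to (a,b,z)\to (a,y,z)\to (x,y,z), and leads to the equally valid formula f(x,y,z) := \int_c^z G_3(a,b,t)\, dt + \int_b^y G_2(a,t,z)\, dt + \int_a^x G_1(t,y,z)\, dt.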
The above theoretical considerations guarantee that if \bfG is conservative, then f defined in this way must satisfy \nabla f = \bfG.
Example 1. Let \bfG:\R^3\to \R^3 be defined by \bfG(x,y,z) = (-y \sin xy , -x \sin xy + z\cos yz, y\cos yz). One can check that \curl \bfG = \bf0. Thus \bfG is conservative. Let us try to find f such that \nabla f = \bfG. We will simply use the above formula, with (a,b,c) = (0,0,0). (We could choose any point, but (0,0,0) is convenient.)
Then \int_a^x G_1(t,b,c)\, dt = \int_0^x 0\, dt = 0, \left. \int_b^y G_2(x,t,c)\, dt = \int_0^y(-x\sin xt\, + 0)dt = \cos xt \right|_{t=0}^{t=y} = \cos xy - 1, and \left. \int_c^zG_3(x,y,t)\, dt = \int_0^z y\cos yt \, dt = \sin yt\right|_{t=0}^z = \sin yz. By adding the contributions from these three terms, we conclude that f(x,y,z) = \cos xy + \sin yz -1 \mbox{ satisfies }\nabla f = \bfG. It is easy to check that this is indeed the case. (The constant -1 appears because, in choosing \bfp = (0,0,0), we implicitly arranged that f(0,0,0) = 0, and -1 is the constant that makes this the case. If we had chosen a different point \bfp = (a,b,c), then chances are that we would have gotten a different constant.)
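As a quick check, differentiating f(x,y,z) = \cos xy + \sin yz - 1 gives \nabla f = (-y\sin xy, \ -x\sin xy + z\cos yz, \ y\cos yz) = \bfG(x,y,z), as expected.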
Remark. Note that in writing down a concrete implementation of \eqref{FFFbis}, we could have chosen any curve from (a,b,c) to (x,y,z), such as (for example) a straight line. If the domain of \bfG is a convex set containing the origin, then we can take (a,b,c) = (0,0,0), and the straight line to (x,y,z) is parametrized by \bfg(t) = (tx, ty, tz) for 0\le t\le 1. This leads to the general formula f(x,y,z) = \int_0^1 \bfG(tx, ty, tz)\cdot (x,y,z)\, dt. For example, for \bfG(x,y,z) = (-y \sin xy , -x \sin xy + z\cos yz, y\cos yz) as above, this becomes (after some computation) f(x,y,z) = \int_0^1 \left(- 2txy\sin(t^2 xy) + 2tyz\cos(t^2yz)\right)\, dt. This is easily evaluated by considering the two halves separately, and making the substitutions u = t^2xy in the first half and u =t^2yz in the second half.
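Carrying out these substitutions, with du = 2txy\, dt in the first half and du = 2tyz\, dt in the second, we find f(x,y,z) = \int_0^{xy} (-\sin u)\, du + \int_0^{yz} \cos u\, du = (\cos xy - 1) + \sin yz, which is the same potential that we found in Example 1.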
There is a whole theory about vector fields \bfG:U\to \R^3 (for U an open subset of \R^3) with the property that \bfG = \curl \bfF for some other vector field \bfF of class C^1. It is very much parallel to the theory of gradient (= conservative) vector fields. However, we will consider it in less detail.
Some main facts are summarized in the following:
Theorem 3.
1. If \bfG:U\to \R^3 is a vector field of class C^1 and \bfG = \curl \bfF for some vector field \bfF:U\to \R^3 of class C^2, then \operatorname{div}\bfG = 0.
2. Suppose \bfG is a C^1 vector field in an open set U\subset \R^3 such that \operatorname{div}\bfG = 0.
a. If U is convex, then there exists a vector field \bfF such that
\curl \bfF = \bfG.
b. However, if U is not convex, it may be the case that no such vector field exists.
About the proof:
1. We already know that \div\curl \bfF = 0 for all \bfF.
2b. Consider the vector field \bfG(x,y,z) = (\frac {x}{r^3},\frac {y}{r^3}, \frac {z}{r^3}) ,\qquad\mbox{ where }\ \quad r = \sqrt {x^2+y^2+z^2}. This is an example showing that on a nonconvex set U (note that the domain of \bfG is U = \R^3\setminus\{ {\bf 0}\}), there can exist vector fields with zero divergence that are not curls. Indeed, you can check that \div \bfG = 0 for this \bfG. You can also check that if S := \{(x,y,z)\in \R^3 : x^2+y^2+z^2=1\}, oriented with the unit normal pointing outward, then \iint_S \bfG\cdot \bfn \, dA = 4\pi, whereas if \bfG=\curl \bfF in U, then we would have \iint_S \bfG\cdot \bfn \, dA =\iint_S \curl \bfF\cdot \bfn \, dA = 0. (See Section 5.6.)
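To check the first of these claims, note that \partial_1 r = x/r, so \partial_1\left(\frac x{r^3}\right) = \frac 1{r^3} - \frac{3x^2}{r^5}, and adding the analogous expressions for the other two components gives \div \bfG = \frac 3{r^3} - \frac{3(x^2+y^2+z^2)}{r^5} = 0. For the second claim, note that on the unit sphere S we have r = 1 and \bfn = (x,y,z) = \bfG, so \bfG\cdot\bfn = 1 on S and the flux is simply the surface area of S, namely 4\pi.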
2a. The way the proof works is illustrated below in a concrete example. After that we will discuss the abstract proof.
Example 2. Let \bfG(x,y,z) = (xe^{-x^2z^2} -6x, 5y+2z, z - ze^{-x^2z^2}) . One can check that \operatorname{div}\bfG=0, and the domain of \bfG is all of \R^3, which is convex. So it must be possible to write \bfG as the curl of some vector field \bfF.
It turns out (see Folland for a discussion) that in this situation, it is always possible to find \bfF such that one of its components is zero everywhere. In this example, it turns out to be easiest to look for \bfF of the form \bfF = (F_1, 0, F_3). (This can be discovered by trial and error.) Then the equation that we are trying to solve, \curl \bfF = \bfG, can be written as three equations: \begin{aligned} \partial_2 F_3 &= G_1 =xe^{-x^2z^2} -6x \\ \partial_3 F_1-\partial_1F_3 &= G_2 = 5y+2z\\ -\partial_2 F_1 &=G_3 = z - ze^{-x^2z^2}. \end{aligned} First we consider the first and third equations. For every fixed x,z, we may take the antiderivative with respect to the y variable to obtain \begin{aligned} F_3(x,y,z) &= \int(xe^{-x^2z^2} -6x)dy = y(xe^{-x^2z^2} -6x) + \phi_3(x,z), \\ F_1(x,y,z) &= -\int(z - ze^{-x^2z^2})dy = y(ze^{-x^2z^2}-z) + \phi_1(x,z), \end{aligned} where \phi_1(x,z) and \phi_3(x,z) are constants of integration, which are written in that way because they may depend on x and z. Substituting these into the second equation and simplifying, we obtain \begin{multline} 5y+2z = G_2 = \partial_3 F_1-\partial_1F_3 = \partial_3 \phi_1-\partial_1\phi_3 \\ + y(e^{-x^2z^2}-1) -2y x^2z^2 e^{-x^2z^2} - \left[ y(e^{-x^2z^2}- 6) - 2yx^2z^2 e^{-x^2z^2} \right]. \nonumber \end{multline} After a lot of cancellation, this reduces to \partial_3 \phi_1-\partial_1\phi_3 = 2z and we can see by inspection that this is solved for example by \phi_1(x,z) = 0, \phi_3(x,z) = -2xz. We conclude that \curl \bfF = \bfG for \bfF (x,y,z) = (yze^{-x^2z^2}-yz , \ 0 , \ yxe^{-x^2z^2} -6xy -2xz ).
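As a check, the first component of \curl \bfF for this \bfF is \partial_2 F_3 - \partial_3 F_2 = \partial_2( yxe^{-x^2z^2} -6xy -2xz) - 0 = xe^{-x^2z^2} - 6x = G_1, and the other two components can be verified in the same way.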
We will just follow the same procedure as in Example 2. We will also cheat a little by proving the theorem not for an arbitrary convex set, but instead under the assumption that U is a rectangle, e.g. (a_1,b_1)\times (a_2,b_2)\times (a_3,b_3).
We assume that \bfG is C^1 in U and that \div \bfG = 0. We want to find \bfF solving the system of equations \begin{aligned} \partial_2F_3 - \partial_3F_2&= G_1 \\ \partial_3F_1 - \partial_1F_3&= G_2 \\ \partial_1F_2 - \partial_2F_1&= G_3 . \end{aligned} We will look for \bfF such that F_3 = 0. Then the first two equations reduce to \begin{aligned} - \partial_3F_2&= G_1 \\ \partial_3F_1 &= G_2 . \end{aligned} We fix some a\in (a_3, b_3) and integrate both sides with respect to the third variable (calling it t instead of z for purposes of integration), getting constants of integration that depend on x and y: \begin{aligned} F_2(x,y,z)&= -\int_a^z G_1(x,y,t)\, dt +\phi(x,y)\\ F_1(x,y,z)&= \int_a^z G_2(x,y,t)\,dt +\psi(x,y). \end{aligned} We now substitute these into the third equation, differentiate under the integral sign, then simplify using the fact that \div \bfG = 0: \begin{aligned} G_3(x,y,z) &= \partial_1F_2 - \partial_2F_1 \\ &= \partial_1 \phi - \partial_2\psi - \int_a^z (\partial_1 G_1 +\partial_2 G_2)(x,y,t)\, dt\\ &= \partial_1 \phi - \partial_2\psi + \int_a^z \partial_3 G_3(x,y,t) \ dt\\ &= \partial_1 \phi - \partial_2\psi + G_3(x,y,z) - G_3(x,y,a). \end{aligned} Thus the equation reduces to \partial_1 \phi(x,y) - \partial_2\psi(x,y) = G_3(x,y,a), and this can be solved, for example, by fixing a'\in (a_1,b_1) and setting \psi(x,y)=0, \qquad \phi(x,y) = \int_{a'}^x G_3(t,y,a)\, dt\ .
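To summarize, under these assumptions the construction yields the explicit vector field \bfF(x,y,z) = \left( \int_a^z G_2(x,y,t)\, dt, \ \ -\int_a^z G_1(x,y,t)\, dt + \int_{a'}^x G_3(t,y,a)\, dt, \ \ 0\right), which satisfies \curl \bfF = \bfG.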
Example 3. This is optional but interesting.
Assume that R\subset \R^2 is a regular region with piecewise smooth boundary, contained in [-a,a]\times [-a,a] for some positive number a<\pi/2, and let S := \{ (x,y, \phi(x,y)) : (x,y)\in R\} \quad\mbox{ for }\phi(x,y) = \ln\left(\frac {\cos x}{\cos y}\right).
Below, an image of a portion of the graph of \phi. For a suitable choice of the region R, this is what S would look like.
Also define \begin{aligned} \bfG(x,y,z) = \frac{(-\partial_x \phi, -\partial_y \phi , 1)}{\sqrt{1+|\nabla \phi|^2}}& = \frac{ (\tan x ,- \tan y, 1)}{ \sqrt{1+\tan^2 x +\tan^2 y}}. \end{aligned} It is a fact that you can check, if you are unusually fond of differentiation, that \nabla \cdot \bfG = 0, and hence that there exists some \bfF such that \bfG = \nabla\times \bfF. It follows by Stokes' Theorem that \iint_{S}\bfG\cdot\bfn \, dA = \iint_{S'}\bfG\cdot\bfn \, dA for any other surface S' with the same boundary as S.
It is easy to see that \bfG \cdot \bfn = 1 on S
(in fact \bfG exactly equals \bfn on S).
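Indeed, the upward unit normal to the graph z = \phi(x,y) is \bfn = \frac{(-\partial_x \phi, -\partial_y \phi, 1)}{\sqrt{1+|\nabla \phi|^2}}, which is exactly the formula defining \bfG, evaluated at points of S; hence \bfG = \bfn on S and \bfG\cdot\bfn = |\bfn|^2 = 1 there.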
As a result, for any surface S' as above,
\mbox{area}(S)
\ = \
\iint_S 1\, dA
\ = \
\iint_S \bfG\cdot \bfn\, dA
\ = \
\iint_{S'} \bfG\cdot \bfn\, dA .
Also, it is pretty clear that |\bfG|=1 everywhere, and hence
that |\bfG \cdot \bfn| \le |\bfG|\, |\bfn| = 1 everywhere on S'.
Therefore
\iint_{S'} \bfG\cdot \bfn\, dA \
\le \ \iint_{S'} |\bfG\cdot \bfn|\, dA
\ \le \ \iint_{S'}1 \, dA = \mbox{area}(S').
Putting these together, we conclude that
\mbox{area}(S) \le \mbox{area}(S')\mbox{ for any surface }S'
\mbox{ such that }\partial S = \partial S'.
This shows that the particular surface S pictured above is an example
of what is called a minimal surface: a surface that has the
smallest possible area, among all surfaces with a given boundary.
Given a vector field \bfG:\R^3\to \R^3, determine whether there exists any function f such that \nabla f = \bfG, and if so, find one. For example:
\bfG(x,y,z) = ( \frac {e^z} {1+y^2}, - \frac {2xy e^z}{(1+y^2)^2}, \frac {xe^z}{1+y^2} +\sin z) .
\bfG(x,y,z) = (yz, xz,xy).
\bfG(x,y,z) = (x\sin y \cos z , \frac 12 x^2 \cos y \cos z , -\frac 12 x^2 \sin y \sin z).
\bfG(x,y,z) = (\frac 12, \frac{y^2-xz}{y^2z} , \frac {-y}{z^2}) .
\bfG(x,y,z) = (\frac{\cos z}x , \frac {\cos z}y , -\sin z \ln(xyz) + \frac {\cos z}{z} ), for x,y,z all positive.
\bfG is a vector field of the form \bfG(x,y,z) = (f(x), g(y), h(z)), where f,g,h are all continuous functions of a single variable.
Given a vector field \bfG:\R^3\to \R^3, determine whether there exist any vector fields \bfF such that \nabla \times \bfF = \bfG, and if so find one. For example:
\bfG(x,y,z) = ( x,y , z ).
\bfG(x,y,z) = ( -y,x, 0).
\bfG(x,y,z) = ( y, \frac 1 z \cos yz, \frac 1 y \cos yx).
\bfG(x,y,z) = ( yz, xz, xy).
\bfG is a vector field of the form \bfG(x,y,z) = (f(y,z), g(x,z), h(x,y)), where f,g,h are all continuous functions of two variables.
For U := \{(x,y,z)\in \R^3 : x^2+y^2>0\}, define \bfG:U\to \R^3 by \bfG(x,y,z) = (\frac {-y}{x^2+y^2}, \frac x{x^2+y^2}, 0). Show that \curl \bfG = \bf 0 on U, but that \bfG is not conservative.
Prove that if g is a continuous function defined on an open set containing a point \bfq, then for any unit vector \bfe, \lim_{h\to 0} \frac 1 h \int_0^h g(\bfq+t \bfe)\, dt = g(\bfq). (This point comes up in the proofs of Theorems 1 and 2.)
Assume that \bfG is a C^2 vector field that is both a gradient and a curl.
Show that if f is a function such that \nabla f = \bfG, then f is harmonic (that is, \nabla^2 f = 0, where we recall that \nabla^2 f = \sum_{i=1}^3 \frac{\partial^2 f}{\partial x_i^2}.)
Conclude that each component G_i of \bfG is also harmonic.