4.5 Further aspects of integration

$\newcommand{\R}{\mathbb R }$ $\newcommand{\N}{\mathbb N }$ $\newcommand{\Z}{\mathbb Z }$ $\newcommand{\bfa}{\mathbf a}$ $\newcommand{\bfb}{\mathbf b}$ $\newcommand{\bfc}{\mathbf c}$ $\newcommand{\bfe}{\mathbf e}$ $\newcommand{\bft}{\mathbf t}$ $\newcommand{\bff}{\mathbf f}$ $\newcommand{\bfF}{\mathbf F}$ $\newcommand{\bfk}{\mathbf k}$ $\newcommand{\bfg}{\mathbf g}$ $\newcommand{\bfG}{\mathbf G}$ $\newcommand{\bfh}{\mathbf h}$ $\newcommand{\bfu}{\mathbf u}$ $\newcommand{\bfv}{\mathbf v}$ $\newcommand{\bfx}{\mathbf x}$ $\newcommand{\bfp}{\mathbf p}$ $\newcommand{\bfy}{\mathbf y}$ $\newcommand{\ep}{\varepsilon}$


  1. Exchanging differentiation and integration
  2. Improper integrals
  3. Problems

Exchanging differentiation and integration

Often in mathematics and its applications, one encounters functions $F:T\to \R$ whose definition looks like \begin{equation}\label{formofF} F(\bfx) = \idotsint_S f(\bfx,\bfy) d^n\bfy. \end{equation} Here $S\subset \R^n$ and $T\subset \R^m$ for some $n,m$, and $f:T\times S\to \R$ is a function, all satisfying suitable hypotheses (more on this below).

In particular, functions of the above type arise often when one is trying to solve differential equations connected to areas such as physics, statistics, mathematical finance, and more.

Here we ask the question: under what hypotheses on $f$ is $F$ well-defined, continuous, differentiable, and so on?

We will focus on differentiability, since that is the issue that arises most often in applications.

First, an example:

Example 1. Define $f(x,y) = (x^2+4y)^5$, and let $$ F(x) := \int_0^1 f(x,y) \, dy = \int_0^1 (x^2+4y)^5\, dy. $$ It is straightforward to check (via the substitution $u=x^2+4y$, and paying attention to the limits of integration) that $$ F(x) = \frac 1{24}( (x^2+4)^6 - x^{12}),\qquad\mbox{ and thus } F'(x) = \frac 12( (x^2+4)^5x - x^{11}). $$ On the other hand, $$ \int_0^1 \frac {\partial f}{\partial x}(x,y) \, dy = \int_0^1 5(x^2+4y)^4 2x\, dy = \cdots = \frac 12( (x^2+4)^5x - x^{11}). $$ Thus in this example, $$ \frac {\partial }{\partial x} \int_0^1 f (x,y) \, dy = \int_0^1 \frac {\partial f}{\partial x}(x,y) \, dy . $$ Is this a coincidence, or is it more generally true? An answer is provided by the next theorem.
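Before stating the theorem, here is a quick numerical sanity check of Example 1. This is not part of the notes; it is a sketch using a simple midpoint-rule quadrature and a difference quotient, both my own choices.

```python
import math

def f(x, y):
    # the integrand of Example 1
    return (x**2 + 4*y)**5

def dfdx(x, y):
    # its partial derivative with respect to x
    return 5 * (x**2 + 4*y)**4 * 2 * x

def midpoint(g, a, b, n=20000):
    # basic midpoint-rule quadrature on [a, b]
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

def F(x):
    return midpoint(lambda y: f(x, y), 0.0, 1.0)

x, h = 1.0, 1e-5
lhs = (F(x + h) - F(x - h)) / (2 * h)           # difference quotient of F
rhs = midpoint(lambda y: dfdx(x, y), 0.0, 1.0)  # integral of df/dx
exact = ((x**2 + 4)**5 * x - x**11) / 2         # closed form from Example 1
print(lhs, rhs, exact)  # all three are approximately 1562
```

Both sides of the exchange agree with the closed-form value $F'(1) = \frac 12(5^5 - 1) = 1562$.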

Theorem 1. Assume that $S\subset \R^n$ is compact and measurable, and that $T\subset \R^m$ is open. Assume that $f$ is a function $T\times S\to \R$, and define $F:T\to \R$ as in \eqref{formofF}.
$\quad$ a. If $f$ is continuous in $T\times S$, then $F$ is continuous in $T$.
$\quad$ b. If $ \frac{\partial f}{\partial x_j}$ is continuous in $T\times S$ for every $j \in \{1,\ldots, m\}$, then $F$ is $C^1$ in $T$ and \begin{equation}\label{dis0} \frac{\partial}{\partial x_j} F(\bfx) = \idotsint_S \frac{\partial}{\partial x_j} f(\bfx, \bfy) d^n\bfy\qquad\mbox{ for every }j. \end{equation}

Remark 1. Another way of writing conclusion (b) of Theorem 1 is: $$ \frac{\partial}{\partial x_j} \idotsint_S f(\bfx, \bfy) d^n\bfy = \idotsint_S \frac{\partial}{\partial x_j} f(\bfx, \bfy) d^n\bfy. $$ So the issue is: when can one exchange the operations of integration and differentiation, or as is sometimes said, when can one differentiate under the integral sign?
Note also, if we write the partial derivatives in the above formula as limits of difference quotients, we can see that the conclusion of the theorem becomes $$ \lim_{h\to 0}\idotsint_S \frac{f(\bfx+h \bfe_j, \bfy) - f(\bfx, \bfy)}h d^n\bfy =\idotsint_S \lim_{h\to 0} \frac{f(\bfx+h \bfe_j, \bfy) - f(\bfx, \bfy)}h d^n\bfy , $$ where $\bfe_j$ is the standard unit vector in the $j$th coordinate direction.
From this we see that the underlying issue is the exchange of limits and integration. The heart of the proof addresses this point.

Proof.

We will omit the proof of part (a). Interested students can do it as an exercise. It involves the same kinds of arguments (connected to uniform continuity) that appear toward the end of the proof of part (b).

Turning to part (b), note that part (a) and the continuity of $\partial f/\partial x_j$ imply that the right-hand side of \eqref{dis0} is a continuous function of $\bfx\in T$. So if we can prove that the partial derivatives of $F$ exist and are given by formula \eqref{dis0}, it will follow that all partial derivatives are continuous, i.e. that $F$ is $C^1$.

We will prove formula \eqref{dis0} assuming that $S\subset \R^2$. This is purely to simplify the notation, allowing us to write $\iint_S$ instead of the awful-looking $\idotsint_S$. The idea of the proof is identical in all dimensions, including of course $n=1$.

First, note that \begin{align} \frac{\partial}{\partial x_j} F(\bfx) &= \lim_{h\to 0} \frac 1h [F(\bfx+h\bfe_j) - F(\bfx)] \nonumber \\ &= \lim_{h\to 0} \frac 1h\left[ \iint_S f(\bfx+h\bfe_j, \bfy)d^2\bfy - \iint_S f(\bfx, \bfy)d^2\bfy \right] \nonumber \\ &= \lim_{h\to 0} \iint_S \frac 1h \left[f(\bfx+h\bfe_j, \bfy) - f(\bfx, \bfy)\right] d^2\bfy \label{dis1} \end{align} (if the limit exists, which we will prove). We now define $$ R_1(\bfx,\bfy,h) := \frac 1h \left[f(\bfx+h\bfe_j, \bfy) - f(\bfx, \bfy)\right] - \frac{\partial f}{\partial x_j}(\bfx, \bfy). $$ Substituting this into \eqref{dis1}, we find that $$ \frac \partial {\partial x_j} F(\bfx) = \iint_S \frac{\partial f}{\partial x_j}(\bfx, \bfy) d^2\bfy + \lim_{h\to 0}\iint_S R_1(\bfx, \bfy, h)d^2\bfy. $$ So we only need to prove that \begin{equation}\label{ontp} \lim_{h\to 0}\iint_S R_1(\bfx, \bfy, h)d^2\bfy = 0. \end{equation}

The key point is that for any $\ep>0$, there exists $\delta>0$, possibly depending on $\bfx$ and $j$, but not on $\bfy$ or $h$, such that \begin{equation}\label{R1est} |R_1(\bfx,\bfy,h)| < \ep \mbox{ whenever }|h|<\delta. \end{equation} We will prove this below. First we show how to use it to complete the proof of the theorem. Indeed, it easily follows from \eqref{R1est} that if $|h|<\delta$, then $$ \left| \iint_S R_1(\bfx, \bfy, h) d^2\bfy\right| \le \iint_S \left| R_1(\bfx, \bfy, h)\right|d^2\bfy \le \iint_S \ep \, d^2\bfy \le \ep \mbox{Area}(S). $$ Since $\ep$ is arbitrary, this proves \eqref{ontp}.

To establish \eqref{R1est} and complete the proof of the theorem, note that by the Mean Value Theorem, for every $(\bfx,\bfy, h)$ there exists $\theta\in (0,1)$ such that $$ R_1(\bfx,\bfy,h) = \frac{\partial f}{\partial x_j}(\bfx+\theta h\bfe_j, \bfy)-\frac{\partial f}{\partial x_j}(\bfx, \bfy). $$ Now fix any $r>0$ such that $\{ \bfx'\in \R^m : |\bfx - \bfx'|\le r\} \subset T$. This is possible because $T$ is open. Having fixed $r$, define $$ K_r := \{ (\bfx', \bfy) : |\bfx' - \bfx|\le r, \bfy\in S \} \subset T\times S. $$ Then the restriction of $\frac{\partial f}{\partial x_j}$ to $K_r$ is a continuous function on a compact set, and hence uniformly continuous. It follows that for any $\ep>0$ there exists $\delta_1>0$ such that $$ | \frac{\partial f}{\partial x_j}(\bfx',\bfy')-\frac{\partial f}{\partial x_j}(\bfx, \bfy)| < \ep \mbox{ whenever }(\bfx', \bfy')\in K_r \mbox{ and }|(\bfx', \bfy') - (\bfx, \bfy)|<\delta_1. $$ In particular, choosing $\bfy' = \bfy$ and $\bfx' = \bfx+\theta h \bfe_j$, we conclude that \eqref{R1est} is satisfied whenever $|h| < \min\{\delta_1, r\}$. $\quad\Box$

Example 2.

Sometimes one can differentiate under the integral sign in a way that allows one to compute by hand integrals that would otherwise be impossible to evaluate.

This trick was of more interest in the pre-computer days, when evaluating integrals was sometimes a matter of genuine importance. Now it is more of a curiosity. But here is an example:

Compute $F(x) := \int_0^1 \frac{y^x-1}{\ln y}\,dy $ for $x>0$.

Solution. First we remember $$ \frac{\partial}{\partial x}y^x = y^x \ln y,$$ as follows by writing $y^x = (e^{\ln y})^x = e^{x\ln y}$. We will use this below.

Next, we remark that
$$ \frac{y^x-1}{\ln y}\to 0\ \ \mbox{ as }y\to 0^+, \quad \mbox{ and } \ \ \frac{y^x-1}{\ln y}\to x \ \ \mbox{ as }y\to 1^-, $$ as one can check using, for example, l'Hôpital's rule for the second limit. It follows that the integrand is integrable.
In addition, $$ \frac{\partial}{\partial x}\frac{y^x-1}{\ln y} = \frac{y^x \ln y}{\ln y} = y^x $$ which is continuous for $(x,y)\in (0,\infty)\times [0,1]$. Thus the hypotheses of the theorem are satisfied.
It follows that for any $x>0$, $$ F'(x) = \int_0^1 y^x dy = \frac 1{x+1}. $$ By integrating, we conclude that there exists some $C\in \R$ such that $$ F(x) = \ln(x+1) + C\qquad\mbox{ for all }x>0. $$ One can also check that $\lim_{x\to 0^+} F(x) = 0$, which forces $C=0$, so we conclude that $$ \int_0^1 \frac{y^x-1}{\ln y}\,dy = \ln(x+1)\quad\mbox{ for all }x>0. $$
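The conclusion of Example 2 is easy to test numerically. This sketch (not part of the notes) uses a midpoint rule, which conveniently never samples the removable singularities at the endpoints; the sample value $x=1$ is my choice.

```python
import math

def integrand(x, y):
    # (y^x - 1)/ln(y); the endpoint singularities at y = 0 and y = 1
    # are removable, and the midpoint rule never samples the endpoints
    return (y**x - 1.0) / math.log(y)

def midpoint(g, a, b, n=100000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

x = 1.0
approx = midpoint(lambda y: integrand(x, y), 0.0, 1.0)
exact = math.log(x + 1.0)
print(approx, exact)  # both approximately ln 2 = 0.6931...
```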

Example 3. Assume that $f$ and $\partial f/\partial x$ are continuous, and that $\phi$ and $\psi$ are differentiable functions of a single variable.

Compute $$ \frac d{dx}F(x),\quad\mbox { for } F(x) := \int_{\phi(x)}^{\psi(x)} f(x,y) \, dy . $$

Solution. The way to understand this is to define a function $$ G(x,s,t):= \int_s^t f(x,y)\, dy. $$ Then we know from Theorem 1 that $$ \frac{\partial G}{\partial x}(x,s,t) = \int_s^t \frac{\partial }{\partial x}f(x,y)\, dy, $$ and we know from the Fundamental Theorem of Calculus that $$ \frac{\partial G}{\partial t}(x,s,t) = f(x,t), \qquad \frac{\partial G}{\partial s}(x,s,t) = -f(x,s). $$ Also, we can see that $$ F(x) = G(x, \phi(x), \psi(x)). $$ So we can compute $\frac {dF}{dx}$ by a straightforward application of the Chain Rule $$ \frac{dF}{dx}(x) = \frac{\partial G}{\partial x} + \frac{\partial G}{\partial s}\frac{d \phi}{dx} + \frac{\partial G}{\partial t}\frac{d \psi}{dx} $$ where all derivatives of $G$ on the right-hand side are evaluated at $(x,\phi(x), \psi(x))$. This leads to $$ \frac{dF}{dx}(x) = \int_{\phi(x)}^{\psi(x)}\frac{\partial f}{\partial x}(x,y)\, dy - f(x, \phi(x))\, \phi'(x) + f(x, \psi(x))\, \psi'(x). $$
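The general formula above can be checked numerically for concrete choices of $f, \phi, \psi$. The choices below are illustrative and not taken from the notes; both sides agree with the hand computation $F'(2) = 416/3$.

```python
import math

# Illustrative choices, not taken from the notes:
# f(x,y) = x*y^2, phi(x) = x, psi(x) = x^2.
def f(x, y): return x * y * y
def dfdx(x, y): return y * y     # partial f / partial x
def phi(x): return x             # phi'(x) = 1
def psi(x): return x * x         # psi'(x) = 2x

def midpoint(g, a, b, n=20000):
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

def F(x):
    return midpoint(lambda y: f(x, y), phi(x), psi(x))

x, h = 2.0, 1e-5
numeric = (F(x + h) - F(x - h)) / (2 * h)   # difference quotient of F
formula = (midpoint(lambda y: dfdx(x, y), phi(x), psi(x))
           - f(x, phi(x)) * 1.0             # - f(x, phi(x)) phi'(x)
           + f(x, psi(x)) * 2.0 * x)        # + f(x, psi(x)) psi'(x)
print(numeric, formula)  # both approximately 416/3 = 138.67
```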

Example 4.

As an application of the above, assume that $g:[0,\infty)\to \R$ is a continuous function, and let $$ F(x) := \int_0^x (x-y)g(y)\, dy. $$ Then by applying the formula derived in Example 3, we find that $$ F'(x) = \cdots = \int_0^x g(y)\, dy \quad\mbox{ and thus }\quad F''(x) = g(x). $$
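Example 4 can also be verified numerically for a concrete $g$. The choice $g = \cos$ below is mine; for it, one can integrate by hand to get $F(x) = 1-\cos x$, so $F'' = \cos = g$ as claimed.

```python
import math

def g(y):
    # an illustrative choice of continuous g, not from the notes
    return math.cos(y)

def midpoint(fun, a, b, n=20000):
    h = (b - a) / n
    return h * sum(fun(a + (i + 0.5) * h) for i in range(n))

def F(x):
    # F(x) = integral from 0 to x of (x - y) g(y) dy
    return midpoint(lambda y: (x - y) * g(y), 0.0, x)

x = 1.3
# with g = cos, integrating by hand gives F(x) = 1 - cos(x)
print(F(x), 1.0 - math.cos(x))
# a second difference quotient of F should recover g(x), i.e. F'' = g
h = 1e-3
F2 = (F(x + h) - 2.0 * F(x) + F(x - h)) / (h * h)
print(F2, g(x))
```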

Improper integrals

According to the theory we have developed in Sections 4.1 and 4.2, integrals of the form $$ \idotsint_S f(\bfx)\, d^n\bfx $$ are never well-defined if either the domain $S$ is unbounded or the function $f$ is unbounded on $S$.

However, such integrals often arise in practice. As in single-variable calculus, we can make sense of them via the theory of improper integrals. (Improper, because such an integral is not equal to $\sup_P s_P f$ or $\inf_P S_Pf$.)

We will consider the cases of unbounded domains and unbounded functions separately.

Unbounded domains

First we state

Definition 1. We will say the improper integral $\idotsint_{\R^n} f(\bfx) d^n\bfx$ is absolutely convergent (or sometimes just the improper integral exists) if there exists $L\in \R$ such that \begin{multline}\label{StoRn1} \forall \ep>0 \exists R>0\mbox{ such that }\forall S\subset \R^n, \\ \mbox{ if }B(R)\subset S, \quad \mbox{ then } \left| \idotsint_S f(\bfx) \, d^n\bfx - L\right| <\ep \end{multline} where we write $B(R)$ as an abbreviation for $B(R,{\bf 0})$. (When we write $\idotsint_S \cdots$, we are tacitly assuming that $S$ is measurable.)

For an absolutely convergent improper integral, we will write $$ \idotsint_{\R^n} f\, d^n\bfx $$ to denote the value of the limit $L$ in \eqref{StoRn1} (generally leaving it to you to determine from the context that this means the absolutely convergent improper integral).

Remark 2. If you want to integrate a function $f$ over an unbounded set $T$ other than $\R^n$ (for example, $T$ might be the first quadrant in $\R^2$), then this is defined as the (improper) integral of $\chi_T f$ over $\R^n$, when it is absolutely convergent.

Example 5. Let $p$ be a positive number, and define $ f:\R^2\to \R $ by $$ f(x,y) = \begin{cases} y^{-p} &\mbox{ if } 1\le x< y\\ -x^{-p} &\mbox{ if } 1\le y <x \\ 0&\mbox{ otherwise}. \end{cases} $$

Then one can check that $ \iint_{B(R)} f(x,y)\, dA = 0 $ for every $R>0$. This can be verified directly and is also pretty clear due to the (anti)-symmetry of $f$. Thus $$ \lim_{R\to \infty} \iint_{B(R)} f(x,y)\, dA = 0 $$ for every $p$. However, one can check (see exercises) that if $p\le 2$, then for any $R>0$ and for any real number $z$, there exists a measurable set $S$ (a rectangle, in fact) such that $B(R)\subset S$ and $\iint_S f(x,y)\, dA = z$. For such values of $p$, it follows that the integral is not absolutely convergent.

For $p>2$, it turns out that for $f$ defined above, the integral $\iint_{\R^2}f(x,y)\,dA$ is absolutely convergent. This is a consequence of the following theorem and its corollary.

Theorem 2. Assume that $f:\R^n\to \R$ is continuous, and that \begin{equation}\label{abs.conv.h} \lim_{R\to \infty} \idotsint_{B(R)} |f(\bfx)| d^n\bfx =: M \in [0,\infty)\mbox{ exists}. \end{equation} Then $$ \mbox{ the improper integral }\idotsint_{\R^n} f(\bfx)\, d^n\bfx \mbox{ is absolutely convergent}. $$

Remark 3. We could also write \eqref{abs.conv.h} in the equivalent form \begin{equation}\label{ach2} \sup_{R>0} \idotsint_{B(R)} |f(\bfx)| d^n\bfx < \infty \end{equation} which just means that the set $\{ \idotsint_{B(R)} |f(\bfx)| d^n\bfx : R>0\}$ is bounded. You may find this easier to verify than \eqref{abs.conv.h}.

Explanation: why are \eqref{abs.conv.h} and \eqref{ach2} equivalent?

Define $$ I(R) := \idotsint_{B(R)} |f| \, d^n\bfx. $$ Then $I:(0,\infty)\to [0,\infty)$ is a nondecreasing function, and it is a fact that for such a function $$ \lim_{R\to \infty}I(R) \mbox{ exists} \quad\iff\quad \sup_{R>0}I(R) <\infty \quad\iff\quad \{ I(R) : R> 0\} \mbox{ is bounded.} $$ The second $\iff$ is just the definition of what it means for the supremum to be finite, and the first $\iff$ follows by the Monotone Sequence Theorem, see Theorem 3 in Section 1.3 (or more precisely, it follows by ideas from the proof of the Monotone Sequence Theorem).

Remark 4. The assumption in Theorem 2 that $f$ is continuous is stronger than necessary; all the theorem requires is that $f$ is integrable on $S$ for every measurable set $S$. We have just assumed continuity to keep things simple. The same applies to Theorem 3 below.

Proof of Theorem 2 (optional!)

First, we'll prove the theorem under the assumption that $f(\bfx)\ge 0$ for all $\bfx$. Under this assumption, define $$ I(R) := \idotsint_{B(R)} f \, d^n\bfx. $$ Our assumption is that $\lim_{R\to \infty}I(R) = M$. We will show that $$ \mbox{ the improper integral } \idotsint_{\R^n} f\, d^n\bfx \mbox{ exists and equals } M. $$ To do this, first note that since $f\ge 0$, it is clear (from basic properties of integration) that $I(R)$ is a nondecreasing function of $R$, and hence that $$ I(R) \le I(R_1)\le M\qquad\mbox{ whenever }R<R_1. $$

Now given $\ep>0$, fix $R$ such that $0< M - I(R) <\ep$. Let $S$ be any measurable set such that $B(R)\subset S$. Since $S$ is measurable, it must be bounded, so there exists $R_1>R$ such that $S\subset B(R_1)$. Then $f \chi_{B(R)} \le f\chi_S \le f \chi_{B(R_1)}$ everywhere, so $$ M-\ep < I(R) = \idotsint_{B(R)} fd^n\bfx\ \le \idotsint_{S} fd^n\bfx\ \le \idotsint_{B(R_1)} f d^n\bfx\ =I(R_1) \le M. $$ This implies that $|\idotsint_{S} f d^n\bfx - M|<\ep$. Since $S$ was arbitrary, this proves the theorem when $f\ge 0$ everywhere.

In the general case, we write $f$ as a linear combination of two nonnegative functions, as follows: $$ f(\bfx) = f^+(\bfx) - f^-(\bfx), \quad\mbox{ where }\begin{cases} &f^+(\bfx) = \frac 12 (|f(\bfx)|+f(\bfx) )\\ &f^-(\bfx) = \frac 12 (|f(\bfx)| - f(\bfx)) \end{cases} $$ (This idea is used very often in certain branches of mathematics.) Then $f^+$ is a nonnegative function, and $ |f^+(\bfx)| \le |f(\bfx)| $ for all $\bfx$; in particular, by \eqref{abs.conv.h}, the nondecreasing function $R\mapsto \idotsint_{B(R)} f^+\, d^n\bfx$ is bounded above by $M$, so its limit as $R\to\infty$ exists. Since we have already proved the theorem for nonnegative functions, it follows that the improper integral of $f^+$ on $\R^n$ exists. Exactly the same applies to $f^-$. Let $$ L^+ := \idotsint_{\R^n}f^+\, d^n\bfx, \qquad L^- := \idotsint_{\R^n}f^-\, d^n\bfx. $$ Given $\ep>0$, we can choose $R$ so large that $$ \left| L^+ - \idotsint_{S}f^+\, d^n\bfx\right|<\frac \ep 2, \qquad \left| L^- - \idotsint_{S}f^-\, d^n\bfx\right|<\frac \ep 2 $$ whenever $B(R)\subset S$. Let $L = L^+-L^-$. Then it follows by basic properties of integration and the triangle inequality that \begin{align} \left| L - \idotsint_{S}f\, d^n\bfx\right| &= \left| L^+ - L^- - \idotsint_{S}(f^+-f^-)\, d^n\bfx \right|\nonumber \\ &\le \left| L^+ - \idotsint_{S}f^+\, d^n\bfx\right| + \left| L^- - \idotsint_{S}f^-\, d^n\bfx\right| \nonumber \\ &< \frac \ep 2+\frac \ep 2 = \ep. \end{align} This proves that the improper integral of $f$ over $\R^n$ is absolutely convergent. $\quad \Box$
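The positive/negative part decomposition used above is easy to experiment with. A minimal check (Python; the sample values are arbitrary):

```python
def fplus(v):
    # positive part: (|v| + v)/2
    return (abs(v) + v) / 2

def fminus(v):
    # negative part: (|v| - v)/2
    return (abs(v) - v) / 2

for v in [-3.5, 0.0, 2.25]:
    # f = f+ - f-,  |f| = f+ + f-,  and both parts are nonnegative
    assert fplus(v) - fminus(v) == v
    assert fplus(v) + fminus(v) == abs(v)
    assert fplus(v) >= 0 and fminus(v) >= 0
print("decomposition identities hold")
```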

A potential drawback of Theorem 2 is that you may find it difficult to check whether a function $f$ satisfies the hypotheses. The next result, a corollary of Theorem 2, gives a criterion for absolute convergence of improper integrals that is very straightforward to check.

Corollary 1. If $f:\R^n\to \R$ is continuous and $$ \mbox{ there exist }C>0 \mbox{ and }p>n\mbox{ such that } |f(\bfx)|\le C |\bfx|^{-p} \mbox{ for all }\bfx\in \R^n, $$ then $\idotsint_{\R^n}f \, d^n\bfx$ is absolutely convergent.

The proof uses the fact that if $p>n$, then \begin{equation}\label{fact} \sup_{R>0} \idotsint_{A(1,R)} |\bfx|^{-p}d^n\bfx <\infty, \mbox{ where }A(1,R):= \{\bfx\in \R^n : 1\le |\bfx| \le R\}. \end{equation} This was covered in a tutorial. It can also be verified directly, when $n=2$ or $3$, by using polar or spherical coordinates.

Proof.

Since $f$ is continuous, we know that $\idotsint_{B(1)}|f(\bfx)|d^n\bfx$ exists and is finite. For any $R>1$, basic properties of integration and the hypothesis $|f(\bfx)|\le C|\bfx|^{-p}$ imply that \begin{align} \idotsint_{B(R)}|f(\bfx)|d^n\bfx & = \idotsint_{B(1)}|f(\bfx)|d^n\bfx + \idotsint_{A(1,R)} |f(\bfx)| d^n\bfx \nonumber \\ &\le \idotsint_{B(1)}|f(\bfx)|d^n\bfx + C\idotsint_{A(1,R)}|\bfx|^{-p}d^n\bfx. \nonumber \end{align} Then the fact \eqref{fact} recalled above implies that $$ \sup_{R>0} \idotsint_{B(R)}|f(\bfx)| d^n\bfx <\infty. $$ Hence (by Theorem 2, see also Remark 3) $\idotsint_{\R^n} f\, d^n\bfx$ is absolutely convergent. $\quad\Box$
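The fact \eqref{fact} can be seen numerically when $n=2$, using polar coordinates as suggested above. This is a sketch, not part of the notes; the midpoint quadrature and the sample exponent $p=3$ are my choices.

```python
import math

def annulus_integral(p, R, n=100000):
    # in two dimensions, polar coordinates give
    # integral over A(1,R) of |x|^{-p} dA = 2*pi * integral_1^R r^(1-p) dr
    h = (R - 1.0) / n
    return 2.0 * math.pi * h * sum(
        (1.0 + (i + 0.5) * h) ** (1.0 - p) for i in range(n))

p = 3.0  # p > n = 2, so the supremum over R should be finite
for R in [10.0, 100.0, 1000.0]:
    print(R, annulus_integral(p, R))
# for p = 3 the exact value is 2*pi*(1 - 1/R), which stays below 2*pi
```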

Next, perhaps the most famous example of an improper integral in all of mathematics.

Example 6. Consider the $1$-d improper integral $$ I := \int_{-\infty}^\infty e^{-x^2}\, dx. $$ This improper integral exists, but since the antiderivative of $e^{-x^2}$ is not an elementary function, it appears to be impossible to evaluate. But check this out: $$ I^2 = I \cdot I = \left(\int_{-\infty}^\infty e^{-x^2}\, dx\right)\left(\int_{-\infty}^\infty e^{-y^2}\, dy\right) = \int_{-\infty}^\infty \int_{-\infty}^\infty e^{-x^2} e^{-y^2} \, dx\, dy . $$ Of course $e^{-x^2}e^{-y^2} = e^{-(x^2+y^2)}$. Thus, changing to polar coordinates, we get $$ I^2 = \int_{\R^2} e^{-(x^2+y^2)} dA = \int_0^{2\pi} \int_0^\infty e^{-r^2}r\, dr\, d\theta. $$ By the change of variables $u = r^2, du = 2r\, dr$, we can easily evaluate this integral. (What makes this possible is the factor of $r$ that comes from the Jacobian determinant in the change of variables between Cartesian and polar coordinates.) We conclude that $$ I^2 = \pi, \qquad\mbox{ and thus }I = \sqrt{\pi} = \int_{-\infty}^\infty e^{-x^2}\, dx. $$
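A numerical sanity check of this famous value (a sketch; truncating the line at $L=8$ and using a midpoint rule are my choices):

```python
import math

def gauss_1d(L=8.0, n=200000):
    # midpoint rule on [-L, L]; the discarded tails contribute less than
    # e^{-L^2}, which is far below double precision for L = 8
    h = 2.0 * L / n
    return h * sum(math.exp(-(-L + (i + 0.5) * h) ** 2) for i in range(n))

I = gauss_1d()
print(I, math.sqrt(math.pi))  # both approximately 1.7724538509
```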

This computation is one of the reasons that Gauss, and the Gaussian (that is, the function $\frac 1{\sigma \sqrt{2\pi}}e^{-x^2/2\sigma^2}$, normalized so that its integral is $1$), appeared on old German 10 Deutschmark notes, before the introduction of the Euro.


Note that in the above computation, we did not worry at all about the fact that the integrals were all improper. It is a good exercise to go through the argument carefully and check that everything is justified, using things we know about improper integrals (starting of course from the definition).

Unbounded functions on bounded domains

Now let $S$ be a measurable subset of $\R^n$, fix a point $\bfa\in S^{int}$, and consider a continuous but possibly unbounded function $f:S\setminus\{ \bfa \}\to \R$.

An example to keep in mind is $f(\bfx) = |\bfx - \bfa|^{-p}$ for some $p>0$.

Definition 2. For continuous $f:S\setminus \{ \bfa \}\to \R$, we say the improper integral $\idotsint_{S} f(\bfx) d^n\bfx$ is absolutely convergent (or sometimes just the improper integral exists) if there exists $L\in \R$ such that \begin{multline}\label{limS} \forall \ep>0 \exists r>0\mbox{ such that }\forall U\subset S\mbox{ with } \bfa\in U^{int},\\ U\subset B(r, \bfa) \Rightarrow \left| \idotsint_{S\setminus U} f(\bfx) \, d^n\bfx - L\right| <\ep. \end{multline} (When we write $\idotsint_{S\setminus U} \cdots$, we are tacitly assuming that $U$ is measurable, and hence that $S\setminus U$ is also measurable.)

For an absolutely convergent improper integral, we will write $$ \idotsint_{S} f\, d^n\bfx $$ to denote the value of the limit $L$ in \eqref{limS} (possibly leaving it up to the reader to determine from the context that this means the absolutely convergent improper integral).

The basic theorems in this situation are parallel to the case of a continuous function on an unbounded domain.

Theorem 3. Assume that $f:S\setminus \{{\bfa}\}\to \R$ is continuous, and that $$ \lim_{r\to 0} \idotsint_{S\setminus B(r, \bfa)} |f(\bfx)| d^n\bfx \ \mbox{ exists (and is finite)}. $$ Then $$ \mbox{ the improper integral }\idotsint_{S} f(\bfx)\, d^n\bfx \mbox{ is absolutely convergent}. $$

As with Theorem 2, the hypothesis is equivalent to $$ \sup_{r>0} \idotsint_{S\setminus B(r,\bfa)} |f(\bfx)| d^n\bfx < \infty \quad\mbox{i.e. }\quad \{ \idotsint_{S\setminus B(r,\bfa)} |f(\bfx)| d^n\bfx : r>0\}\mbox{ is bounded.} $$

Corollary 2. Assume that $S$ is a bounded measurable subset of $\R^n$ that contains the origin, and that $f:S\setminus \{{\bf 0}\}\to \R$ is continuous. If
$$ \exists C>0\mbox{ and }p<n\mbox{ such that } |f(\bfx)| \le C|\bfx|^{-p}\quad\mbox{ for all }\bfx\in S\setminus \{{\bf 0}\}, $$ then $$ \mbox{ the improper integral }\idotsint_{S} f(\bfx)\, d^n\bfx \mbox{ is absolutely convergent}. $$

We omit the proofs, which are very similar to the case of continuous functions on unbounded domains.
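For Corollary 2, the borderline exponent can again be seen numerically in two dimensions. This sketch (my own illustrative setup, not part of the notes) integrates $|\bfx|^{-p}$ over $B(1)\setminus B(r_0)$ in polar coordinates and lets $r_0\to 0$.

```python
import math

def punctured_disc_integral(p, r0, n=100000):
    # in two dimensions, polar coordinates give
    # integral over B(1)\B(r0) of |x|^{-p} dA
    #   = 2*pi * integral from r0 to 1 of r^(1-p) dr
    h = (1.0 - r0) / n
    return 2.0 * math.pi * h * sum(
        (r0 + (i + 0.5) * h) ** (1.0 - p) for i in range(n))

p = 1.0  # p < n = 2, so the improper integral should converge
for r0 in [1e-2, 1e-4, 1e-6]:
    print(r0, punctured_disc_integral(p, r0))
# the values approach the exact answer 2*pi/(2 - p) = 2*pi as r0 -> 0
```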

Problems

Basic skills

  1. Determine whether the following improper integrals are absolutely convergent. Justify your answer by appealing to a theorem and checking that its hypotheses are satisfied.

  2. Let $\rho:\R^2\to \R$ be a continuous function, and assume that there exist $C,p>0$ such that $|\rho(\bfx)| \le C|\bfx|^{-p}$ for all $\bfx\in \R^2$. For which values of $p$ is the integral $$ \iint_{\R^2} \frac1{2\pi} \ln|\bfx - \bfa|\rho(\bfx) \, dA$$ absolutely convergent? Justify your answer by appealing to an appropriate theorem or theorems.
    Remark. On second thoughts, this question is too hard to be an example of a basic skill.

  3. Be able to exchange differentiation and integration, when justified, to carry out computations. For example:

Other questions

  1. Carefully go through the computation in Example 6 of $\int_\R e^{-x^2}\, dx$ and justify all the steps in the argument. (See also the next problem, which asks more or less the same thing.)

  2. Prove that if $f:\R^2\to \R$ is continuous and $\iint_{\R^2} f(\bfx) d^2\bfx$ is absolutely convergent, then $$ \iint_{\R^2} f(\bfx) d^2\bfx = \int_0^{2\pi}\int_0^\infty f(r\cos\theta, r\sin\theta)\, r\,dr\,d\theta. $$ The reason this is not an obvious consequence of the Change of Variables Theorem is that we only know that the theorem holds for (proper) integrals on (bounded) measurable sets.

  3. Prove conclusion (a) of Theorem 1.

  4. Prove Theorem 3 or Corollary 2 (by adapting the proof of Theorem 2 or Corollary 1.)

  5. (not recommended -- to be honest, it is only here in the exercises because our typists did not feel like typing out the details in the example.) For the function $f$ defined in Example 5, check that if $p\le 2$, then for any $R>0$ and for any real number $z$, there exists a rectangle $S$ such that $B(R)\subset S$ and $\iint_S f(x,y)\, dA = z$.
    Hint. To do this, fix $z\in \R$ and $R>0$. Assume for concreteness that $z>0$, and compute the integral $\iint_S f(x,y)\, dA$ for $S := [-R,R]\times [-R_1, R_1]$, for $R_1\ge R$. First check that $$ \iint_{[-R,R]\times [-R,R]}f(x,y)\, dA = 0 $$ and hence that $$ \iint_S f(x,y) \, dA = \int_1^R\int_R^{R_1} f(x,y) \, dy\, dx = (R-1)\int_R^{R_1}y^{-p}\, dy=: g(R_1). $$ To show that there exists $R_1$ such that $g(R_1)=z$, it suffices (by the Intermediate Value Theorem, since $g$ is clearly continuous and $g(R)=0$) to show that $g(R_1)\to \infty$ as $R_1\to \infty$.
    How would you do things differently for $z<0$?
