Dror Bar-Natan: Classes: 2003-04: Math 157 - Analysis I

Solution of the Final Exam

This document in PDF: Solution.pdf

Problem 1. We say that a set $ A$ of real numbers is dense if for any open interval $ I$, the intersection $ A\cap I$ is non-empty.

  1. Give an example of a dense set $ A$ whose complement $ A^c=\{x\in{\mathbb{R}}: x\not\in A\}$ is also dense.
  2. Give an example of a non-dense set $ B$ whose complement $ B^c=\{x\in{\mathbb{R}}: x\not\in B\}$ is also not dense.
  3. Prove that if $ f\colon{\mathbb{R}}\to{\mathbb{R}}$ is an increasing function ($ f(x)<f(y)$ for $ x<y$) and if the range $ \{f(x): x\in{\mathbb{R}}\}$ of $ f$ is dense, then $ f$ is continuous.

Solution.

  1. Take for example $ A={\mathbb{Q}}$, the set of rational numbers. Then $ A^c$ is the set of irrational numbers. We've seen in class that between any two (different) numbers (i.e., within any open interval) there is a rational number and there is an irrational number. Hence both $ A$ and $ A^c$ are dense.
  2. Take for example $ B=[0,\infty)$, the set of non-negative numbers. Then $ B^c=(-\infty,0)$ is the set of negative numbers. The set $ B$ is not dense because, for example, its intersection with the interval $ (-2,-1)$ is empty. The set $ B^c$ is not dense because, for example, its intersection with the interval $ (1,2)$ is empty.
  3. We have to show that for every $ a\in{\mathbb{R}}$ and for every $ \epsilon>0$ there is a $ \delta>0$ so that $ \vert x-a\vert<\delta$ implies $ \vert f(x)-f(a)\vert<\epsilon$. So let $ \epsilon>0$ be given. By the density of $ A:=\{f(x): x\in{\mathbb{R}}\}$ we know that we can find an element of $ A$ in the interval $ (f(a)-\epsilon,f(a))$ and another element of $ A$ in the interval $ (f(a),f(a)+\epsilon)$. That is, we can find $ x_1$ and $ x_2$ so that $ f(a)-\epsilon<f(x_1)<f(a)$ and $ f(a)<f(x_2)<f(a)+\epsilon$. It follows from the monotonicity of $ f$ that $ x_1<a$ and that $ a<x_2$. Now set $ \delta=\min(a-x_1,x_2-a)$ (this is a positive number because $ x_1<a$ and $ a<x_2$). Finally if $ \vert x-a\vert<\delta$ then $ x$ is in the interval $ (a-\delta,a+\delta)\subset(a-(a-x_1),a+(x_2-a))=(x_1,x_2)$. By the monotonicity of $ f$ it follows that $ f(x)$ is in the interval $ (f(x_1),f(x_2))\subset(f(a)-\epsilon,f(a)+\epsilon)$, and so $ \vert f(x)-f(a)\vert<\epsilon$, as required.
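
To make the $ \delta$ of part 3 concrete, here is a small numeric sketch (ours, not part of the exam solution) using the sample function $ f(x)=x^3$, which is strictly increasing and whose range is all of $ {\mathbb{R}}$, hence dense. Inverting $ f$ to pick $ x_1$ and $ x_2$ is only a convenience here; the proof never needs it.

    # Numeric illustration of the delta construction from part 3, for the
    # sample function f(x) = x**3 (strictly increasing, range all of R).
    f = lambda x: x**3

    a, eps = 1.3, 0.05

    # Pick x1, x2 with f(x1) in (f(a)-eps, f(a)) and f(x2) in (f(a), f(a)+eps);
    # here we simply invert f, which the proof itself never needs to do.
    x1 = (f(a) - eps / 2) ** (1 / 3)
    x2 = (f(a) + eps / 2) ** (1 / 3)
    delta = min(a - x1, x2 - a)        # positive, since x1 < a < x2

    # Every x with |x - a| < delta should satisfy |f(x) - f(a)| < eps.
    for k in range(1, 1000):
        x = a - delta + 2 * delta * k / 1000   # sample points in (a-delta, a+delta)
        assert abs(f(x) - f(a)) < eps
    print("delta =", delta, "works for eps =", eps)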

Problem 2. Sketch the graph of the function $ y=f(x)=xe^{-x^2/2}$. Make sure that your graph clearly indicates the following:

Solution. Our function is defined for all $ x$. As $ x$ goes to $ \pm\infty$ exponentials dominate polynomials, and so certainly $ e^{x^2/2}$ grows much faster than $ x$; hence $ \lim_{x\to\pm\infty}f(x)=0$. Solving the equation $ xe^{-x^2/2}=0$ we see that the only intersection of the graph of $ f$ with the axes is at $ (0,0)$. We compute $ f'(x) = x'e^{-x^2/2}+x\left(e^{-x^2/2}\right)' = e^{-x^2/2}-x^2e^{-x^2/2} = (1-x^2)e^{-x^2/2}$ and $ f''(x) = (1-x^2)'e^{-x^2/2}+(1-x^2)\left(e^{-x^2/2}\right)' = -2xe^{-x^2/2}-x(1-x^2)e^{-x^2/2} = x(x^2-3)e^{-x^2/2}$. Solving $ f'(x)=0$ we see that the only critical points are where $ 1-x^2=0$, that is, at $ x=\pm 1$. As $ f''(1)=-2e^{-1/2}<0$, the point $ (1,f(1))=(1,e^{-1/2})$ is a local max. As $ f''(-1)=2e^{-1/2}>0$, the point $ (-1,f(-1))=(-1,-e^{-1/2})$ is a local min. As there are no other critical points and $ f$ tends to $ 0$ at both ends of its domain of definition (as determined above), $ (1,e^{-1/2})$ is in fact a global max and $ (-1,-e^{-1/2})$ is in fact a global min. Thus overall the graph is:

[Plot of $ f(x)$]
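
As a sanity check (ours, not part of the exam solution), here is a short sympy sketch verifying the two derivatives, the critical points and the limits computed above.

    # Symbolic sanity check (sympy) of the curve-sketching computations above.
    import sympy as sp

    x = sp.symbols('x')
    f = x * sp.exp(-x**2 / 2)

    # f' and f'' as computed in the solution
    assert sp.simplify(sp.diff(f, x) - (1 - x**2) * sp.exp(-x**2 / 2)) == 0
    assert sp.simplify(sp.diff(f, x, 2) - x * (x**2 - 3) * sp.exp(-x**2 / 2)) == 0

    print(sp.solve(sp.diff(f, x), x))                      # critical points: [-1, 1]
    print(sp.limit(f, x, sp.oo), sp.limit(f, x, -sp.oo))   # limits at +-infinity: 0 0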

Problem and Solution 3. Compute the following derivative and the following integrals:

  1. Using the fundamental theorem of calculus in the form $ \frac{d}{du}\int_0^u f(t)dt=f(u)$, the chain rule with $ u=\sin x$, and the fact that $ \arcsin\sin x=x$ (valid for $ \vert x\vert\le\pi/2$), we get

    $\displaystyle \frac{d}{dx}\left(\int_0^{\sin x}\sqrt{\arcsin t} dt\right)
= \sqrt{\arcsin\sin x}\cdot(\sin x)'
= \sqrt{x}\cos x.
$

  2. We make the substitution $ u=\sqrt{x}$ (and thus $ x=u^2$ and $ dx=2udu$) to compute

    $\displaystyle \int\frac{e^{\sqrt{x}}}{\sqrt{x}}dx
= \int\frac{e^u}{u}2udu
= 2\int e^udu
= 2e^u+C
= 2e^{\sqrt{x}}+C.
$

  3. Integrating by parts twice we get

    $\displaystyle \int x^2e^xdx
= x^2e^x-\int 2xe^x dx
= x^2e^x-2xe^x+\int 2e^x dx
= x^2e^x-2xe^x+2e^x+C.
$

  4. We make the substitution $ u=2^x$ (and thus $ x=\log_2u$ and $ dx=\frac{du}{u\log 2}$) to compute

    $\displaystyle \int\frac{4^xdx}{2^x+1}
= \int\frac{u^2\frac{du}{u\log 2}}{u+1}
= \frac{1}{\log 2}\int\frac{udu}{u+1}
= \frac{1}{\log 2}\int\left(1-\frac{1}{u+1}\right)du
$

    $\displaystyle = \frac{1}{\log 2}(u-\log\vert u+1\vert)+C
= \frac{1}{\log 2}(2^x-\log\vert 2^x+1\vert)+C.
$

  5. We use the factorization $ x^2-3x+2=(x-1)(x-2)$ to get

    $\displaystyle \int\frac{dx}{x^2-3x+2}
= \int\frac{dx}{(x-1)(x-2)}
= \int\left(\frac{dx}{x-2}-\frac{dx}{x-1}\right)
$

    $\displaystyle = \log\vert x-2\vert-\log\vert x-1\vert+C
= \log\left\vert\frac{x-2}{x-1}\right\vert+C.
$

Problem 4. In solving this problem you are not allowed to use any properties of the exponential function $ e^x$.

  1. Two differentiable functions, $ e_1(x)$ and $ e_2(x)$, defined over the entire real line $ {\mathbb{R}}$, are known to satisfy $ e_1'(x)=e_1(x)$, $ e_2'(x)=e_2(x)$, $ e_1(x)>0$ and $ e_2(x)>0$ for all $ x\in{\mathbb{R}}$ and also $ e_1(0)=e_2(0)$. Prove that $ e_1$ and $ e_2$ are the same. That is, prove that $ e_1(x)=e_2(x)$ for all $ x\in{\mathbb{R}}$.
  2. A differentiable function $ e(x)$ defined over the entire real line $ {\mathbb{R}}$ is known to satisfy $ e'(x)=e(x)$ and $ e(x)>0$ for all $ x\in{\mathbb{R}}$ and also $ e(0)=1$. Prove that $ e(x+y)=e(x)e(y)$ for all $ x,y\in{\mathbb{R}}$.

Solution.

  1. Set $ f(x):=e_1(x)/e_2(x)$ (this is well defined because $ e_2(x)$ is never 0) and compute

    $\displaystyle f'
=\left(\frac{e_1}{e_2}\right)'
=\frac{e_1'e_2-e_1e_2'}{e_2^2}
=\frac{e_1e_2-e_1e_2}{e_2^2}
=0.
$

    So $ f$ is a constant. But $ f(0)=e_1(0)/e_2(0)=1$, so that constant is 1 and $ e_1(x)/e_2(x)=1$ for all $ x$. This means that $ e_1=e_2$.
  2. Fix $ y$ and set $ e_1(x)=e(x+y)$ and $ e_2(x)=e(x)e(y)$. Then $ (e_1(x))'=(e(x+y))'=e(x+y)=e_1(x)$ and $ (e_2(x))'=(e(x)e(y))'=(e(x))'e(y)=e(x)e(y)=e_2(x)$ and $ e_1(0)=e(0+y)=e(y)=1\cdot e(y)=e(0)e(y)=e_2(0)$. All the other conditions of the first part of this question are even easier to verify, and so the conclusion of that part holds. Namely, $ e_1=e_2$, which means $ e(x+y)=e(x)e(y)$.
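
As a numerical illustration of part 2 (ours; certainly not a proof and not part of the exam solution), one can integrate $ y'=y$ with $ y(0)=1$ by a standard Runge-Kutta scheme, never calling an exponential function, and watch the multiplicative property appear. The function name e and the step count below are our own choices.

    # Approximate the solution of y' = y, y(0) = 1 by classical 4th-order
    # Runge-Kutta, without using any exponential, and test e(x+y) = e(x)*e(y).
    def e(target, steps=10000):
        y, h = 1.0, target / steps
        for _ in range(steps):
            k1 = y                      # slope at the start of the step
            k2 = y + h * k1 / 2         # slope at the midpoint (first estimate)
            k3 = y + h * k2 / 2         # slope at the midpoint (second estimate)
            k4 = y + h * k3             # slope at the end of the step
            y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        return y

    a, b = 0.3, 0.5
    print(e(a + b), e(a) * e(b))        # the two numbers agree to many digits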

Problem 5. In solving this problem you are not allowed to use any properties of the trigonometric functions.

  1. A twice-differentiable function $ c(x)$ defined over the entire real line $ {\mathbb{R}}$ is known to satisfy $ c''(x)=-c(x)$ for all $ x\in{\mathbb{R}}$ and also $ c(0)=c'(0)=0$. Write out the degree $ n$ Taylor polynomial $ P_{n,a,c}(x)$ of $ c$ at $ a=0$.
  2. Write a formula for the remainder term $ R_{n,0,c}(x):=c(x)-P_{n,0,c}(x)$. (To keep the notation simple, you are allowed to assume that $ n$ is even or even that $ n$ is divisible by 4).
  3. Prove that $ c$ is the zero function: $ c(x)=0$ for all $ x\in{\mathbb{R}}$.

Solution.

  1. From $ c''(x)=-c(x)$ it is clear that $ c^{(2k)}=(-1)^kc$ and that $ c^{(2k+1)}=(-1)^kc'$. So $ c^{(2k)}(0)=(-1)^kc(0)=0$ and $ c^{(2k+1)}(0)=(-1)^kc'(0)=0$ and hence all the coefficients of $ P_{n,a,c}(x)$ are 0. In other words, $ P_{n,a,c}=0$.
  2. If $ n$ is divisible by $ 4$ then $ c^{(n+1)}=c'$ and so the remainder formula says that for any $ x\neq 0$ there is a $ t$ between 0 and $ x$ for which

    $\displaystyle R_{n,0,c}(x)
= \frac{c^{(n+1)}(t)}{(n+1)!}x^{n+1}
= \frac{c'(t)}{(n+1)!}x^{n+1}.
$

  3. Factorials grow faster than exponentials, so in the remainder formula the denominator $ (n+1)!$ grows faster than the term $ x^{n+1}$, while the numerator $ c'(t)$ is bounded for $ t$ in the closed interval between $ 0$ and $ x$ (by the theorem that a continuous function on a closed interval is bounded). So the remainder goes to 0 as $ n$ goes to $ \infty$, and hence $ \lim_{n\to\infty}P_{n,0,c}(x)=c(x)$. But $ P_{n,0,c}(x)=0$ for all $ n$, so necessarily $ c(x)=0$.
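
Here is a tiny numeric illustration (ours, not part of the solution) of the estimate used in part 3: for a fixed $ x$, the quantity $ \vert x\vert^{n+1}/(n+1)!$ may grow at first, but it eventually decays to 0.

    # For fixed x, |x|**(n+1) / (n+1)! tends to 0 as n grows.
    from math import factorial

    x = 10.0                            # even a fairly large x works
    for n in (4, 8, 16, 32, 64):
        print(n, abs(x)**(n + 1) / factorial(n + 1))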

Remark 1. Two alternative forms of the remainder formula are

$\displaystyle \frac{c^{(n+1)}(t)}{n!}x(x-t)^n = \frac{c'(t)}{n!}x(x-t)^n
\qquad\text{and}\qquad
\int_0^x\frac{c^{(n+1)}(t)}{n!}(x-t)^ndt = \int_0^x\frac{c'(t)}{n!}(x-t)^ndt.
$

Either one of those could equally well be used to solve part 3 of the problem.

Remark 2. There is an alternative approach to the whole problem; start with part 3 and go backwards. To do part 3, consider the function $ f:=c^2+(c')^2$. We have $ f'=2cc'+2c'c''=2cc'-2c'c=0$, so $ f$ is a constant function. But $ f(0)=c(0)^2+c'(0)^2=0^2+0^2=0$, so $ f$ must be the 0 function. But $ f$ is a sum of squares, and the only way a sum of squares can be 0 is if each summand is 0. So $ c^2=0$ and hence $ c=0$ as required in part 3. But if $ c$ is the 0 function then its Taylor polynomials are all 0 and the remainder terms are also all 0, solving parts 1 and 2 as well. This is not the solution I had in mind when I wrote the problem, but people who solved the problem this way got full credit.
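
The key step of Remark 2 can also be checked symbolically; the following sympy sketch (ours) substitutes $ c''=-c$ into $ (c^2+(c')^2)'$ and indeed gets 0.

    # If c'' = -c then (c**2 + (c')**2)' vanishes identically.
    import sympy as sp

    x = sp.symbols('x')
    c = sp.Function('c')(x)

    fprime = sp.diff(c**2 + sp.diff(c, x)**2, x)            # 2*c*c' + 2*c'*c''
    print(sp.simplify(fprime.subs(sp.diff(c, x, 2), -c)))   # prints 0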

Problem 6. In solving this problem you are not allowed to use the irrationality of $ \pi$, but you are allowed, indeed advised, to borrow a few lines from the proof of the irrationality of $ \pi$.

Is there a non-zero polynomial $ p(x)$ defined on the interval $ [0,\pi]$ and with values in the interval $ [0,\frac12)$ so that it and all of its derivatives are integers at both the point 0 and the point $ \pi$? In either case, prove your answer in detail.

Solution. There is no such polynomial. Had there been one, we would have

$\displaystyle 0<\int_0^\pi p(x)\sin x  dx<\int_0^\pi\frac12\sin x dx=1, $

but also, by repeated integration by parts (an even number of times, for simplicity),

$\displaystyle \int_0^\pi p(x)\sin x  dx
=\left.-p(x)\cos x\right\vert _0^\pi + \int_0^\pi p'(x)\cos x dx
$

$\displaystyle =\left.-p(x)\cos x + p'(x)\sin x\right\vert _0^\pi - \int_0^\pi p''(x)\sin x  dx
=\ldots
$

$\displaystyle = \left.\text{(terms involving $\pm 1$, $p^{(k)}(x)$, $\sin x$ and $\cos x$)}\right\vert _0^\pi
\pm \int_0^\pi p^{(2n)}(x)\sin x  dx.
$

For any $ n$ the first term in this formula involves only integers (as $ p^{(k)}(0)$, $ p^{(k)}(\pi)$, $ \sin 0$, $ \sin\pi$, $ \cos 0$ and $ \cos\pi$ are all integers), and if $ 2n$ is larger than the degree of $ p$, the second term is 0. So $ \int_0^\pi p(x)\sin x  dx$ is an integer. But by the first formula it is in $ (0,1)$. That can't be.
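
For the curious, the repeated integration by parts can be packaged as $ \int_0^\pi p(x)\sin x  dx=\sum_k(-1)^k\left(p^{(2k)}(0)+p^{(2k)}(\pi)\right)$, which is exactly why integer values of $ p$ and its derivatives at $ 0$ and $ \pi$ would force the integral to be an integer. Here is a sympy sketch (ours, not part of the solution) testing this identity on one arbitrary sample polynomial.

    # Check int_0^pi p(x) sin(x) dx = sum_k (-1)**k * (p^(2k)(0) + p^(2k)(pi))
    # for one sample polynomial p.
    import sympy as sp

    x = sp.symbols('x')
    p = x**4 - 3*x**2 + 7*x + 1                 # an arbitrary sample polynomial

    direct = sp.integrate(p * sp.sin(x), (x, 0, sp.pi))
    by_parts = sum((-1)**k * (sp.diff(p, x, 2*k).subs(x, 0) + sp.diff(p, x, 2*k).subs(x, sp.pi))
                   for k in range(int(sp.degree(p, x)) // 2 + 1))
    print(sp.simplify(direct - by_parts))       # prints 0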

The results. 80 students took the exam; the average grade was 69.33/120, the median was 71.5/120 and the standard deviation was 26.51. The overall grade average for the course (of $ X=0.05T_1+0.15T_2+0.1T_3+0.1T_4+0.2HW+0.4\cdot 100(F/120)$) was 68.5, the median was 71.57 and the standard deviation was 18.64. Finally, the transformation $ X\mapsto 100(X/100)^\gamma$ was applied to the grades, with $ \gamma=0.92$. This made the average grade 70.41, the median 73.5 and the standard deviation 17.77. There were 30 A's (grades greater than or equal to 80) and 12 failures (grades below 50).



Dror Bar-Natan 2004-05-10