Table of Contents

  1. Introduction
  2. Notations
  3. $H^{s,\theta}$ spaces
  4. Postlude

Introduction

Consider the Cauchy problem $\Box u = N(u)$ with initial data $(f,g) \in H^s \times H^{s-1}$ where $\Box = - \partial_t^2 + \Delta$. Assume that

  1. $N$ is local in time, that is, for any interval $I \subset R$, the value of $N(u)$ on $I \times R^n$ depends only on the value of $u$ on $I \times R^n$.
  2. $N$ is translation invariant.
  3. $N(0) = 0$.
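
Model nonlinearities satisfying 1–3 (my examples, not from the lecture) are power-type terms and null forms, e.g.

$$ N(u) = \pm |u|^{p-1} u, \qquad N(u) = Q_0(u,u) = \partial_t u \, \partial_t u - \nabla u \cdot \nabla u $$

(the sign convention for $Q_0$ varies).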

By local wellposedness, we mean that for any pair $(f,g) \in H^s \times H^{s-1}$ there exist a $T>0$ and a $u \in C([0,T]; H^s )\cap C^1 ([0,T]; H^{s-1})$ solving the equation in the sense of distributions. Moreover, $T>0$ is bounded below by a positive, continuous function of the $H^s$ size of the initial data.

The map $(f,g) \rightarrow u$ is locally Lipschitz. To prove local wellposedness, we use Picard iteration: we define $u_{j+1} = u_0 + \Box^{-1} N(u_j)$, where $u_0$ is the homogeneous solution with initial data $(f,g)$, and $\Box^{-1}$ is defined by: $v = \Box^{-1} F$ means that $\Box v = F$ together with zero initial data.

To apply the Picard iteration idea, we need to find the right space in which we have a contraction. Today, we will describe such a space and show how it can be used to solve problems like this.
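
The contraction scheme can be illustrated on an ODE analogue (a toy numerical sketch, not the wave equation itself; `cumtrapz` and `picard` are my own illustrative helpers):

```python
import numpy as np

def cumtrapz(y, dt):
    """Cumulative trapezoid rule on a uniform grid, with value 0 at t[0]."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))

def picard(N, u0, t, iterations):
    """Iterate u_{j+1} = u0 + int_0^t N(u_j) ds, the ODE analogue of
    u_{j+1} = u_0 + Box^{-1} N(u_j) with a zero-data Duhamel term."""
    u = u0.copy()
    dt = t[1] - t[0]
    for _ in range(iterations):
        u = u0 + cumtrapz(N(u), dt)
    return u

# Toy problem: u' = u, u(0) = 1, exact solution exp(t).  The iterates are the
# Taylor partial sums of exp, so they converge on any fixed time interval.
t = np.linspace(0.0, 0.5, 501)
u = picard(lambda v: v, np.ones_like(t), t, 20)
print(np.max(np.abs(u - np.exp(t))))  # small (quadrature error only)
```

For the wave equation the same loop runs in a spacetime norm (the $H^{s,\theta}$ norm below) rather than pointwise in time.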

Notations

$$\widehat{ \Lambda^\alpha f} (\xi) = (1 + |\xi|^2)^{\alpha/2} \widehat{f}(\xi).$$

$$\widehat{ \Lambda_{+}^{\alpha} f }(\tau, \xi) = (1 + |\tau| + |\xi|)^{\alpha} \widehat{f}(\tau, \xi).$$

$$\widehat{ \Lambda_{-}^\alpha f} (\tau, \xi) = (1 + \big| |\tau| - |\xi| \big|)^{\alpha} \widehat{f}(\tau, \xi).$$

Similarly, we define the analogous homogeneous operators $D^\alpha$, $D_{+}^{\alpha}$, $D_{-}^{\alpha}$.
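
These are all Fourier multipliers, so they are easy to realize discretely. A numerical sketch (my own illustration, on a 1d periodic grid in $x$, ignoring the time variable): for $\alpha = 2$ the multiplier $(1+|\xi|^2)$ should agree with $(1 - \Delta)f = f - f''$.

```python
import numpy as np

N, L = 1024, 40.0
x = (np.arange(N) - N // 2) * (L / N)        # grid on [-L/2, L/2)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # dual (angular) frequencies

def Lambda(f, alpha):
    """Apply the multiplier (1 + |xi|^2)^(alpha/2) on the Fourier side."""
    return np.fft.ifft((1 + xi**2) ** (alpha / 2) * np.fft.fft(f)).real

f = np.exp(-x**2 / 2)             # Gaussian, essentially compactly supported
lhs = Lambda(f, 2)
rhs = (2 - x**2) * f              # (1 - d^2/dx^2) exp(-x^2/2), computed by hand
print(np.max(np.abs(lhs - rhs)))  # spectral accuracy: tiny
```

The homogeneous operators $D^\alpha$ etc. would use $|\xi|^\alpha$ in place of $(1+|\xi|^2)^{\alpha/2}$.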

We will say that $u \,\tilde{\leq}\, v$ if $|\widehat{u}| \leq |\widehat{v}|$.

If $X$ is a normed space whose norm depends only upon the absolute value of the Fourier transform, we say the norm is compatible with the relation $\tilde{\leq}$ iff $\| u \|_X \leq \| v \|_X$ whenever $u \,\tilde{\leq}\, v$.

$H^{s,\theta}$ spaces

We define $H^{s,\theta} = \{ u \in S' : \Lambda^s \Lambda^{\theta}_{-} u \in L^2 \}$ with the norm

$$\| u \|_{H^{s,\theta}} = \| \Lambda^s \Lambda^{\theta}_{-} u \|_{L^2}.$$

Remarks:

  1. The $H^{s,\theta}$ norm depends only on the absolute value of the Fourier transform, so it is compatible with the relation $\tilde{\leq}$.
  2. The dual space of $H^{s,\theta}$ is the space $H^{-s, -\theta}$.

Representations:

If $u \in H^{s,\theta}$ then we can write $$ u(t) = \frac{1}{2\pi} \int_R \frac{e^{it \lambda} u_\lambda (t)}{(1+|\lambda|)^\theta} d\lambda $$ where $\Box u_\lambda = 0, ~u_\lambda (0) = f_\lambda$. Moreover, $\| u \|_{H^{s,\theta}}^2 = \int \| f_\lambda \|_{H^s}^2 \, d\lambda.$
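
One way to see where this comes from (a sketch in my own normalization; the constant $c_n$ and the splitting into the two sheets $\tau \gtrless 0$ are suppressed): foliate frequency space by the translated cones $\tau = |\xi| + \lambda$. Then

$$ u(t,x) = c_n \int e^{i(t\tau + x \cdot \xi)}\, \widehat{u}(\tau, \xi)\, d\tau\, d\xi = c_n \int_R e^{it\lambda} \left[ \int e^{i(t|\xi| + x \cdot \xi)}\, \widehat{u}(|\xi| + \lambda, \xi)\, d\xi \right] d\lambda. $$

The bracket is a free wave $u_\lambda(t)$ with data $\widehat{f_\lambda}(\xi) = (1+|\lambda|)^\theta\, \widehat{u}(|\xi| + \lambda, \xi)$, and since the weight $(1+\big||\tau|-|\xi|\big|)^\theta = (1+|\lambda|)^\theta$ is constant on each slice, Plancherel in $(\lambda, \xi)$ gives the stated norm identity.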

(This decomposes a general element of $H^{s,\theta}$ into free waves: a level set decomposition in the distance to the cone.)

Theorem: $H^{\frac{n}{2} - \frac{n}{r} - \frac{1}{q}, \theta} \rightharpoonup L^q_t L^r_x$ if $(q,r)$ is $H^s$-wave-admissible (see handout for definitions), provided that $\theta > \frac{1}{2}$.

This can be proved quite easily: the representation formula passes from free waves to the general case, basically using the fact that $\theta > 1/2$.
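
Spelled out (a sketch, with $s = \frac{n}{2} - \frac{n}{r} - \frac{1}{q}$ and the representation formula above):

$$ \| u \|_{L^q_t L^r_x} \leq \frac{1}{2\pi} \int_R \frac{\| u_\lambda \|_{L^q_t L^r_x}}{(1+|\lambda|)^\theta}\, d\lambda \lesssim \int_R \frac{\| f_\lambda \|_{H^s}}{(1+|\lambda|)^\theta}\, d\lambda \leq \left( \int_R \frac{d\lambda}{(1+|\lambda|)^{2\theta}} \right)^{1/2} \left( \int_R \| f_\lambda \|_{H^s}^2\, d\lambda \right)^{1/2} \lesssim \| u \|_{H^{s,\theta}}, $$

by Minkowski's inequality, the Strichartz estimate for free waves, and Cauchy–Schwarz; the first factor is finite precisely because $2\theta > 1$.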

We have various embeddings based on the Sobolev inequality.

The representation formula lets us recast bilinear estimates on spacetime functions in terms of underlying estimates for solutions of the linear wave equation, with the right-hand side depending upon the $H^s$ size of the initial data.

Lemma: If $\alpha > 0$ then $$ D^\alpha_{-} (uv) \,\tilde{\leq}\, (D_{-}^{\alpha} u )\, v + u\, (D^{\alpha}_{-} v) + R^\alpha (u,v)$$ for all $u,v$ with nonnegative Fourier transform. Moreover, we have a representation $$ \mathcal{F} (R^\alpha (u,v)) (\tau, \xi) = \int [r (\tau - \lambda, \xi - \eta; \lambda, \eta)]^\alpha\, \widehat{u}(\tau - \lambda, \xi - \eta)\, \widehat{v} (\lambda, \eta)\, d\lambda\, d\eta $$ where $ r(\tau, \xi; \lambda, \eta) $ is explicit, given by cases according to whether $\tau \lambda \geq 0$ or $\tau \lambda < 0$.
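
The lemma rests on a triangle-type inequality for the symbol of $D_{-}$; in the form I recall from the Klainerman–Selberg notes (the exact expression for $r$ should be checked against the lecture's conventions):

$$ \big| |\tau| - |\xi| \big| \leq \big| |\tau - \lambda| - |\xi - \eta| \big| + \big| |\lambda| - |\eta| \big| + r(\tau - \lambda, \xi - \eta; \lambda, \eta), $$

with $r = |\xi - \eta| + |\eta| - |\xi|$ when $(\tau - \lambda)\lambda \geq 0$ and $r = |\xi| - \big| |\xi - \eta| - |\eta| \big|$ when $(\tau - \lambda)\lambda < 0$. In both cases $r \geq 0$, and $r$ vanishes only for (anti)parallel spatial frequencies, which is the source of null-form gains.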

Theorem F: If $n \geq 2$ and $\gamma,~\gamma_{-},~s_1,~s_2$ satisfy the hypotheses of Theorem C (handout) with $\gamma_{+} = 0$, then

$$ \| D^\gamma R^{\gamma_{-}} (u,v) \|_{L^2 (R^{1+n})} \lesssim \| D^{s_1} u \|_{0, \theta}\, \| D^{s_2} v \|_{0, \theta} $$ provided $\theta > 1/2.$

Let’s see what we can get from this estimate.

A.1 Proposition: Let $n \geq 1,~a,~b,~c,~\alpha,~\beta, ~\gamma \geq 0$ then $H^{a, \alpha} \cdot H^{b, \beta} \rightharpoonup H^{-c, -\gamma}$ provided that $a + b + c > \frac{n}{2}$ and $\alpha + \beta + \gamma > \frac{1}{2}$.

Proof: By Hölder's inequality, we have $L^\infty_t L^2_x \cdot L^2_t L^\infty_x \rightharpoonup L^2_t L^2_x = H^{0,0}$ and $L^\infty_{t,x} \cdot L^2_{t,x} \rightharpoonup L^2_{t,x}$. Combining these with the embeddings above gives $H^{s,0} \cdot H^{0, \theta} \rightharpoonup H^{0,0}$ and $H^{s, \theta} \cdot H^{0,0} \rightharpoonup H^{0,0}$.

Let $s = a + b + c > \frac{n}{2}$ and set $\theta = \alpha + \beta + \gamma > \frac{1}{2}$. We have $H^{s, \alpha + \beta} \cdot H^{0, \gamma} \rightharpoonup H^{0,0}$. By duality, we also obtain $H^{0,0} \cdot H^{s, \alpha + \beta} \rightharpoonup H^{0, -\gamma}$. We can then interpolate between these estimates.
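
For instance, the duality step is just Cauchy–Schwarz against the pairing (a sketch, using that all the norms involved are compatible with $\tilde{\leq}$, so complex conjugates can be ignored):

$$ \| w u \|_{H^{0, -\gamma}} = \sup_{\| v \|_{H^{0,\gamma}} \leq 1} \left| \int w u\, \bar{v} \right| \leq \| w \|_{L^2} \sup_{\| v \|_{H^{0,\gamma}} \leq 1} \| u v \|_{L^2} \lesssim \| w \|_{H^{0,0}}\, \| u \|_{H^{s, \alpha + \beta}}, $$

the last step by the product estimate $H^{s,\alpha+\beta} \cdot H^{0,\gamma} \rightharpoonup H^{0,0}$.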

Theorem: Let $n \geq 2, ~ s > \frac{n}{2}, ~ \frac{1}{2} < \theta \leq s - \frac{n-1}{2}.$ Then $$ H^{a,\alpha} \cdot H^{s, \theta} \rightharpoonup H^{a, \alpha}, $$ for all $a, \alpha $ satisfying $ 0 \leq \alpha \leq \theta, ~ - s + \alpha < a \leq s$.

Proof: It suffices to prove a few endpoint cases of the estimate; these are shown to suffice based on the Leibniz rule and some interpolations.

Postlude