$\renewcommand{\Re}{\operatorname{Re}}$
$\renewcommand{\Im}{\operatorname{Im}}$
$\newcommand{\erf}{\operatorname{erf}}$
$\newcommand{\dag}{\dagger}$
$\newcommand{\const}{\mathrm{const}}$
$\newcommand{\arcsinh}{\operatorname{arcsinh}}$
$\newcommand{\diag}{\operatorname{diag}}$
##2.8. Hyperbolic first order systems with one spatial variable
-------------------------------
> 1. [Definition](#sect-2.8.1)
> 2. [Completely separable systems](#sect-2.8.2)
> 3. [IVP (Cauchy problem)](#sect-2.8.3)
> 4. [IBVP](#sect-2.8.4)
> 5. [IBVP: Compatibility condition](#sect-2.8.5)
> 6. [General case](#sect-2.8.6)
###Definition
We consider the system
\begin{equation}
E U\_t + AU\_x+ BU=F
\label{eq-2.8.1}
\end{equation}
where $E,A,B$ are $n\times n$-matrices, $U$ is an unknown $n$-vector (column) and
$F$ is a known $n$-vector (column).
###Completely separable systems
Assume that $E$ and $A$ are constant matrices, $E$ is non-degenerate, and $E^{-1}A$ has real eigenvalues $\lambda\_1,\ldots, \lambda\_n$ and is diagonalisable: $E^{-1}A= Q\Lambda Q^{-1}$ with $\Lambda =\diag(\lambda\_1,\ldots, \lambda\_n)$ (the diagonal matrix with $\lambda\_1,\ldots, \lambda\_n$ on the diagonal).
Then substituting $U=QV$ (or $V=Q^{-1}U$) we have
\begin{equation\*}
QV\_t + E^{-1}AQV\_x+ E^{-1}BQV=E^{-1}F
\end{equation\*}
or
\begin{equation}
V\_t + \Lambda V\_x+ Q^{-1}E^{-1}BQV=Q^{-1}E^{-1}F.
\label{eq-2.8.2}
\end{equation}
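This diagonalization is easy to carry out numerically. Below is a minimal sketch with numpy, taking as a hypothetical example the 1-D wave equation $u\_{tt}=c^2u\_{xx}$ rewritten as a first order system for $U=(u\_t,u\_x)^T$ (so $E=I$ and $B=0$):

```python
import numpy as np

# Hypothetical example: u_tt = c^2 u_xx as a first-order system
# U_t + A U_x = 0 for U = (u_t, u_x), with E = I, B = 0.
c = 2.0
E = np.eye(2)
A = np.array([[0.0, -c**2],
              [-1.0,  0.0]])

M = np.linalg.solve(E, A)        # E^{-1} A
lam, Q = np.linalg.eig(M)        # columns of Q are eigenvectors: M = Q Λ Q^{-1}
Lambda = np.diag(lam)

# V = Q^{-1} U decouples the principal part: Q^{-1} E^{-1} A Q = Λ
assert np.allclose(np.linalg.inv(Q) @ M @ Q, Lambda)
print(np.sort(lam.real))         # the characteristic speeds -c and c
```

The eigenvalues returned are the characteristic speeds $\pm c$, and conjugating $E^{-1}A$ by $Q$ indeed produces $\Lambda$, matching the convention $E^{-1}A=Q\Lambda Q^{-1}$ used above.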
In particular if $Q^{-1}E^{-1}BQ$ is also a diagonal matrix: $Q^{-1}E^{-1}BQ=\diag(\alpha\_1,\ldots,\alpha\_n)$ (which is the case provided $B=0$)
we have $n$ separate equations
\begin{equation}
V\_{j,t} +\lambda\_j V\_{j,x} +\alpha\_j V\_j = f\_j
\label{eq-2.8.3}
\end{equation}
and we can apply the theory of [Section 2.1](./S2.1.html).
**Definition 1.**
Lines $x-\lambda\_j t= \const$ are called *characteristics*, and $V\_j$ are called *Riemannian invariants*. If $\alpha\_j=0$ and $f\_j=0$, these Riemannian invariants are constant along the corresponding characteristics.
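Each scalar equation (\ref{eq-2.8.3}) can be solved explicitly along its characteristics. A minimal numerical sketch, with $f\_j=0$ and hypothetical values of $\lambda$, $\alpha$ and initial datum $g$:

```python
import numpy as np

# Sketch (assumed setup): V_t + lam*V_x + alpha*V = 0, V(x, 0) = g(x).
# Along a characteristic x - lam*t = const we have dV/dt = -alpha*V,
# hence V(x, t) = exp(-alpha*t) * g(x - lam*t).
lam, alpha = 1.5, 0.3
g = lambda x: np.exp(-x**2)      # hypothetical initial datum

def V(x, t):
    return np.exp(-alpha * t) * g(x - lam * t)

# exp(alpha*t) * V is constant along each characteristic:
x0 = 0.7
ts = np.linspace(0.0, 2.0, 5)
vals = np.exp(alpha * ts) * V(x0 + lam * ts, ts)
assert np.allclose(vals, g(x0))
```

With $\alpha=0$ the factor $e^{-\alpha t}$ disappears and the invariant is literally constant along characteristics, as stated in Definition 1.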
###IVP (Cauchy problem)
Consider the Cauchy problem: $U|\_{t=0}= G(x)$, $x\in \mathbb{R}$.
**Proposition 1.**
1. Let $E=I$, $A=\Lambda$ (already diagonalized) and $B=0$. Then $U\_j$ at a point $P$ is defined by $G\_j(P\_j)$ and by $F\_j$ on the segment of the characteristic connecting $P$ and $P\_j$, where $P\_j$ is the intersection of the line $x-\lambda\_j t=\const$ passing through $P$ with $\\{t=0\\}$ ($j=1,\ldots,n$).
2. Let $B=0$. Then $U$ at a point $P$ is defined by $G(P\_j)$ and by $F$ on the segments of the characteristics connecting $P$ and $P\_j$ ($j=1,\ldots,n$).
*Proof.* The first statement is obvious, and the second follows from it. Note that the transform by $Q$ mixes the components of $U$, $F$, and $G$.
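Proposition 1(2) can be checked numerically. A sketch with hypothetical constant matrices, $B=0$ and $F=0$: the solution at $P$ is assembled from the values of $G$ at the feet $P\_j$ of the characteristics through $P$.

```python
import numpy as np

# V = Q^{-1} U satisfies V_{j,t} + lam_j V_{j,x} = 0, so
# V_j(x, t) = V_j(x - lam_j * t, 0), and U = Q V depends only on
# G evaluated at the feet P_j (hypothetical matrices and datum below).
E = np.eye(2)
A = np.array([[0.0, -4.0],
              [-1.0, 0.0]])                       # speeds -2 and 2
lam, Q = np.linalg.eig(np.linalg.solve(E, A))
Qinv = np.linalg.inv(Q)
G = lambda x: np.array([np.sin(x), np.cos(x)])    # initial datum U(x, 0) = G(x)

def U(x, t):
    V = np.array([(Qinv @ G(x - lam[j] * t))[j] for j in range(2)])
    return Q @ V

x, t, h = 0.3, 0.5, 1e-5
assert np.allclose(U(x, 0.0), G(x))               # initial condition holds
# central-difference check that U_t + A U_x = 0 at (x, t):
Ut = (U(x, t + h) - U(x, t - h)) / (2 * h)
Ux = (U(x + h, t) - U(x - h, t)) / (2 * h)
assert np.allclose(Ut + A @ Ux, 0.0, atol=1e-6)
```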
###IBVP
Consider now equations (\ref{eq-2.8.3}) in the domain $\Omega=\\{t>0, x> \mu t\\}$, assuming that the *lateral boundary* $\Gamma=\\{x=\mu t, t>0\\}$ is not a characteristic, i.e. $\mu$ does not coincide with any of the numbers $\lambda\_1,\ldots,\lambda\_n$. Then (renumbering the Riemannian invariants if necessary) we have
\begin{equation}
\lambda\_1\le \ldots \le \lambda\_m < \mu <\lambda\_{m+1}\le \ldots \le \lambda\_n.
\label{eq-2.8.4}
\end{equation}
Then, with the equation in $\Omega$ and initial data on $\\{t=0, x>0\\}$, we can find $V\_1,\ldots,V\_m$ everywhere in $\Omega$ and thus on $\Gamma$; we call $V\_1,\ldots,V\_m$ *incoming Riemannian invariants*. On the other hand, this defines $V\_{m+1},\ldots, V\_n$ only for $x\ge \lambda\_{m+1}t,\ldots, x\ge \lambda\_n t$ respectively, and therefore not on $\Gamma$; we call them *outgoing Riemannian invariants*.
To define the *outgoing Riemannian invariants* $V\_{m+1},\ldots, V\_n$ on $\Gamma$, and thus in the rest of $\Omega$, we need a boundary condition $CU|\_\Gamma =H$, where $C$ is an $(n-m)\times n$-matrix and $H=H(t)$ is an $(n-m)$-vector.
Indeed, we need as many boundary conditions as there are outgoing Riemannian invariants. However, this alone is not sufficient: we must also assume the following *non-degeneracy* condition: the $(n-m)\times (n-m)$-matrix $C'$ obtained from $C$ by selecting the last $(n-m)$ columns (those corresponding to the outgoing Riemannian invariants) is non-degenerate.
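At each boundary point this amounts to solving an $(n-m)\times(n-m)$ linear system for the outgoing invariants. A sketch in the diagonalized case ($E=I$, $A=\Lambda$), with hypothetical $n=3$, $m=1$, matrix $C$, data $H$ and incoming values:

```python
import numpy as np

# Hypothetical data: n = 3, one incoming invariant (m = 1).
# The boundary condition C U|_Gamma = H determines the n - m = 2
# outgoing invariants provided C' (last n - m columns) is non-degenerate.
n, m = 3, 1
C = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])        # (n-m) x n
H = np.array([3.0, 4.0])
U_in = np.array([5.0])                 # incoming invariants, known on Gamma

C_in, C_out = C[:, :m], C[:, m:]
assert abs(np.linalg.det(C_out)) > 1e-12          # non-degeneracy assumption
U_out = np.linalg.solve(C_out, H - C_in @ U_in)   # outgoing invariants on Gamma
assert np.allclose(C @ np.concatenate([U_in, U_out]), H)
```

If $C'$ were degenerate, the system would fail to determine the outgoing invariants for some (or every) right-hand side, which is exactly why the non-degeneracy assumption is needed.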
###IBVP: Compatibility condition
The solution is continuous if and only if the *compatibility condition* $CG(0)=H(0)$ holds. Think why. If this condition fails, $U$ is discontinuous (has jumps) along the characteristics going into $\Omega$ from the corner point $(0,0)$.
Even if the solution is continuous, it is not necessarily continuously differentiable (one needs more compatibility conditions for this); still more compatibility conditions are needed for $U$ to be twice continuously differentiable, etc.
**Problem 1.**
a. Prove compatibility condition $CG(0)=H(0)$ for continuity of solution $U$;
b. Derive a compatibility condition for continuity of the first derivatives $U\_t,U\_x$ of the solution;
c. Derive a compatibility condition for continuity of the second derivatives $U\_{tt},U\_{tx}, U\_{xx}$ of the solution.
###General case
What happens if $Q^{-1}E^{-1}BQ$ is not a diagonal matrix? More generally, consider $E=E(x,t)$, $A=A(x,t)$, $B=B(x,t)$. Then, assuming that $E^{-1}A$ is *smoothly diagonalisable* (i.e. one can select a smooth matrix $Q=Q(x,t)$; this is the case provided $\lambda\_1,\ldots,\lambda\_n$ are real and distinct at every point), we have
\begin{equation}
V\_t + \Lambda V\_x + Q^{-1}E^{-1}\bigl(EQ\_t+AQ\_x +BQ\bigr)V=Q^{-1}E^{-1}F;
\label{eq-2.8.5}
\end{equation}
so while the principal part of the system decouples into separate equations, they remain entangled through the lower-order terms.
The main conclusions, however, remain the same:
####IVP
For the Cauchy problem, consider a point $P$ and the triangle $\Delta(P)$ bounded by two characteristics through $P$, the leftmost and the rightmost going back in time, and by the initial line. This triangle (curvilinear if the $\lambda\_j$ are not constant) is the domain of dependence of $P$.
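For constant $\lambda\_j$ the triangle $\Delta(P)$ can be written down explicitly; a small sketch with hypothetical speeds:

```python
# Sketch: for constant speeds lam_j (hypothetical values below), the domain of
# dependence of P = (x, t) is the triangle with apex P whose base on {t = 0}
# is the interval [x - lam_max * t, x - lam_min * t]; every characteristic
# foot P_j lies inside that interval.
lams = [-2.0, 0.5, 2.0]
x, t = 1.0, 0.8
feet = [x - lam * t for lam in lams]               # the feet P_j
left, right = x - max(lams) * t, x - min(lams) * t
assert left <= min(feet) and max(feet) <= right
```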