Table of contents

Analysis of two-dimensional systems

1. Sketch the vector field via nullclines

Nullclines are a quick way to understand a two-dimensional vector field. Given a vector field
\bfF(x, y) = \bmat{f(x, y) \\ g(x, y)},
the x-nullclines are the curves defined by the equation
f(x, y) = 0,
and the y-nullclines are the curves defined by
g(x, y) = 0.
It follows that on an x-nullcline the vector field is vertical (its x-component vanishes), on a y-nullcline it is horizontal (its y-component vanishes), and the intersections of x-nullclines with y-nullclines are exactly the equilibria.
Quite often, this is already enough to sketch the phase portrait.
Example 1.1.
\begin{aligned} \dot{x} & = - x \\ \dot{y} & = - 2y + 2 x^3 \end{aligned}
Example 1.2.
\begin{aligned} \dot{x} & = - x + xy \\ \dot{y}& = xy - 1 \end{aligned}
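As an illustration, here is a minimal numerical sketch of the nullclines and direction field for Example 1.1 (assuming numpy and matplotlib are available; the grid and plotting window are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

# Example 1.1: x' = -x, y' = -2y + 2x^3
f = lambda x, y: -x
g = lambda x, y: -2 * y + 2 * x**3

X, Y = np.meshgrid(np.linspace(-2, 2, 25), np.linspace(-2, 2, 25))
U, V = f(X, Y), g(X, Y)
N = np.hypot(U, V)
N[N == 0] = 1.0                                          # avoid dividing by zero at the equilibrium

plt.quiver(X, Y, U / N, V / N, color="gray")             # normalized direction field
plt.contour(X, Y, f(X, Y), levels=[0], colors="blue")    # x-nullcline: x = 0
plt.contour(X, Y, g(X, Y), levels=[0], colors="red")     # y-nullcline: y = x^3
plt.xlabel("x"); plt.ylabel("y")
plt.show()
```

The x-nullcline x = 0 and the y-nullcline y = x^3 intersect only at the origin, the unique equilibrium.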

2. Local analysis of fixed points

Definition 2.1.
Let (x_0, y_0) be an equilibrium of \bfF(x, y) = \bmat{f(x, y) \\ g(x, y)}. The linearized system at (x_0, y_0) is the linear system
\bmat{\dot{u} \\ \dot{v}} = A \bmat{u \\ v}, \quad \text{ where } A = \bmat{\partial_x f & \partial_y f \\ \partial_x g & \partial_y g}(x_0, y_0).\qquad (2.1)
The matrix A can also be written as D\bfF(x_0, y_0) which is the Jacobi matrix of the vector field \bfF at (x_0, y_0).
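To illustrate Definition 2.1, the matrix A can be computed symbolically; a minimal sketch (assuming sympy is available), applied to the field of Example 1.2:

```python
import sympy as sp

x, y = sp.symbols("x y")
f = -x + x * y        # Example 1.2
g = x * y - 1

A = sp.Matrix([f, g]).jacobian([x, y])       # Jacobi matrix DF(x, y)
for eq in sp.solve([f, g], [x, y], dict=True):
    print(eq, A.subs(eq))                    # A evaluated at each equilibrium
```

For Example 1.2 the only equilibrium is (1, 1), where A = \bmat{0 & 1 \\ 1 & 1}.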
Proposition 2.2.
Suppose \bfF is C^1, (x_0, y_0) is an equilibrium, and A = D\bfF(x_0, y_0). After the coordinate change u = x - x_0, v = y - y_0, we have
\begin{aligned} \bmat{\dot{u} \\ \dot{v}} = A \bmat{u \\ v} + \bmat{R_1(u, v) \\ R_2(u, v)} \end{aligned}\qquad (2.2)
where
\lim_{|(u, v)| \to 0} \frac{R_j(u, v)}{|(u, v)|} = 0, \quad j = 1, 2.
The matrix A plays the role of f'(x_0) for 1d systems.
Question 2.3.
Do the phase portraits of (2.1) and (2.2) look similar?

2.1. The saddle

Example 2.4.
\bmat{\dot{x} \\ \dot{y}} = \bmat{ x + y^3 \\ - y + x^3} \quad \text{ vs } \quad \bmat{\dot{x} \\ \dot{y}} = \bmat{x \\ - y}.
Definition 2.5.
We call (x_0, y_0) a saddle fixed point if the linearized system at (x_0, y_0) is a saddle.
Definition 2.6.
Let \bfF(\bx) be a vector field on \R^n and let \bx_0 be an equilibrium.
  • The stable manifold W^s(\bx_0) of \bx_0 is the set of all initial conditions \by_0 so that \phi(t; \by_0) \to \bx_0 as t \to \infty.
  • The unstable manifold W^u(\bx_0) of \bx_0 is the set of all initial conditions \by_0 so that \phi(t; \by_0) \to \bx_0 as t \to -\infty.
Remark 2.7.
The word manifold refers to smooth curves, surfaces, or hypersurfaces. This includes a "surface" of the same dimension as the ambient space; for example, an open set in \R^2 is also considered a manifold.
Theorem 2.8. (The stable/unstable manifold theorem in dimension two).
Let \bfF(x, y) be a vector field in \R^2 and (x_0, y_0) be an equilibrium. Suppose A = D \bfF(x_0, y_0) admits eigenvalues \lambda_1 < 0 < \lambda_2 with eigenvectors \bv_1 and \bv_2. Then:
  • Near (x_0, y_0), the stable manifold of (x_0, y_0) is a smooth curve through (x_0, y_0) tangent to \bv_1.
  • Near (x_0, y_0), the unstable manifold of (x_0, y_0) is a smooth curve through (x_0, y_0) tangent to \bv_2.
Theorem 2.9. (Grobman-Hartman).
If (x_0, y_0) is a saddle fixed point, then there is a continuous mapping from a neighborhood of (x_0, y_0) to a neighborhood of (0, 0), mapping the phase portrait of \dot{\bx} = \bfF(\bx) to the phase portrait of the linearized system.
In other words, the phase portrait of a nonlinear system near a saddle fixed point looks like the phase portrait of its linearized system.
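A rough way to visualize Theorems 2.8 and 2.9 numerically for Example 2.4 is to integrate forward from points displaced slightly along \bv_2 = (1, 0) and backward from points displaced along \bv_1 = (0, 1). This is only a sketch (it assumes scipy and matplotlib; the offset, time span, and stopping radius are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def F(t, z):
    x, y = z
    return [x + y**3, -y + x**3]              # Example 2.4

def escape(t, z):                             # stop once the orbit leaves a box
    return np.hypot(z[0], z[1]) - 2.0
escape.terminal = True

eps, T = 1e-3, 20.0
starts = [((eps, 0.0), (0, T)), ((-eps, 0.0), (0, T)),     # unstable manifold: forward along v2
          ((0.0, eps), (0, -T)), ((0.0, -eps), (0, -T))]   # stable manifold: backward along v1
for z0, tspan in starts:
    sol = solve_ivp(F, tspan, z0, events=escape, max_step=0.05)
    plt.plot(sol.y[0], sol.y[1])
plt.gca().set_aspect("equal")
plt.show()
```

Near the origin the two traced curves are tangent to the coordinate axes, as Theorem 2.8 predicts.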

2.2. Attracting fixed points

Example 2.10.
\bmat{\dot{x} \\ \dot{y}} = \bmat{ - x + y^3 \\ - 2 y + x^3} \quad \text{ vs } \quad \bmat{\dot{x} \\ \dot{y}} = \bmat{- x \\ - 2 y}.
Definition 2.11.
A fixed point (x_0, y_0) is called attracting if there is r > 0 such that every (x, y) with |(x - x_0, y - y_0)| < r satisfies
\phi(t; (x, y)) \to (x_0, y_0), \quad t \to \infty.
A fixed point (x_0, y_0) is called repelling if it's attracting in backward time.
We have the following:
Theorem 2.12.
Suppose (x_0, y_0) is an equilibrium of \bfF(x, y), and the linearized system has two negative eigenvalues. Then (x_0, y_0) is attracting.
We will actually give a proof of the following simpler version.
Theorem 2.13.
For the system
\bmat{\dot{u} \\ \dot{v}} = \bmat{\lambda_1 u + R_1(u, v) \\ \lambda_2 v + R_2(u, v)},\qquad (2.3)
where \lambda_1 \le \lambda_2 < 0 and
\lim_{|(u, v)| \to 0} \frac{R_j(u, v)}{|(u, v)|} = 0, \quad j = 1, 2,
we have (0, 0) is an attracting fixed point.
To prove this theorem, we need the following (very useful) lemma.
Lemma 2.14.
Suppose D(t) \ge 0 and
\dot{D}(t) \le - a D(t)
for some a > 0, then
D(t) \le e^{-at} D(0).
In particular \lim_{t \to \infty} D(t) = 0.
Proof.
Since
\dot{D}(t) + aD(t) \le 0,
we multiply both sides by e^{at} to get
\frac{d}{dt}\left( e^{at} D(t)\right) \le 0,
therefore
e^{at} D(t) - D(0) \le 0, \quad \text{ or } \quad D(t) \le e^{-at}D(0).
Proof of Theorem 2.13.
Using
\lim_{|(u, v)| \to 0} \frac{R_j(u, v)}{|(u, v)|} = 0, \quad j = 1, 2,
for any \epsilon > 0, we can find r > 0 such that whenever |(u, v)| < r, we have
|R_1|, |R_2| < \epsilon |(u, v)| = \epsilon \sqrt{u^2 + v^2}.
We now have
\begin{aligned}& \frac{d}{dt}\left( u^2(t) + v^2(t)\right) = 2 u \dot{u} + 2v \dot{v} = 2 u(\lambda_1 u + R_1) + 2v(\lambda_2 v + R_2) \\ & \quad = 2\lambda_1 u^2 + 2 \lambda_2 v^2 + 2u R_1 + 2v R_2 \le 2\lambda_2(u^2 + v^2) + 2\epsilon |(u, v)| (|u| + |v|) \\ & \quad \le 2 \lambda_2 (u^2 + v^2) + 4\epsilon (u^2 + v^2) = (2\lambda_2 + 4\epsilon) (u^2 + v^2). \end{aligned}
Set D(t) = u^2(t) + v^2(t) and a = -(2\lambda_2 + 4\epsilon). If \epsilon is chosen small enough, then a > 0 and the estimate above reads \dot{D}(t) \le - a D(t). Since D is then decreasing, a solution starting with |(u, v)| < r remains in the ball of radius r, so the estimate stays valid for all t \ge 0. By Lemma 2.14,
\lim_{t \to \infty} D(t) = 0.
This proves (0, 0) is attracting.
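As a sanity check on the proof (a sketch assuming scipy; the initial condition and times below are arbitrary), one can integrate Example 2.10 and verify that D(t) = x^2(t) + y^2(t) decays to 0:

```python
import numpy as np
from scipy.integrate import solve_ivp

def F(t, z):
    x, y = z
    return [-x + y**3, -2 * y + x**3]       # Example 2.10

sol = solve_ivp(F, (0, 10), [0.5, -0.4], dense_output=True)
t = np.linspace(0, 10, 6)
u, v = sol.sol(t)
print(np.round(u**2 + v**2, 6))             # D(t): decreases monotonically toward 0
```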
Example 2.15.
\bmat{\dot{x} \\ \dot{y}} = \bmat{- x + y - x(x^2 + y^2) \\ - x - y - y(x^2 + y^2)} \quad \text{ vs } \quad \bmat{\dot{x} \\ \dot{y}} = \bmat{- x + y \\ - x - y}.
Theorem 2.16.
If the linearized system at (x_0, y_0) has eigenvalues \alpha \pm \beta i with \alpha < 0, then (x_0, y_0) is an attracting fixed point.
Exercise 2.17. (*).
Prove Theorem 2.16 in the special case
D\bfF(x_0, y_0) = \bmat{\alpha & \beta \\ -\beta & \alpha},
where \alpha < 0. You should emulate the proof of Theorem 2.13.
Theorem 2.18.
If the linearized system at (x_0, y_0) is either an attracting node or an attracting focus, then the phase portrait of the nonlinear system near (x_0, y_0) is similar to the phase portrait of the linearized system.
Remark 2.19.
If the linearized system is either an attracting star or an attracting degenerate node, then the nonlinear fixed point is still attracting, but the phase portrait may look different from the star or the degenerate node.

2.3. Degenerate cases

Example 2.20.
\bmat{\dot{x} \\ \dot{y}} = \bmat{y - x(x^2 + y^2) \\ -x - y(x^2 + y^2)} \quad \text{ vs } \quad \bmat{\dot{x} \\ \dot{y}} = \bmat{y \\ - x}.
When the linearized system is an elliptic centre, the fixed point of the nonlinear system need not be a centre. A general rule of thumb is that features of a degenerate linear part do not carry over to the nonlinear system.
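A numerical illustration of Example 2.20 (a sketch assuming scipy and matplotlib; the initial condition and time span are arbitrary): the linearized system traces a circle, while the nonlinear orbit slowly spirals into the origin:

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def nonlinear(t, z):
    x, y = z
    r2 = x**2 + y**2
    return [y - x * r2, -x - y * r2]        # Example 2.20

def linear(t, z):
    x, y = z
    return [y, -x]                          # linearization at the origin: a centre

for rhs, style in [(nonlinear, "-"), (linear, "--")]:
    sol = solve_ivp(rhs, (0, 60), [1.0, 0.0], max_step=0.05)
    plt.plot(sol.y[0], sol.y[1], style)
plt.gca().set_aspect("equal")
plt.show()
```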

2.4. Linear analysis of higher dimensional systems

Both the characterization of the attracting / repelling fixed point and the Grobman-Hartman theorem can be generalized to higher dimensions.
Theorem 2.21.
Let \bx_0 be an equilibrium for the equation \dot{\bx} = \bfF(\bx) with \bx \in \R^n, and assume that all the eigenvalues of the Jacobi matrix
D\bfF(\bx_0)
have negative real part. Then \bx_0 is an attracting fixed point.
If all eigenvalues have positive real part, then \bx_0 is a repelling fixed point.
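In practice, Theorem 2.21 is applied by computing the eigenvalues of D\bfF(\bx_0) (numerically, if necessary) and checking the signs of their real parts. A minimal sketch assuming numpy, with an arbitrary 3 x 3 matrix standing in for the Jacobi matrix:

```python
import numpy as np

A = np.array([[-1.0,  2.0,  0.0],      # stand-in for the Jacobi matrix DF(x0)
              [-2.0, -1.0,  0.5],
              [ 0.0,  0.0, -0.3]])

eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)
if np.all(eigenvalues.real < 0):
    print("x0 is attracting")
elif np.all(eigenvalues.real > 0):
    print("x0 is repelling")
```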
Theorem 2.22. (Stable/unstable manifold and Grobman-Hartman in \R^n).
Let \bx_0 be an equilibrium for the equation \dot{\bx} = \bfF(\bx) with \bx \in \R^n, and assume that the Jacobi matrix D\bfF(\bx_0) admits eigenvalues \lambda_1, \cdots, \lambda_n \in \C with eigenvectors \bv_1, \cdots, \bv_n. Suppose in addition that
\Re \lambda_1, \cdots, \Re \lambda_k < 0, \quad \Re \lambda_{k+1}, \cdots, \Re \lambda_n > 0.
Then:
  • Near \bx_0, the stable manifold W^s(\bx_0) is a smooth k-dimensional surface through \bx_0 tangent to the span of \bv_1, \cdots, \bv_k.
  • Near \bx_0, the unstable manifold W^u(\bx_0) is a smooth (n - k)-dimensional surface through \bx_0 tangent to the span of \bv_{k+1}, \cdots, \bv_n.
  • There is a continuous mapping from a neighborhood of \bx_0 to a neighborhood of the origin, mapping the phase portrait of \dot{\bx} = \bfF(\bx) to the phase portrait of the linearized system.

3. Using linear analysis to study systems

3.1. Examples and applications

Example 3.1.
\dot{x} = xy - 1, \quad \dot{y} = x - y^3.
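For Example 3.1, the equilibria and their linearizations can be found symbolically (a sketch assuming sympy is available):

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = x * y - 1            # Example 3.1
g = x - y**3

A = sp.Matrix([f, g]).jacobian([x, y])
for eq in sp.solve([f, g], [x, y], dict=True):
    print(eq, list(A.subs(eq).eigenvals()))    # classify via the eigenvalues
```

This finds the equilibria (1, 1) and (-1, -1); the first is a saddle, the second is attracting.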
Example 3.2.
The system
\begin{aligned} \dot{x} & = x(K - x - ay) \\ \dot{y} & = y(L - bx - y) \end{aligned}
models two competing populations. The value K - x - ay is the per-capita growth rate of species x, which has a logistic component K - x (growth limited by resources) and a competitive component - ay (growth decreased by the competitor). The same model applies to the species y.
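For illustration, the same symbolic computation can be run on the competition model (a sketch assuming sympy; the parameter values K = 3, L = 2, a = 2, b = 1 are arbitrary choices, not taken from the text):

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
K, L, a, b = 3, 2, 2, 1                   # illustrative parameter values
f = x * (K - x - a * y)
g = y * (L - b * x - y)

A = sp.Matrix([f, g]).jacobian([x, y])
for eq in sp.solve([f, g], [x, y], dict=True):
    print(eq, list(A.subs(eq).eigenvals()))
```

With these values the equilibria are (0, 0), (3, 0), (0, 2), and the coexistence state (1, 1).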
Example 3.3. (Chemostat).
A chemostat is a device for growing microorganisms: nutrient solution is pumped into the system at a constant rate, and culture fluid is removed at the same rate, so the volume stays constant.
Let S denote the nutrient concentration and x the concentration of the microorganism. The growth rate of the population given S is
r(S) = \frac{m S}{a + S}.
This means the rate saturates at m as S \to \infty (no infinite growth rate even with unlimited food), and r(S) \approx \frac{m}{a} S goes to 0 linearly as S \to 0. Since a constant proportion of the population is removed by the outflow, we get
\dot{x} = r(S) x - D x.
For the nutrient, an amount proportional to D is pumped in, say CD, where C is the concentration of the inflowing nutrient. Nutrient is also removed at the current concentration, so the amount removed is SD. Finally, an amount proportional to the growth of the population is consumed, say \beta r(S) x. This leads to the equation
\dot{S} = (C - S)D - \beta r(S) x.
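A simulation sketch of the chemostat (assuming scipy; the parameter values m, a, D, C, \beta and the initial condition below are illustrative, not taken from the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, a, D, C, beta = 2.0, 1.0, 0.5, 4.0, 1.0     # illustrative parameters

def chemostat(t, z):
    x, S = z
    r = m * S / (a + S)                         # growth rate r(S)
    return [r * x - D * x,                      # x' = r(S) x - D x
            (C - S) * D - beta * r * x]         # S' = (C - S) D - beta r(S) x

sol = solve_ivp(chemostat, (0, 80), [0.1, 4.0])
print(sol.y[:, -1])                             # approximate steady state (x*, S*)
```

With these values the population settles at the steady state where r(S*) = D, i.e. S* = aD/(m - D) = 1/3, and x* = (C - S*)/\beta.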

3.2. Bifurcation of two-dimensional systems

Example 3.4. (Saddle-node bifurcation).
\begin{aligned} \dot{x} & = \mu - y - x^2 \\ \dot{y} & = - y, \end{aligned}
where \mu \in \R is a parameter.
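The saddle-node can be seen by tracking the fixed points as \mu varies: since \dot{y} = -y forces y = 0 at any equilibrium, they are (\pm\sqrt{\mu}, 0), which exist only for \mu \ge 0. A symbolic sketch assuming sympy:

```python
import sympy as sp

x, y, mu = sp.symbols("x y mu", real=True)
f = mu - y - x**2          # Example 3.4
g = -y

A = sp.Matrix([f, g]).jacobian([x, y])
for eq in sp.solve([f, g], [x, y], dict=True):
    print(eq, list(A.subs(eq).eigenvals()))
```

For \mu > 0, (\sqrt{\mu}, 0) has eigenvalues -2\sqrt{\mu} and -1 (attracting), while (-\sqrt{\mu}, 0) has eigenvalues 2\sqrt{\mu} and -1 (a saddle); the two collide and disappear at \mu = 0.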
Example 3.5. (Pitchfork bifurcation).
\begin{aligned} \dot{x} & = \mu x - y - x^3 \\ \dot{y} & = - y \end{aligned}
where \mu \in \R is a parameter.
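As in the previous example, \dot{y} = -y forces y = 0 at any equilibrium, so the fixed points solve \mu x - x^3 = 0, giving x = 0 for all \mu and x = \pm\sqrt{\mu} for \mu > 0: the pitchfork. A symbolic sketch assuming sympy:

```python
import sympy as sp

x, y, mu = sp.symbols("x y mu", real=True)
f = mu * x - y - x**3      # Example 3.5
g = -y

A = sp.Matrix([f, g]).jacobian([x, y])
for eq in sp.solve([f, g], [x, y], dict=True):
    print(eq, list(A.subs(eq).eigenvals()))
```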