0.2. The geometry of Euclidean space

$\newcommand{\R}{\mathbb R }$ $\newcommand{\bfa}{\mathbf a}$ $\newcommand{\bfb}{\mathbf b}$ $\newcommand{\bfu}{\mathbf u}$

Most of this is a review of topics that should be familiar from linear algebra.

  1. Dot product, Euclidean norm, orthogonality
  2. Some common notation
  3. Points vs. vectors
  4. Row vectors vs. column vectors
  5. Subspaces of Euclidean space
  6. Cross product

Dot product, Euclidean norm, orthogonality

If $\bfa = (a_1,\ldots, a_n)$ and $\bfb = (b_1,\ldots, b_n)$ are vectors in $\R^n$, then the dot product $\bfa\cdot \bfb$ is defined by $$ \bfa\cdot \bfb = a_1 b_1 +\cdots + a_n b_n $$

We use the dot product to define the Euclidean norm of $\bfa$: $$ |\bfa| := (\bfa \cdot \bfa)^{1/2} = \sqrt{ a_1^2+\cdots + a_n^2}. $$ The Euclidean norm of $\bfa$ is interpreted as the length of $\bfa$.

The Euclidean norm of $\bfb - \bfa$ is interpreted as the distance between $\bfa$ and $\bfb$, or as the length of the vector $\bfb-\bfa$.
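For example, these quantities can be computed numerically as follows (a minimal sketch, assuming NumPy is available; the vectors $\bfa$ and $\bfb$ are arbitrarily chosen examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

dot = np.dot(a, b)                 # a . b = 1*3 + 2*0 + 2*4 = 11
norm_a = np.sqrt(np.dot(a, a))     # |a| = (a . a)^{1/2} = 3
dist = np.linalg.norm(b - a)       # distance between a and b, i.e. |b - a|

print(dot, norm_a, dist)
```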

Important facts

  1. Cauchy's Inequality (also called the Cauchy-Schwarz inequality): \[ |\bfa\cdot \bfb| \le |\bfa|\ |\bfb|. \]

  2. The Triangle Inequality: \[ | \bfa + \bfb | \le |\bfa| + |\bfb|. \]

  3. The cosine formula: \[ \bfa\cdot \bfb = |\bfa|\, |\bfb| \cos \theta, \] where $\theta$ is the angle between vectors $\bfa$ and $\bfb$. (Note that Cauchy's inequality can easily be deduced from this.)

  4. In particular, nonzero vectors $\bfa$ and $\bfb$ are orthogonal (the angle $\theta$ between them is $\pi/2$) if and only if $\bfa\cdot\bfb = 0$.

  5. If $\bfu$ is a unit vector (that is, if $|\bfu|=1$) and ${\bf v}$ is any vector, then $({\bf u}\cdot {\bf v})\,{\bf u}$ is the projection of ${\bf v}$ onto the line generated by $\bfu$. (These facts are checked numerically in the sketch following this list.)
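The following sketch (again assuming NumPy, with arbitrarily chosen example vectors) checks facts 1, 2, and 5 numerically and computes $\theta$ from the cosine formula:

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([3.0, 0.0, 4.0])

# 1. Cauchy's inequality and 2. the triangle inequality
assert abs(np.dot(a, b)) <= np.linalg.norm(a) * np.linalg.norm(b)
assert np.linalg.norm(a + b) <= np.linalg.norm(a) + np.linalg.norm(b)

# 3. the angle between a and b, from a . b = |a||b| cos(theta)
theta = np.arccos(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# 5. projection of v onto the line generated by the unit vector u
u = a / np.linalg.norm(a)          # a unit vector, |u| = 1
v = b
proj = np.dot(u, v) * u            # (u . v) u
assert np.isclose(np.dot(v - proj, u), 0.0)   # the remainder is orthogonal to u
```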

Some common notation

In $n$-dimensional Euclidean space we often write $\bf e_j$ to denote the unit vector in the $j$th coordinate direction. For example, \[ {\bf e}_1 = (1,0,\ldots, 0), \quad {\bf e}_2 = (0,1,\ldots, 0), \quad \ldots\quad {\bf e}_n = (0 ,0,\ldots, 1). \]

When $n=3$ we often write ${\bf i}, {\bf j}, {\bf k}$ instead of ${\bf e}_1,{\bf e}_2, {\bf e}_3$, so that \[ {\bf i} = (1,0,0) , \qquad {\bf j} = (0,1,0) , \qquad {\bf k} = (0,0,1) . \]

If we are discussing a vector $\bfa$ in $\R^n$, then our default convention is that $a_j$ denotes the $j$th component, that is, $a_j = \bfa \cdot {\bf e}_j$.
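In NumPy, for example, the standard basis vectors ${\bf e}_1, \ldots, {\bf e}_n$ appear as the rows of the identity matrix, and the identity $a_j = \bfa \cdot {\bf e}_j$ can be checked directly (a small sketch; note that NumPy indexes from 0, so ${\bf e}_3$ is row 2):

```python
import numpy as np

n = 4
e = np.eye(n)                      # row j-1 is the basis vector e_j
a = np.array([5.0, 6.0, 7.0, 8.0])

assert np.dot(a, e[2]) == a[2]     # a_3 = a . e_3
```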

Points vs. vectors

In some situations, it is important to distinguish between points and vectors, since these may have very different mathematical or physical interpretations. From this perspective, a point describes a position in space, while a vector describes a quantity with a magnitude and a direction, such as a displacement or a velocity.

But in many situations, the distinction between points and vectors can be ignored with no harm. This is normally the case in MAT237. Thus, for us, vector will mean either a vector (narrowly understood) or a point.

We will however tend to use different letters to indicate what we are thinking of. We will normally write \[ {\bf v} = (v_1, \ldots, v_n), \qquad {\bf u} = (u_1, \ldots, u_n) \] when we have in mind a direction-and-magnitude, whereas we will typically write \[ {\bf x} = (x_1, \ldots, x_n), \qquad {\bf y} = (y_1, \ldots, y_n), \qquad {\bf z} = (z_1, \ldots, z_n) \] when we are thinking of a position. We will also often write \[ {\bf x} = (x,y) \ \mbox{ for a point in }\R^2,\qquad {\bf x} = (x,y,z) \ \mbox{ for a point in }\R^3. \] Thus, a boldface $\bf x$ denotes an element of $\R^n$, whereas a non-boldface $x$ denotes a component of a vector. (Different lecturers may have different ways to convey the distinction between $\bf x$ and $x$ on the blackboard. The precise choice does not matter as long as it is reasonable and followed consistently.)

Row vectors vs. column vectors

The vectors discussed above look like row vectors, as we have written them. However, our default rule for this class is: every vector is a column vector, unless we explicitly say otherwise.

Thus, if we write ${\bf x} = (x,y,z)$ for example, it should be viewed as a convenient abbreviation for the column vector $(x,y,z)^T = \left( \begin{array}{c} x \\ y \\ z \end{array}\right)$. Of course, we will sometimes not use this abbreviation and write vectors as columns.

There is a good reason for this rule. For example, we may want to define linear functions, like \begin{equation}\label{0.1.linear} {\bf f}:\R^n\to \R^m, \qquad {\bf f}({\bf x}) = A {\bf x} \end{equation} where $A$ is an $m\times n$ matrix. This is correct if $\bf x$ and ${\bf f}({\bf x})$ are column vectors. If they were row vectors then we would have to write ${\bf f}({\bf x}) = {\bf x} A$, which looks backward. More complicated expressions (for example, involving compositions of functions) look even more backward if everything is written in terms of row vectors, but look fine if we use column vectors.

When we encounter row vectors, they will often arise as $m\times n$ matrices in the special case when $m=1$. For example, if I want to write a linear function ${\bf f}:\R^n\to \R$ in the general form \eqref{0.1.linear}, then $A$ would have to be a $1\times n$ matrix, in other words a row vector.
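For concreteness, here is a sketch of both situations in NumPy (the matrices are arbitrarily chosen examples; NumPy's 1-d arrays play the role of column vectors in the product $A{\bf x}$):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])    # a 2 x 3 matrix, so f(x) = A x maps R^3 -> R^2
x = np.array([1.0, 0.0, -1.0])

f_x = A @ x                        # f(x) = A x, a vector in R^2
print(f_x)

# a linear function R^3 -> R corresponds to a 1 x 3 matrix, i.e. a row vector
B = np.array([[2.0, -1.0, 0.0]])
print(B @ x)                       # a single number (in a length-1 array)
```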

Subspaces of Euclidean space

A subspace of Euclidean space $\R^n$ is a set $V\subseteq \R^n$ such that if $\bfa, \bfb \in V$ then $c_1\bfa + c_2\bfb \in V$ for all real numbers $c_1,c_2$. Note that any (nonempty) subspace must contain the origin, since we can always choose $c_1=c_2=0$.

Suppose that $A$ is an $m\times n$ matrix. Then the null space of $A$, the set of all ${\bf x}\in \R^n$ such that $A{\bf x} = {\bf 0}$, is a subspace of $\R^n$; similarly, the set of all vectors of the form $A{\bf x}$ (the column space of $A$) is a subspace of $\R^m$.
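One can illustrate the null space example numerically (a sketch with an arbitrarily chosen matrix $A$ and null space vectors):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 1.0]])     # a 1 x 4 matrix
v1 = np.array([1.0, -1.0, 0.0, 0.0])     # A v1 = 0, so v1 is in the null space
v2 = np.array([0.0, 0.0, 1.0, -1.0])     # A v2 = 0

c1, c2 = 2.5, -3.0
w = c1 * v1 + c2 * v2                    # an arbitrary linear combination
assert np.allclose(A @ w, 0.0)           # w is still in the null space
```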

Cross product

In $3$-dimensional Euclidean space only, there is a second and very different way of multiplying two vectors. The cross product of vectors $\bfa$ and $\bfb$ is the vector denoted $\bfa\times \bfb$, defined by \[ \bfa\times \bfb = (a_2b_3 - a_3b_2, a_3b_1 - a_1b_3, a_1b_2 - a_2b_1). \] We emphasize that the cross product of two vectors is a vector, whereas the dot product of two vectors is a scalar.
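The component formula can be checked against NumPy's built-in np.cross (a sketch with arbitrarily chosen vectors):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

by_hand = np.array([a[1]*b[2] - a[2]*b[1],
                    a[2]*b[0] - a[0]*b[2],
                    a[0]*b[1] - a[1]*b[0]])
assert np.allclose(by_hand, np.cross(a, b))
```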

One can check by elementary computations that $\bfa\times \bfb$ has the following properties:

  1. It is orthogonal to both $\bfa$ and $\bfb$.
    You can directly check this by computing $\bfa \cdot (\bfa \times \bfb)$ for example.

  2. $|\bfa\times\bfb| = |\bfa|\, |\bfb| \sin \theta$, where $\theta$ is again the angle between $\bfa$ and $\bfb$. This says that the length of $\bfa\times\bfb$ equals the area of the parallelogram generated by $\bfa$ and $\bfb$. If you want to prove this, you can check that $$ |\bfa\times\bfb|^2 = |\bfa|^2 |\bfb|^2 - (\bfa\cdot \bfb)^2 = |\bfa|^2|\bfb|^2(1-\cos^2\theta) = |\bfa|^2 |\bfb|^2\sin^2\theta. $$ Verifying this requires careful computations, but it is not conceptually difficult. (Both properties are checked numerically in the sketch after this list.)
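Here is the numerical check promised above (a sketch; the vectors are arbitrary examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.cross(a, b)

# property 1: a x b is orthogonal to both a and b
assert np.isclose(np.dot(a, c), 0.0)
assert np.isclose(np.dot(b, c), 0.0)

# property 2, in the form |a x b|^2 = |a|^2 |b|^2 - (a . b)^2
assert np.isclose(np.dot(c, c),
                  np.dot(a, a) * np.dot(b, b) - np.dot(a, b)**2)
```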

Some useful algebraic properties of the cross product are:

  1. $\bfa\times \bfb = -\bfb\times \bfa$; in particular, $\bfa\times\bfa = {\bf 0}$.

  2. $(c\bfa)\times \bfb = c(\bfa\times \bfb) = \bfa\times (c\bfb)$ for every scalar $c$.

  3. $\bfa\times (\bfb + {\bf c}) = \bfa\times \bfb + \bfa\times {\bf c}$.

  4. ${\bf i}\times {\bf j} = {\bf k}$, $\quad {\bf j}\times {\bf k} = {\bf i}$, $\quad {\bf k}\times {\bf i} = {\bf j}$.

The cross product is not associative; it is easy to find examples of vectors $\bfa, \bfb, {\bf c}$ such that $$ (\bfa\times \bfb)\times {\bf c} \ne \bfa\times (\bfb \times {\bf c}). $$ For example, $({\bf i}\times {\bf i})\times {\bf j} \ne {\bf i}\times ( {\bf i}\times {\bf j})$: the left-hand side is ${\bf 0}$, since ${\bf i}\times {\bf i} = {\bf 0}$, while the right-hand side is ${\bf i}\times {\bf k} = -{\bf j}$.
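This failure of associativity is easy to confirm numerically (a short sketch):

```python
import numpy as np

i = np.array([1.0, 0.0, 0.0])
j = np.array([0.0, 1.0, 0.0])

left = np.cross(np.cross(i, i), j)     # (i x i) x j = 0 x j = 0
right = np.cross(i, np.cross(i, j))    # i x (i x j) = i x k = -j
print(left, right)                     # [0. 0. 0.] vs [ 0. -1.  0.]
```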
