$\newcommand{\R}{\mathbb R }$ $\newcommand{\bfa}{\mathbf a}$ $\newcommand{\bfb}{\mathbf b}$ $\newcommand{\bfu}{\mathbf u}$
Most of this is a review of topics that should be familiar from linear algebra.
If $\bfa = (a_1,\ldots, a_n)$ and $\bfb = (b_1,\ldots, b_n)$ are vectors in $\R^n$, then the dot product $\bfa\cdot \bfb$ is defined by $$ \bfa\cdot \bfb = a_1 b_1 +\cdots + a_n b_n $$
We use the dot product to define the Euclidean norm of $\bfa$: $$ |\bfa| := (\bfa \cdot \bfa)^{1/2} = \sqrt{ a_1^2+\cdots + a_n^2}. $$ The Euclidean norm of $\bfa$ is interpreted as the length of $\bfa$.
The Euclidean norm of $\bfb - \bfa$ is interpreted as the distance between $\bfa$ and $\bfb$, or as the length of the vector $\bfb-\bfa$.
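For readers who like to experiment, these definitions translate directly into code. Here is a minimal sketch in plain Python (the helper names `dot`, `norm`, and `dist` are ours, not from any library):

```python
import math

def dot(a, b):
    # a . b = a_1 b_1 + ... + a_n b_n
    return sum(ai * bi for ai, bi in zip(a, b))

def norm(a):
    # |a| = sqrt(a . a)
    return math.sqrt(dot(a, a))

def dist(a, b):
    # the distance between a and b is the norm of b - a
    return norm([bi - ai for ai, bi in zip(a, b)])

a = (1.0, 2.0, 2.0)
b = (4.0, 6.0, 2.0)
print(dot(a, b))   # 4 + 12 + 4 = 20.0
print(norm(a))     # sqrt(1 + 4 + 4) = 3.0
print(dist(a, b))  # b - a = (3, 4, 0), so the distance is 5.0
```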
Important facts
Cauchy's Inequality (also called the Cauchy-Schwarz inequality): \[ |\bfa\cdot \bfb| \le |\bfa|\ |\bfb|. \]
The Triangle Inequality: \[ | \bfa + \bfb | \le |\bfa| + |\bfb|. \]
The cosine formula: \[ \bfa\cdot \bfb = |\bfa|\, |\bfb| \cos \theta, \] where $\theta$ is the angle between the vectors $\bfa$ and $\bfb$. (Note that Cauchy's inequality can easily be deduced from this, since $|\cos\theta|\le 1$.)
In particular, $\bfa$ and $\bfb$ are orthogonal (the angle $\theta$ is $\pi/2$) iff $\bfa\cdot\bfb = 0$.
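The cosine formula can be run in reverse to recover the angle numerically. A sketch in plain Python (the helpers `dot`, `norm`, and `angle` are ours; the clamp is a practical guard, not part of the mathematics):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def angle(a, b):
    # theta = arccos( (a . b) / (|a| |b|) ).  Cauchy's inequality says the
    # ratio lies in [-1, 1]; clamping guards against floating-point overshoot.
    r = dot(a, b) / (norm(a) * norm(b))
    return math.acos(max(-1.0, min(1.0, r)))

print(angle((1, 0), (0, 3)))   # pi/2: the dot product is 0, so orthogonal
print(angle((1, 2), (2, 4)))   # (approximately) 0: parallel vectors
```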
If $\bfu$ is a unit vector (that is, if $|\bfu|=1$) and ${\bf v}$ is any vector, then $({\bf u}\cdot {\bf v})\,{\bf u}$ is the projection of ${\bf v}$ onto the line generated by ${\bf u}$.
This means that if we write $\bf v = v_1+v_2$, where $\bf v_1$ is parallel to $\bf u$ and $\bf v_2$ is orthogonal to $\bf u$, then $\bf v_1= (u\cdot v)u$.
Equivalently, if we form a right triangle whose hypotenuse is $\bf v$ and whose base lies along the line generated by $\bf u$ then the base is given by $\bf (u\cdot v)u$.
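The decomposition ${\bf v} = {\bf v}_1 + {\bf v}_2$ can be checked on a concrete example; here is a sketch in plain Python (the helper names are ours):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project(v, u):
    # projection of v onto the line generated by the UNIT vector u: (u . v) u
    c = dot(u, v)
    return tuple(c * ui for ui in u)

u = (1.0, 0.0, 0.0)    # a unit vector along the x-axis
v = (3.0, 4.0, 5.0)
v1 = project(v, u)     # the component of v parallel to u
v2 = tuple(vi - wi for vi, wi in zip(v, v1))  # the component orthogonal to u
print(v1)              # (3.0, 0.0, 0.0)
print(dot(v2, u))      # 0.0, confirming v2 is orthogonal to u
```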
In $n$-dimensional Euclidean space we often write $\bf e_j$ to denote the unit vector in the $j$th coordinate direction. For example, \[ {\bf e}_1 = (1,0,\ldots, 0), \quad {\bf e}_2 = (0,1,\ldots, 0), \quad \ldots\quad {\bf e}_n = (0 ,0,\ldots, 1). \]
When $n=3$ we often write ${\bf i}, {\bf j}, {\bf k}$ instead of ${\bf e}_1,{\bf e}_2, {\bf e}_3$, so that \[ {\bf i} = (1,0,0) , \qquad {\bf j} = (0,1,0) , \qquad {\bf k} = (0,0,1) . \]
If we are discussing a vector $\bfa$ in $\R^n$, then our default convention is that $a_j$ denotes the $j$th component, that is, $a_j = \bfa \cdot {\bf e}_j$.
In some situations, it is important to distinguish between points and vectors, since these may have very different mathematical or physical interpretations. From this perspective,
a vector normally represents something, such as a force or a velocity, that is characterized by a direction and a magnitude. A vector is often pictured as an arrow.
a point normally represents a position. It is often pictured as, well, a point.
But in many situations, the distinction between points and vectors can be ignored with no harm.
This is normally the case in MAT237. Thus, for us, vector
will mean either a vector (narrowly understood) or a point.
We will however tend to use different letters to indicate what we are thinking of. We will normally write \[ {\bf v} = (v_1, \ldots, v_n), \qquad {\bf u} = (u_1, \ldots, u_n) \] when we have in mind a direction-and-magnitude, whereas we will typically write \[ {\bf x} = (x_1, \ldots, x_n), \qquad {\bf y} = (y_1, \ldots, y_n), \qquad {\bf z} = (z_1, \ldots, z_n) \] when we are thinking of a position. We will also often write \[ {\bf x} = (x,y) \ \mbox{ for a point in }\R^2,\qquad {\bf x} = (x,y,z) \ \mbox{ for a point in }\R^3. \] Thus, a boldface $\bf x$ denotes an element of $\R^n$, whereas a non-boldface $x$ denotes a component of a vector. (Different lecturers may have different ways to convey the distinction between $\bf x$ and $x$ on the blackboard. The precise choice does not matter as long as it is reasonable and followed consistently.)
The vectors discussed above look like row vectors, as we have written them. However, our default rule for this class is that every vector is a column vector.
Thus, if we write ${\bf x} = (x,y,z)$ for example, it should be viewed as a convenient abbreviation for the column vector $(x,y,z)^T = \left( \begin{array}{c} x \\ y \\ z \end{array}\right)$. Of course, we will sometimes not use this abbreviation and write vectors as columns.
There is a good reason for this rule. For example, we
may want to define linear functions, like
\begin{equation}\label{0.1.linear}
{\bf f}:\R^n\to \R^m, \qquad {\bf f}({\bf x}) = A {\bf x}
\end{equation}
where $A$ is an $m\times n$ matrix.
This is correct if $\bf x$ and ${\bf f}({\bf x})$ are column vectors. If they were row vectors then we would have to write ${\bf f}({\bf x}) = {\bf x} A$, which looks backward. More complicated expressions (for example, those involving compositions of functions) look even more backward if everything is written in terms of row vectors, but look fine if we use column vectors.
When we encounter row vectors, they will often arise as $m\times n$ matrices in the special case when $m=1$. For example, if I want to write a linear function ${\bf f}:\R^n\to \R$ in the general form \eqref{0.1.linear}, then $A$ would have to be a $1\times n$ matrix, in other words a row vector.
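A linear function of the form \eqref{0.1.linear} is easy to realize concretely. A sketch in plain Python (the helper `matvec` is ours; rows of the matrix are stored as lists):

```python
def matvec(A, x):
    # f(x) = A x, where A is an m-by-n matrix (a list of m rows)
    # and x is a column vector in R^n written as an n-tuple.
    return tuple(sum(row[j] * x[j] for j in range(len(x))) for row in A)

A = [[1, 2, 3],
     [4, 5, 6]]          # a 2x3 matrix, so f maps R^3 to R^2
x = (1, 0, -1)
print(matvec(A, x))       # (1 - 3, 4 - 6) = (-2, -2)

# the m = 1 case: a 1x3 matrix (a row vector) gives f : R^3 -> R
B = [[1, 1, 1]]
print(matvec(B, x))       # (0,)
```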
A subspace of Euclidean space is a set $V$ such that if $\bfa, \bfb \in V$ then $c_1\bfa + c_2\bfb \in V$ for all real numbers $c_1,c_2$. Note that any (nonempty) subspace must contain the origin, since we can always choose $c_1=c_2=0$.
Suppose that $A$ is an $m\times n$ matrix.
If the $n$ columns of $A$ are linearly independent (which can only happen if $n \le m$), then $$ \{ A {\bf x} : {\bf x}\in \R^n \} \mbox{ is an $n$-dimensional subspace of }\R^m. $$ In fact it is the subspace consisting of all linear combinations of the columns of $A$. When $m=3$ and $n=2$, for example, one can understand geometrically why this space must be a $2$-dimensional plane through the origin in $\R^3$. The principle is the same in the general case.
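The "linear combinations of columns" description can be verified on a small example. A sketch in plain Python (the helpers `matvec` and `column` are ours):

```python
def matvec(A, x):
    return tuple(sum(row[j] * x[j] for j in range(len(x))) for row in A)

def column(A, j):
    return tuple(row[j] for row in A)

# m = 3, n = 2: two independent columns span a plane through 0 in R^3
A = [[1, 0],
     [0, 1],
     [1, 1]]
x = (2, 3)
# A x equals x_1 * (first column) + x_2 * (second column)
combo = tuple(x[0] * c1 + x[1] * c2
              for c1, c2 in zip(column(A, 0), column(A, 1)))
print(matvec(A, x))  # (2, 3, 5)
print(combo)         # (2, 3, 5): the same vector
```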
If the $m$ rows of $A$ are linearly independent (which can only happen if $m\le n$), then
$$
\{ {\bf x}\in \R^n : A {\bf x}= {\bf 0} \} \mbox{ is an $(n-m)$-dimensional subspace of }\R^n.
$$
It is the subspace of vectors that are orthogonal to all the rows of $A$.
When $m=2$ and $n=3$, for example, one can see, with the mind's eye, why this space must be a $1$-dimensional line through the origin in $\R^3$. The principle is the same in the general case.
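The $m=2$, $n=3$ case can be checked concretely; a sketch in plain Python (the matrix $A$ and helper `matvec` are ours). With the rows below, every point on the line $t\,(1,1,1)$ solves $A{\bf x}={\bf 0}$:

```python
def matvec(A, x):
    return tuple(sum(row[j] * x[j] for j in range(len(x))) for row in A)

# m = 2, n = 3: the two rows are linearly independent, so the solution
# set of A x = 0 is a (3 - 2) = 1-dimensional line through 0 in R^3.
A = [[1, 0, -1],
     [0, 1, -1]]
# (1, 1, 1) is orthogonal to both rows, and so is every multiple of it:
for t in (-2, 0, 3):
    print(matvec(A, (t, t, t)))   # (0, 0) each time
```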
In $3$-dimensional Euclidean space only, there is a second and very different way of multiplying two vectors. The cross product of vectors $\bfa$ and $\bfb$ is the vector denoted $\bfa\times \bfb$, defined as \[ \bfa\times \bfb = (a_2b_3 - a_3b_2,\ a_3b_1 - a_1b_3,\ a_1b_2 - a_2b_1). \] We emphasize that the cross product of two vectors is a vector, whereas the dot product of two vectors is a scalar.
One can check by elementary computations that $\bfa\times \bfb$ has the following properties:
It is orthogonal to both $\bfa$ and $\bfb$.
You can check this directly by computing $\bfa \cdot (\bfa \times \bfb)$, for example.
$|\bfa\times\bfb| = |\bfa|\, |\bfb| \sin \theta$, where $\theta$ is the angle between $\bfa$ and $\bfb$. This says that the length of $\bfa\times\bfb$ equals the area of the parallelogram generated by $\bfa$ and $\bfb$. If you want to prove this, you can check that $$ |\bfa\times\bfb|^2 = |\bfa|^2 |\bfb|^2 - (\bfa\cdot \bfb)^2 = |\bfa|^2|\bfb|^2(1-\cos^2\theta) = |\bfa|^2 |\bfb|^2\sin^2\theta. $$ Verifying the first equality requires careful computations, but it is not conceptually difficult.
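Both properties, orthogonality and the identity $|\bfa\times\bfb|^2 = |\bfa|^2|\bfb|^2 - (\bfa\cdot\bfb)^2$, can be checked numerically on an example. A sketch in plain Python (the helpers `dot` and `cross` are ours, implementing the definitions above):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # (a_2 b_3 - a_3 b_2, a_3 b_1 - a_1 b_3, a_1 b_2 - a_2 b_1)
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

a = (1.0, 2.0, 3.0)
b = (4.0, 5.0, 6.0)
c = cross(a, b)
print(c)                       # (-3.0, 6.0, -3.0)
print(dot(a, c), dot(b, c))    # 0.0 0.0: orthogonal to both a and b
# |a x b|^2 = |a|^2 |b|^2 - (a . b)^2
print(dot(c, c))                              # 54.0
print(dot(a, a) * dot(b, b) - dot(a, b)**2)   # 14*77 - 32^2 = 54.0
```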
Some useful algebraic properties of the cross product: it is anticommutative, $\bfa\times\bfb = -\bfb\times\bfa$, and it is linear in each argument.
The cross product is not associative; it is easy to find examples of vectors $\bfa, \bfb, {\bf c}$ such that $$ (\bfa\times \bfb)\times {\bf c} \ne \bfa\times (\bfb \times {\bf c}). $$ For example, $({\bf i}\times {\bf i})\times {\bf j} \ne {\bf i}\times ( {\bf i}\times {\bf j})$.
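The example with ${\bf i}$ and ${\bf j}$ can be worked out by machine as well; a sketch in plain Python (the helper `cross` is ours):

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

i, j = (1, 0, 0), (0, 1, 0)
print(cross(cross(i, i), j))  # (0, 0, 0), since i x i = 0
print(cross(i, cross(i, j)))  # (0, -1, 0), since i x j = k and i x k = -j
```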