$\renewcommand{\Re}{\operatorname{Re}}$ $\renewcommand{\Im}{\operatorname{Im}}$ $\newcommand{\erf}{\operatorname{erf}}$ $\newcommand{\dag}{\dagger}$ $\newcommand{\const}{\mathrm{const}}$
The aim of this section is to introduce and motivate partial differential equations (PDEs). It also places the scope of studies in APM346 within the vast universe of mathematics.
A partial differential equation (PDE) is an equation involving partial derivatives. This is not so informative, so let's break it down a bit.
An ordinary differential equation (ODE) is an equation for a function of one independent variable which involves the independent variable, the function, and derivatives of the function: \begin{equation*} F\bigl( t, u(t), u'(t), u^{(2)}(t), u^{(3)}(t), \ldots, u^{(m)}(t)\bigr) = 0. \end{equation*} This is an example of an ODE of order $m$, where $m$ is the highest order of a derivative appearing in the equation. Solving an equation like this on an interval $t\in [0,T]$ means finding a function $t \mapsto u(t) \in \mathbb{R}$ with the property that $u$ and its derivatives intertwine in such a way that this equation is true for all values of $t \in [0,T]$. The problem can be enlarged by replacing the real-valued $u$ by a vector-valued one $\mathbf{u}(t)= (u_1 (t), u_2 (t), \ldots, u_N (t))$. In this case we usually speak of a system of ODEs.
Even in this situation, the challenge is to find functions depending upon exactly one variable which, together with their derivatives, satisfy the equation.
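To make this concrete, consider the equation \begin{equation*} u''(t) + u(t) = 0. \end{equation*} This is an ODE of order $2$, and one can check directly that $u(t) = A\cos (t) + B\sin (t)$ satisfies it on any interval for any choice of the constants $A$ and $B$.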
When you have a function that depends upon several variables, you can differentiate with respect to any one of them while holding the others constant. This leads to the notion of partial derivatives. As an example, consider a function of $n$ real variables taking values in the reals: \begin{equation*}u: \mathbb{R} ^n \to \mathbb{R}. \end{equation*} When $n=2$ we sometimes visualize such a function by considering its graph, viewed as a surface in $\mathbb{R}^3$ given by the collection of points \begin{equation*} \{ (x,y,z) \in \mathbb{R}^3\colon z = u(x,y) \}. \end{equation*} We can calculate the derivative with respect to $x$ while holding $y$ fixed. This leads to $u_x$, also written as $\partial_x u$, $\frac{\partial u}{\partial x}$, or $\frac{\partial\ }{\partial x}u$. Similarly, we can hold $x$ fixed and differentiate with respect to $y$.
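For example, if $u(x,y) = x^2 y + \sin (y)$, then \begin{equation*} u_x = 2xy, \qquad u_y = x^2 + \cos (y), \end{equation*} where in each case the remaining variable is treated as a constant during differentiation.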
A partial differential equation is an equation for a function of more than one independent variable which involves the independent variables, the function, and partial derivatives of the function: \begin{equation*} F\bigl(x,y, u(x,y), u_x (x,y), u_y (x,y), u_{xx} (x,y), u_{xy} (x,y), u_{yx} (x,y), u_{yy} (x,y)\bigr) = 0. \end{equation*} This is an example of a PDE of order $2$. Solving an equation like this means finding a function $(x,y) \mapsto u(x,y)$ with the property that $u$ and its partial derivatives intertwine to satisfy the equation.
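For example, the Laplace equation \begin{equation*} u_{xx} + u_{yy} = 0 \end{equation*} is a PDE of order $2$, and one can check directly that $u(x,y) = x^2 - y^2$ and $u(x,y) = xy$ are among its (many) solutions.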
Similarly to the ODE case, this problem can be enlarged by replacing the real-valued $u$ by a vector-valued one $\mathbf{u}(x,y)= (u_1 (x,y), u_2 (x,y), \ldots, u_N (x,y))$. In this case we usually speak of a system of PDEs.
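For example, the Cauchy-Riemann equations \begin{equation*} u_x = v_y, \qquad u_y = -v_x \end{equation*} form a system of two first-order PDEs for the pair $(u,v)$; they are satisfied by the real and imaginary parts of any holomorphic function of $x+iy$.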
PDEs are often referred to as Equations of Mathematical Physics (or simply Mathematical Physics, though this is not quite accurate, since Mathematical Physics is now a separate field of mathematics), because many PDEs come from different domains of physics (acoustics, optics, elasticity, hydro- and aerodynamics, electromagnetism, quantum mechanics, seismology, etc.).
However, PDEs appear in other fields of science as well (such as quantum chemistry and chemical kinetics); some PDEs come from economics and financial mathematics, or from computer science.
Many PDEs originate in other fields of mathematics.
Remark 1. Some of the classical equations of mathematical physics are actually systems of PDEs. The expression $\Delta := \partial_{x}^2 + \partial_{y}^2+\partial_{z}^2$ (on $\mathbb{R}^3$) is called the Laplacian.
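For example, $u(x,y,z) = x^2 + y^2 - 2z^2$ satisfies $\Delta u = 2 + 2 - 4 = 0$, i.e. it solves the Laplace equation $\Delta u = 0$ in $\mathbb{R}^3$.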