
We introduce partial derivatives and the gradient vector.

Given a function $F:\R ^n \to \R$, it is often useful to differentiate with respect to a single variable while holding the other variables constant. One way to think of a function of several variables is as a “machine” with lots of knobs: to understand the machine, hold all but one of the knobs constant and see what happens when you “wiggle” the remaining knob. As an explicit example, consider a surface $z=F(x,y)$. Here $F$ is our “machine” and the variables $x$ and $y$ are the “knobs.” Fixing $y=2$ restricts our attention to the points on the surface where the $y$-value is $2$. These points form a curve, and we can differentiate this curve purely with respect to $x$. In a similar way, we could fix $x$ and differentiate with respect to $y$.
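The “wiggle one knob” idea can be sketched numerically: hold one variable fixed and take a small difference quotient in the other. This is a minimal sketch using the function $F(x,y)=x^2+2y^2$ from one of the examples below; the helper names are our own.

```python
def F(x, y):
    return x**2 + 2 * y**2

def partial_x(F, x, y, h=1e-6):
    # Central difference in x; y is held constant (the "fixed knob").
    return (F(x + h, y) - F(x - h, y)) / (2 * h)

def partial_y(F, x, y, h=1e-6):
    # Central difference in y; x is held constant.
    return (F(x, y + h) - F(x, y - h)) / (2 * h)

print(partial_x(F, 1.0, 2.0))  # exact value is 2x = 2
print(partial_y(F, 1.0, 2.0))  # exact value is 4y = 8
```

Shrinking $h$ makes the difference quotients approach the true partial derivatives, mirroring the limit definition.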

The following interactive lets you see what’s going on with partial derivatives:

Let $F(x,y) = x^2+2y^2$. Compute:
Compute:
Let $F(x,y) = x^2y + 2x+y^3$. Compute:
Compute:
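If you want to check computations like the ones above, a computer algebra system can differentiate with respect to one variable while treating the others as constants. This sketch uses the sympy library (one choice of CAS among many) on the two example functions:

```python
import sympy as sp

x, y = sp.symbols('x y')

F1 = x**2 + 2 * y**2
print(sp.diff(F1, x))  # partial with respect to x: 2*x
print(sp.diff(F1, y))  # partial with respect to y: 4*y

F2 = x**2 * y + 2 * x + y**3
print(sp.diff(F2, x))  # 2*x*y + 2
print(sp.diff(F2, y))  # x**2 + 3*y**2
```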

We have shown how to compute a partial derivative, but it may still not be clear what a partial derivative means. Given $z=F(x,y)$, $F^{(1,0)}(x,y)$ measures the rate at which $z$ changes as only $x$ varies: $y$ is held constant.

Imagine standing in a rolling meadow, then beginning to walk due east. Depending on your location, you might walk up, sharply down, or perhaps not change elevation at all. This is similar to measuring $\pp [z]{x}$: you are moving only east (in the $x$-direction) and not north/south at all. Going back to your original location, imagine now walking due north (in the $y$-direction). Perhaps walking due north does not change your elevation at all. This is analogous to $\pp [z]{y}=0$: $z$ does not change with respect to $y$. We can see that $\pp [z]{x}$ and $\pp [z]{y}$ do not have to be the same, or even similar, as it is easy to imagine circumstances where walking east means you walk downhill, while walking north means you walk uphill. The next example helps us visualize this.

Whenever we do a computation in mathematics, we should ask ourselves, “What does this mean?”

Estimating partial derivatives

Functions of several variables, especially ones that map $\R ^2\to \R$, can be described by a table of values or by level curves. In either case we can estimate partial derivatives by comparing nearby values. Let’s do an example to make this clearer.

Let $F:\R ^2\to \R$ be a differentiable function described by the following table of values: Estimate $F^{(0,1)}(2,6)$.
Work as we did in the example above, finding two estimates and taking their average.
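The averaging technique can be made concrete in code. Since the table from the example is not reproduced here, the grid values below are hypothetical stand-ins; the method (average the forward and backward difference quotients in $y$) is what matters.

```python
# Hypothetical tabulated values of F at x = 2, for y = 4, 6, 8.
table = {(2, 4): 11.0, (2, 6): 17.0, (2, 8): 25.0}
dy = 2  # spacing between tabulated y-values

# Forward and backward difference quotients in y at the point (2, 6).
forward  = (table[(2, 8)] - table[(2, 6)]) / dy   # (25 - 17)/2 = 4.0
backward = (table[(2, 6)] - table[(2, 4)]) / dy   # (17 - 11)/2 = 3.0

# Average the two one-sided estimates.
estimate = (forward + backward) / 2               # 3.5
print(estimate)
```

Averaging the two one-sided estimates is equivalent to a central difference across the table spacing, which is generally more accurate than either one-sided estimate alone.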

We can also estimate partial derivatives by looking at level curves.

Let $F:\R ^2\to \R$ be described by the level curves below: The height of the level curve is marked on the curve, and we are given a point $(4,2)$. Estimate $F^{(0,1)}(4,2)$.
Work as we did in the example above, finding two estimates and taking their average.

Combining partial derivatives

A function $f:\R \to \R$ has only one second derivative. However, functions $F:\R ^2\to \R$ have $4$ second partial derivatives, and functions $F:\R ^3\to \R$ have $9$ second partial derivatives! Don’t run off yet, things get better.

Consider: Find six first and second partial derivatives.
Notice how above $\frac {\partial ^2F}{\partial y\partial x}=\frac {\partial ^2F}{\partial x\partial y}$. The next theorem states that this is not a coincidence.

Finding $\frac {\partial ^2F}{\partial y\partial x}$ and $\frac {\partial ^2F}{\partial x\partial y}$ independently and comparing the results provides a convenient way of checking our work.
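This check is easy to automate. The sketch below uses sympy (an assumption; any CAS works) on a stand-in function, not necessarily the one from the example, and computes the two mixed second partials in both orders:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x**2 * y + 2 * x + y**3  # a stand-in example function

Fxy = sp.diff(F, x, y)  # differentiate in x, then in y
Fyx = sp.diff(F, y, x)  # differentiate in y, then in x
print(Fxy, Fyx)         # both equal 2*x
```

That the two orders agree is exactly the content of the theorem above.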

Given a function $F:\R ^n\to \R$, we often want to work with all of its first partial derivatives simultaneously. In this case, we work with the vector: As we will see, for functions of several variables, this vector plays the role that the derivative played for functions of a single variable. This vector is called the gradient vector.

The upside-down triangle in the notation for the gradient is sometimes called “del.” It is also known as “nabla.” You can think of $\grad$ as the vector: and hence when one writes $\grad F$, you are literally distributing the $F$ across the vector, just as a scalar distributes across the components of a vector.

Try your hand at some casual computations.

Let $F(x,y) = \sin (x)\cos (y)$, compute:
Above, note that $\grad F(x,y)$ is a vector whose components are functions of $x$ and $y$; hence it is a vector-valued function. We can evaluate such a function at points in its domain. For instance, if $\vec {p}= \vector {\pi /3,\pi /3}$ we compute:
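The evaluation above can be checked symbolically. This sketch (again using sympy, our choice of CAS) builds $\grad F$ for $F(x,y)=\sin (x)\cos (y)$ and substitutes the point $(\pi /3,\pi /3)$:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = sp.sin(x) * sp.cos(y)

# The gradient as a list of first partials.
grad_F = [sp.diff(F, x), sp.diff(F, y)]
print(grad_F)  # [cos(x)*cos(y), -sin(x)*sin(y)]

# Evaluate the vector-valued function at p = (pi/3, pi/3).
p = {x: sp.pi / 3, y: sp.pi / 3}
print([g.subs(p) for g in grad_F])  # [1/4, -3/4]
```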

And now in three variables.

Let $F(x,y,z) = ze^{-7xy}$, compute:
Let $\vec {p}= \vector {1,0,1/7}$. Compute:
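The same symbolic check works in three variables. Here is a sympy sketch for $F(x,y,z) = ze^{-7xy}$, evaluated at $\vec {p} = \vector {1,0,1/7}$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = z * sp.exp(-7 * x * y)

# Gradient: one partial per variable.
grad_F = [sp.diff(F, v) for v in (x, y, z)]

# Substitute the point (1, 0, 1/7); note exp(-7*x*y) becomes exp(0) = 1.
p = {x: 1, y: 0, z: sp.Rational(1, 7)}
print([g.subs(p) for g in grad_F])  # [0, -1, 1]
```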

This is just your first taste of the gradient vector. Much more will be coming soon.