$\newenvironment {prompt}{}{} \newcommand {\ungraded }{} \newcommand {\todo }{} \newcommand {\oiint }{{\large \bigcirc }\kern -1.56em\iint } \newcommand {\mooculus }{\textsf {\textbf {MOOC}\textnormal {\textsf {ULUS}}}} \newcommand {\npnoround }{\nprounddigits {-1}} \newcommand {\npnoroundexp }{\nproundexpdigits {-1}} \newcommand {\npunitcommand }{\ensuremath {\mathrm {#1}}} \newcommand {\RR }{\mathbb R} \newcommand {\R }{\mathbb R} \newcommand {\N }{\mathbb N} \newcommand {\Z }{\mathbb Z} \newcommand {\sagemath }{\textsf {SageMath}} \newcommand {\d }{\mathop {}\!d} \newcommand {\l }{\ell } \newcommand {\ddx }{\frac {d}{\d x}} \newcommand {\zeroOverZero }{\ensuremath {\boldsymbol {\tfrac {0}{0}}}} \newcommand {\inftyOverInfty }{\ensuremath {\boldsymbol {\tfrac {\infty }{\infty }}}} \newcommand {\zeroOverInfty }{\ensuremath {\boldsymbol {\tfrac {0}{\infty }}}} \newcommand {\zeroTimesInfty }{\ensuremath {\small \boldsymbol {0\cdot \infty }}} \newcommand {\inftyMinusInfty }{\ensuremath {\small \boldsymbol {\infty -\infty }}} \newcommand {\oneToInfty }{\ensuremath {\boldsymbol {1^\infty }}} \newcommand {\zeroToZero }{\ensuremath {\boldsymbol {0^0}}} \newcommand {\inftyToZero }{\ensuremath {\boldsymbol {\infty ^0}}} \newcommand {\numOverZero }{\ensuremath {\boldsymbol {\tfrac {\#}{0}}}} \newcommand {\dfn }{\textbf } \newcommand {\unit }{\mathop {}\!\mathrm } \newcommand {\eval }{\bigg [ #1 \bigg ]} \newcommand {\seq }{\left ( #1 \right )} \newcommand {\epsilon }{\varepsilon } \newcommand {\phi }{\varphi } \newcommand {\iff }{\Leftrightarrow } \DeclareMathOperator {\arccot }{arccot} \DeclareMathOperator {\arcsec }{arcsec} \DeclareMathOperator {\arccsc }{arccsc} \DeclareMathOperator {\si }{Si} \DeclareMathOperator {\scal }{scal} \DeclareMathOperator {\sign }{sign} \newcommand {\arrowvec }{{\overset {\rightharpoonup }{#1}}} \newcommand {\vec }{{\overset {\boldsymbol {\rightharpoonup }}{\mathbf {#1}}}\hspace {0in}} \newcommand {\point }{\left (#1\right )} \newcommand {\pt 
}{\mathbf {#1}} \newcommand {\Lim }{\lim _{\point {#1} \to \point {#2}}} \DeclareMathOperator {\proj }{\mathbf {proj}} \newcommand {\veci }{{\boldsymbol {\hat {\imath }}}} \newcommand {\vecj }{{\boldsymbol {\hat {\jmath }}}} \newcommand {\veck }{{\boldsymbol {\hat {k}}}} \newcommand {\vecl }{\vec {\boldsymbol {\l }}} \newcommand {\uvec }{\mathbf {\hat {#1}}} \newcommand {\utan }{\mathbf {\hat {t}}} \newcommand {\unormal }{\mathbf {\hat {n}}} \newcommand {\ubinormal }{\mathbf {\hat {b}}} \newcommand {\dotp }{\bullet } \newcommand {\cross }{\boldsymbol \times } \newcommand {\grad }{\boldsymbol \nabla } \newcommand {\divergence }{\grad \dotp } \newcommand {\curl }{\grad \cross } \newcommand {\lto }{\mathop {\longrightarrow \,}\limits } \newcommand {\bar }{\overline } \newcommand {\surfaceColor }{violet} \newcommand {\surfaceColorTwo }{redyellow} \newcommand {\sliceColor }{greenyellow} \newcommand {\vector }{\left \langle #1\right \rangle } \newcommand {\sectionOutcomes }{} \newcommand {\HyperFirstAtBeginDocument }{\AtBeginDocument }$

We use the gradient to approximate values for functions of several variables.

We’ve studied differentials in our previous courses: If $f$ is differentiable, then $$\d f = f'(x)\,\d x.$$ Here $\d f$ and $\d x$ are two new variables that have been “cooked-up” to ensure that this equation holds exactly. It is worthwhile to compare and contrast $\Delta x$ and $\Delta f$ with $\d x$ and $\d f$. The values $\Delta x$ and $\Delta f$ are the change in $x$ and the change in $f$ when $x$ and $f$ are related. On the other hand, $\d x$ may take any value, but $\d f$ is not necessarily equal to $\Delta f$. Instead, $\d f$ is the value that satisfies the equation $\d f = f'(x)\,\d x$. When $\d x$ is small, $\d f\approx \Delta f$, the change in $f$ resulting from the change in $x$. The key idea is that as $\d x\to 0$, the difference $\Delta f - \d f$ goes to $0$. Another way of stating this is: as $\d x$ goes to $0$, the error in approximating $\Delta f$ with $\d f$ goes to $0$.
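As a quick numerical sanity check (a sketch in Python; the choice $f(x)=x^2$ and the helper names are our own, not from the text), we can watch the error $\Delta f - \d f$ shrink as $\d x$ shrinks:

```python
# As dx shrinks, df = f'(x) dx approaches Delta f = f(x + dx) - f(x).
# Illustration with f(x) = x**2, so f'(x) = 2x; here the error is dx**2.
def f(x):
    return x**2

def fprime(x):
    return 2 * x

x = 3.0
for dx in (0.1, 0.01, 0.001):
    df = fprime(x) * dx            # differential approximation
    delta_f = f(x + dx) - f(x)     # actual change in f
    print(dx, df, delta_f, delta_f - df)
```

Each tenfold decrease in $\d x$ shrinks the error $\Delta f-\d f$ a hundredfold, since the error here is exactly $(\d x)^2$.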

Let’s extend this idea to functions of two variables. Consider $F(x,y)$, and let $\Delta x = \d x$ and $\Delta y=\d y$ represent changes in $x$ and $y$, respectively. Now $$\Delta F = F(x+\d x,\, y+\d y) - F(x,y)$$ is the change in $F$ over the change in $x$ and $y$. Recalling that $F^{(1,0)}$ and $F^{(0,1)}$ give the instantaneous rates of change of $F$ in the $x$- and $y$-directions respectively, we can approximate $\Delta F$ by $$\d F = F^{(1,0)}(x,y)\,\d x + F^{(0,1)}(x,y)\,\d y.$$ In words, this says:

The total change in $F$ is approximately the change caused by changing $x$ plus the change caused by changing $y$.

Setting $\vec {x}=\vector {x,y}$, we can rewrite this in terms of the dot product: $$\d F = \grad F(x,y) \dotp \vector {\d x, \d y}.$$ This leads us to our next definition.

Let $F(x,y) = x^4e^{3y}$. Find $\d F$.
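A worked solution, computing the partial derivatives directly: $$F^{(1,0)}(x,y) = 4x^3e^{3y} \qquad \text{and} \qquad F^{(0,1)}(x,y) = 3x^4e^{3y},$$ so $$\d F = 4x^3e^{3y}\,\d x + 3x^4e^{3y}\,\d y.$$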

We can approximate $\Delta F$ with $\d F$, but as with all approximations, there is error involved. Approximating $\Delta F$ with $\d F$ relies on the fact that if a function is differentiable, then we can “zoom in” on the surface until, near a point, the surface looks like a plane. As we’ve learned, this property is precisely differentiability: quite literally, we can use differentials to describe the surface.

If we believe that discrete data has been gathered from a function that is differentiable, it makes sense to estimate values of the function using differentials.

### Approximating with the total differential

Suppose you want to approximate $F(x,y)=\sqrt {x}\sin (y)$ at the point $(4.1,0.8)$. Without knowledge of calculus, your approximation might go like this:

We try to find numbers near $4.1$ and $0.8$ where $F(x,y)=\sqrt {x}\sin (y)$ is easy to evaluate. For example, we know that $\sqrt {4}= 2$, so instead of looking at $x=4.1$, we’ll use $x=4$. Also, we know that $\sin (\pi /4)= \sqrt {2}/2$, and since $\pi /4\approx 0.8$, we’ll use $y=\pi /4$ in our approximation. Hence $$F(4.1,0.8)\approx F(4,\pi /4) = \sqrt {4}\sin (\pi /4) = 2\cdot \frac {\sqrt {2}}{2} = \sqrt {2}\approx 1.414.$$

Without calculus (or some other insight), this is the best approximation we could reasonably come up with. The total differential gives us a way of adjusting this initial approximation to hopefully get a more accurate answer.
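To see the adjustment in action (a numerical sketch in Python using the standard `math` module; the names `F`, `Fx`, and `Fy` are ours), we compute $\d F$ at the easy point $(4,\pi /4)$ and add it to the initial approximation:

```python
import math

def F(x, y):
    return math.sqrt(x) * math.sin(y)

# Partial derivatives of F(x, y) = sqrt(x) * sin(y)
def Fx(x, y):
    return math.sin(y) / (2 * math.sqrt(x))

def Fy(x, y):
    return math.sqrt(x) * math.cos(y)

# Base point where F is easy to evaluate, and the changes dx, dy
x0, y0 = 4.0, math.pi / 4
dx, dy = 4.1 - x0, 0.8 - y0

# Total differential: dF = Fx*dx + Fy*dy
dF = Fx(x0, y0) * dx + Fy(x0, y0) * dy

initial = F(x0, y0)        # initial approximation, sqrt(2) ~ 1.41421
adjusted = initial + dF    # differential-adjusted approximation
exact = F(4.1, 0.8)        # value we are approximating
print(initial, adjusted, exact)
```

The adjusted value agrees with $F(4.1,0.8)$ to about five decimal places, far better than the initial approximation of $\sqrt {2}$.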

The point of the previous example was not to develop an approximation method for known functions. After all, we can very easily compute $F(4.1,0.8)$ using readily available technology. Rather, it serves to illustrate how well this method of approximation works, and to reinforce the following concept:

New position = old position $+$ amount of change,
New position $\approx$ old position + approximate amount of change.

In the previous example, we could easily compute $F(4,\pi /4)$ and then approximate the change in $F$ in moving to $(4.1,0.8)$, letting us approximate the new value of $F$.

It may be surprising to learn that it is not uncommon to know the values of $F$ and $\grad F$ at a particular point without actually knowing a formula for $F$. The total differential gives a good method of approximating $F$ by looking at nearby points.

### Error analysis

The total differential gives an approximation of the change in $F$ given small changes in $x$ and $y$. We can use this to approximate error propagation; that is, if the input is a little off from what it should be, how far from correct will the output be? We demonstrate this in an example.

The previous example showed that the volume of a particular tank was more sensitive to changes in radius than in height. Keep in mind that this analysis only applies to a tank of the dimensions given in the problem. A tank with a height of $1$ft and radius of $5$ft would be more sensitive to changes in height than in radius.
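We can verify this last claim with a short sketch in Python (the $0.1\unit {ft}$ measurement error and the function name `dV` are our own choices for illustration), using the total differential of $V=\pi r^2 h$:

```python
import math

# Sensitivity of a cylindrical tank's volume V = pi r^2 h to small
# measurement errors, via the total differential:
#   dV = (2 pi r h) dr + (pi r^2) dh
def dV(r, h, dr, dh):
    return 2 * math.pi * r * h * dr + math.pi * r**2 * dh

# Tank with height 1 ft and radius 5 ft, as in the text
r, h = 5.0, 1.0
err = 0.1  # a 0.1 ft error in each measurement, considered separately

from_radius = dV(r, h, err, 0)   # 2*pi*5*1*0.1 = pi     ~ 3.14 ft^3
from_height = dV(r, h, 0, err)   # pi*25*0.1    = 2.5*pi ~ 7.85 ft^3
print(from_radius, from_height)
```

For this squat tank the height term dominates, confirming that such a tank is more sensitive to changes in height than in radius.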

One could make a chart of small changes in radius and height and compute the exact change in volume for each. While this provides exact numbers, it does not give as much insight as the error analysis using the total differential.