
We use a method called “linear approximation” to estimate the value of a (complicated) function at a given point.

Given a function, “linear approximation” is a fancy phrase for something you already know:

The line tangent to the graph of a function at a point is very close to the graph of the function near that point.

This tangent line is the graph of a linear function, called the linear approximation: $L(x)=f(a)+f'(a)(x-a)$.

Note that the graph of $L$ is just the tangent line to the graph of $f$ at $x=a$.

A linear approximation of $f$ is a “good” approximation as long as $x$ is “not too far” from $a$. If one “zooms in” on the graph of $f$ sufficiently, then the graphs of $f$ and $L$ are nearly indistinguishable.

As a first example, we will see how linear approximations allow us to approximate “difficult” computations.

With modern calculators and computing software, linear approximations may seem unnecessary. In fact, they are quite useful. In cases requiring an explicit numerical approximation, they allow us to get a quick rough estimate that can serve as a “reality check” on a more complex calculation. In some complex calculations involving functions, the linear approximation makes an otherwise intractable calculation possible without serious loss of accuracy.
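To see what such a quick estimate looks like in practice, here is a small numeric sketch (the function $\sqrt{x}$ and the point chosen are our own illustration, not from the text):

```python
import math

def linear_approx(f, fprime, a, x):
    """Evaluate L(x) = f(a) + f'(a)(x - a), the tangent line to f at a."""
    return f(a) + fprime(a) * (x - a)

# Approximate sqrt(4.1) from the nearby point a = 4, where the exact
# values f(4) = 2 and f'(4) = 1/4 are easy to compute by hand.
estimate = linear_approx(math.sqrt, lambda t: 1 / (2 * math.sqrt(t)), 4, 4.1)
print(estimate)        # ≈ 2.025
print(math.sqrt(4.1))  # ≈ 2.0248...
```

The estimate $2.025$ agrees with the true value to three decimal places, using only arithmetic one could do by hand.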

### Differentials

The graph of a function $f$ and the graph of $L$, the linear approximation of $f$ at $a$, are shown in the figure below. Also, two quantities, $\d x$ and $\d f$, and a point $P$ are marked in the figure. Look carefully at the figure when answering the questions below.

Select all the correct expressions for the quantity $\d x$.
Hint: you can see that $x=a+\d x$.

• $\d x=f(x)-f(a)$
• $\d x=f(x)-L(x)$
• $\d x=x-a$
• $\d x=L(x)-f(a)$
• $\d x=L(x)-L(a)$

Select all the correct expressions for the quantity $\d f$.
Hint: you can see that $L(x)=f(a)+\d f$. Recall: $L(a)=f(a)$.

• $\d f=f(x)-f(a)$
• $\d f=f(x)-L(x)$
• $\d f=x-a$
• $\d f=L(x)-f(a)$
• $\d f=L(x)-L(a)$

Based on the figure and the expression for $L(x)$, select all the correct expressions for $\d f$.
Hint: $\d f=L(x)-f(a)=f(a)+f'(a)(x-a)-f(a)=f'(a)(x-a)=f'(a)\d x$.

• $\d f=f'(a)(x-a)$
• $\d f=f'(a)\d x$
• $\d f=f'(x-a)$
• $\d f=f'(x)\d x$
• $\d f=f'(x)(x-a)$

So, we can write $\d f=f'(a)\d x$ and call it a differential of $f$ at $a$. Notice that we can define a differential at any point $x$ of the domain of $f$, provided that $f'(x)$ exists. We will do that in our next definition.

We should not be surprised: the slope of the tangent line in the figure is $f'(a)$, and this slope is also given by $\frac {\d f}{\d x}$.

Essentially, differentials allow us to solve the problems presented in the previous examples from a slightly different point of view. Recall, when $h$ is near but not equal to zero,
$$f'(x) \approx \frac{f(x+h)-f(x)}{h}.$$
Hence,
$$f'(x)\cdot h \approx f(x+h)-f(x).$$
We can replace the quantity $h$ with the quantity $\d x$ to write
$$f'(x)\cdot \d x \approx f(x+\d x)-f(x).$$

Adding $f(x)$ to both sides we see
$$f(x+\d x) \approx f(x)+f'(x)\d x$$
or, equivalently,
$$f(x+\d x) \approx f(x)+\d f.$$
There are contexts where the language of differentials is common. Here is the basic strategy:

• Compute the differential $\d f=f'(x)\,\d x$ at a point where $f(x)$ and $f'(x)$ are easy to evaluate.
• Use the approximation $f(x+\d x)\approx f(x)+\d f$.
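This strategy can be sketched numerically as follows (the function $x^3$ and the point $x=2$ are our own choice of illustration):

```python
def differential_estimate(f, fprime, x, dx):
    """Estimate f(x + dx) via the differential: f(x + dx) ≈ f(x) + df."""
    df = fprime(x) * dx   # the differential df = f'(x) dx
    return f(x) + df

# Estimate (2.01)**3 from the easy point x = 2: f(2) = 8, f'(2) = 12,
# so df = 12 * 0.01 = 0.12 and the estimate is 8.12.
approx = differential_estimate(lambda t: t**3, lambda t: 3 * t**2, 2, 0.01)
print(approx)  # ≈ 8.12; the true value is 2.01**3 = 8.120601
```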

We will repeat our previous examples using differentials.

The upshot is that linear approximations and differentials are simply two slightly different ways of doing the exact same thing.

### Error approximation

Differentials also help us estimate error in real life settings.
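For instance (with made-up measurements of our own), suppose the radius of a sphere is measured as $25$ with a possible error of $0.5$. The differential of the volume $V=\tfrac{4}{3}\pi r^3$, namely $\d V=4\pi r^2\,\d r$, estimates the resulting error in the computed volume:

```python
import math

def volume(r):
    """Volume of a sphere of radius r."""
    return (4 / 3) * math.pi * r**3

def volume_error(r, dr):
    """Estimate the error in the volume via the differential dV = 4*pi*r**2 * dr."""
    return 4 * math.pi * r**2 * dr

r, dr = 25.0, 0.5          # measured radius and its possible error (made-up numbers)
dV = volume_error(r, dr)   # estimated error in the computed volume
actual = volume(r + dr) - volume(r)
print(dV, actual)          # the differential is within a few percent of the actual change
```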

### New and old friends

You might be wondering, given a plot $y=f(x)$,

What’s the difference between $\Delta x$ and $\d x$? What about $\Delta y$ and $\d y$?

Regardless, it is now a pressing question. Here's the deal:
$$\frac{\Delta y}{\Delta x}=\frac{f(x+\Delta x)-f(x)}{\Delta x}$$
is the average rate of change of $y=f(x)$ with respect to $x$. On the other hand,
$$\frac{\d y}{\d x}=f'(x)$$
is the instantaneous rate of change of $y=f(x)$ with respect to $x$. Essentially, $\Delta x$ and $\d x$ are the same type of thing: they are (usually small) changes in $x$. However, $\Delta y$ and $\d y$ are very different things.

• $\Delta y=f(x+\Delta x)-f(x)$; it is the change in $y=f(x)$ associated to $\Delta x$.
• $\d y=L(x+\d x)-L(x)$; it is the change in $y=L(x)$ associated to $\Delta x=\d x$. Note: $L(x+\d x)= f(x)+f'(x)\d x$.

So, the change $\Delta y$ in the function is approximated by the change $\d y$ along the tangent line: $\Delta y\approx \d y$ when $\Delta x=\d x$ is small.

Suppose $f(x) = x^2$. If we are at the point $x=1$ and $\Delta x =\d x = 0.1$, what is $\Delta y$? What is $\d y$?
$\Delta y=f(1+\Delta x)-f(1)=f(1.1)-f(1)=1.21-1=0.21$
$\d y=f'(1)\cdot \d x=f'(1)\cdot 0.1=2\cdot 0.1=0.2$
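As a quick sanity check on this example, the two quantities can be computed directly:

```python
f = lambda t: t**2
fprime = lambda t: 2 * t  # derivative of x**2

x, dx = 1, 0.1
delta_y = f(x + dx) - f(x)  # actual change in f: f(1.1) - f(1)
dy = fprime(x) * dx         # change along the tangent line: f'(1) * 0.1
print(delta_y)  # ≈ 0.21
print(dy)       # 0.2
```

As expected, $\d y=0.2$ is close to, but not equal to, $\Delta y=0.21$.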
Differentials can be confusing at first. However, when you master them, you will have a powerful tool at your disposal.